Numerical Values (numerical + value)

Selected Abstracts


A numerical study of natural convection in a vertical cylinder bundle

HEAT TRANSFER - ASIAN RESEARCH (FORMERLY HEAT TRANSFER-JAPANESE RESEARCH), Issue 4 2003
Yuji Isahai
Abstract Natural convection in a bundle of vertical cylinders, arranged in equilateral triangular spacing, has been investigated numerically using a boundary-fitted coordinate system. Numerical calculations for center-to-center distance between cylinders S/D = 1.1 to 1.9, 3.0, 4.0, and 7.0 were made of natural convection of air at modified Grashof numbers Gr* from 10 to 10^8. For uniform wall heat flux, the local Nusselt number Nu takes the same value at all axial locations except in the thermal entrance region. This entrance region, for each cylinder spacing, diminishes with decreasing Grashof number. Numerical values of the local Nusselt number are in relatively good agreement with those obtained from experiments with air. © 2003 Wiley Periodicals, Inc. Heat Trans Asian Res, 32(4): 330–341, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/htj.10095 [source]


Specific interactions in ternary system quaternized polysulfone/mixed solvent

POLYMER ENGINEERING & SCIENCE, Issue 1 2009
Anca Filimon
Theoretical and experimental aspects of the specific interactions developed via electrostatic forces and hydrogen bonding in a ternary system formed of a proton-donor solvent (N,N-dimethylformamide or methanol), a proton-acceptor solvent (water), and a quaternized polysulfone whose ionic chlorine content gives it a proton-acceptor character are investigated. The interactions of the ternary systems are corrected on the basis of the association phenomena defined through association constants. Numerical values for these constants were evaluated as a function of the system composition, by mathematical simulation, for an accurate adjustment of the preferential adsorption determined by applying the Flory–Huggins–Pouchly theoretical approach to the experimental data. POLYM. ENG. SCI., 2009. © 2008 Society of Plastics Engineers [source]


Quantitative Comparison of Approximate Solution Sets for Bi-criteria Optimization Problems

DECISION SCIENCES, Issue 1 2003
W. Matthew Carlyle
ABSTRACT We present the Integrated Preference Functional (IPF) for comparing the quality of proposed sets of near-Pareto-optimal solutions to bi-criteria optimization problems. Evaluating the quality of such solution sets is one of the key issues in developing and comparing heuristics for multiple objective combinatorial optimization problems. The IPF is a set functional that, given a weight density function provided by a decision maker and a discrete set of solutions for a particular problem, assigns a numerical value to that solution set. This value can be used to compare the quality of different sets of solutions, and therefore provides a robust, quantitative approach for comparing different heuristic, a posteriori solution procedures for difficult multiple objective optimization problems. We provide specific examples of decision maker preference functions and illustrate the calculation of the resulting IPF for specific solution sets and a simple family of combined objectives. [source]
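To make the set functional concrete, here is a minimal Python sketch assuming a uniform weight density over a single scalarizing weight w and bi-criteria minimization; the abstract does not give the authors' exact formulation, so the integral below is an illustrative reading, not their definition.

```python
import numpy as np

def ipf(solutions, n_weights=1001):
    """Approximate an integrated-preference value for a set of
    bi-criteria (minimization) solutions under a uniform weight density.
    For each weight w the decision maker picks the solution minimizing
    w*f1 + (1-w)*f2; the functional averages that best value over w."""
    pts = np.asarray(solutions, dtype=float)        # shape (n_solutions, 2)
    ws = np.linspace(0.0, 1.0, n_weights)
    best = np.min(ws[:, None] * pts[:, 0] + (1 - ws)[:, None] * pts[:, 1],
                  axis=1)
    return np.trapz(best, ws)                       # lower is better here

set_a = [(1.0, 9.0), (3.0, 5.0), (8.0, 1.5)]
set_b = [(2.0, 8.0), (5.0, 5.0), (9.0, 2.0)]
print(ipf(set_a), ipf(set_b))   # compare the quality of the two fronts
```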


A minimum sample size required from Schmidt hammer measurements

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 13 2009
Tomasz Niedzielski
Abstract The Schmidt hammer is a useful tool applied by geomorphologists to measure rock strength in field conditions. The essence of field application is to obtain a sufficiently large dataset of individual rebound values, which yields a meaningful numerical value of mean strength. Although there is general agreement that a certain minimum sample size is required to proceed with the statistics, the choice of size (i.e. number of individual impacts) has usually been intuitive and arbitrary. In this paper we show a simple statistical method, based on the two-sample Student's t-test, to objectively estimate the minimum number of rebound measurements. We present the results as (1) the 'mean' and 'median' solutions, each providing a single estimate value, and (2) the empirical probability distribution of such estimates based on many field samples. Schmidt hammer data for 14 lithologies, 13–81 samples for each, with each sample consisting of 40 individual readings, have been evaluated, assuming different significance levels. The principal recommendations are: (1) the recommended minimum sample size for weak and moderately strong rock is 25; (2) a sample size of 15 is sufficient for sandstones and shales; (3) strong and coarse rocks require 30 readings at a site; (4) the minimum sample size may be reduced by one-third if the context of research allows a higher significance level for the test statistics. Interpretations based on fewer than 10 readings from a site should definitely be avoided. Copyright © 2009 John Wiley & Sons, Ltd. [source]
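As an illustration of the two-sample t-test logic, the sketch below (with assumed pilot data and an assumed detectable mean difference) finds the smallest per-sample n that can resolve that difference given the spread of the pilot readings; the paper's own estimator may differ in detail.

```python
import numpy as np
from scipy import stats

def min_sample_size(readings, detectable_diff, alpha=0.05):
    """Smallest n per sample so a two-sample t-test on two groups of n
    rebound readings can resolve a mean difference of detectable_diff,
    using the pilot sample's s.d. as the common spread (a sketch of the
    general idea, not the paper's exact estimator)."""
    s = np.std(readings, ddof=1)
    for n in range(2, 1000):
        t_crit = stats.t.ppf(1 - alpha / 2, df=2 * n - 2)
        if t_crit * s * np.sqrt(2.0 / n) <= detectable_diff:
            return n
    return None

rng = np.random.default_rng(1)
pilot = rng.normal(45.0, 6.0, size=40)   # 40 rebound readings at one site
print(min_sample_size(pilot, detectable_diff=4.0))
```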


The perturbation method and the extended finite element method.

FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 8 2006
An application to fracture mechanics problems
ABSTRACT The extended finite element method has been successful in the numerical simulation of fracture mechanics problems. Unlike the conventional finite element method, this methodology does not require discretization of the domain with a mesh adapted to the geometry of the discontinuity. On the other hand, in traditional fracture mechanics all variables have been considered to be deterministic (uniquely defined by a given numerical value). However, the uncertainty associated with these variables (external loads, geometry and material properties, among others) is well known. This paper presents a novel application of the perturbation method along with the extended finite element method to treat these uncertainties. The methodology has been implemented in commercial software and results are compared with those obtained by means of a Monte Carlo simulation. [source]
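The following toy comparison shows the idea on a closed-form stress intensity factor K = Y·σ·√(πa) instead of an XFEM model: a first-order perturbation estimate of the mean and standard deviation of K against a Monte Carlo reference. All input values are illustrative assumptions.

```python
import numpy as np

# First-order perturbation (mean value, first-order variance) of
# K = Y * sigma * sqrt(pi * a), versus Monte Carlo. A toy stand-in for
# the paper's XFEM model: uncertain inputs are load sigma and crack length a.
Y = 1.12                       # geometry factor (assumed deterministic)
mu_s, sd_s = 100.0, 10.0       # load mean and s.d., MPa
mu_a, sd_a = 0.01, 0.001       # crack length mean and s.d., m

K = lambda s, a: Y * s * np.sqrt(np.pi * a)

# Perturbation: value at the means; variance from first derivatives.
dK_ds = Y * np.sqrt(np.pi * mu_a)
dK_da = Y * mu_s * np.sqrt(np.pi) / (2.0 * np.sqrt(mu_a))
mean_pert = K(mu_s, mu_a)
sd_pert = np.sqrt((dK_ds * sd_s) ** 2 + (dK_da * sd_a) ** 2)

# Monte Carlo reference.
rng = np.random.default_rng(0)
samples = K(rng.normal(mu_s, sd_s, 200_000), rng.normal(mu_a, sd_a, 200_000))
print(mean_pert, sd_pert)
print(samples.mean(), samples.std())
```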


A critique of the World Health Organisation's evaluation of health system performance

HEALTH ECONOMICS, Issue 5 2003
Jeff Richardson
Abstract The World Health Organisation's (WHO) approach to the measurement of health system efficiency is briefly described. Four arguments are then presented. First, equity of finance should not be a criterion for the evaluation of a health system and, more generally, the same objectives and importance weights should not be imposed upon all countries. Secondly, the numerical values of the importance weights do not reflect their true importance in the country rankings. Thirdly, the model for combining the different objectives into a single index of system performance is problematic, and alternative models are shown to alter system rankings. The WHO statistical analysis is replicated and used to support the fourth argument, which is that, contrary to the authors' assertion, their methods cannot separate true inefficiency from random error. The procedure is also subject to omitted variable bias. The econometric model for all countries has very poor predictive power for the subset of OECD countries and is outperformed by two simpler algorithms. Country rankings based upon the model are correspondingly unreliable. It is concluded that, despite these problems, the study is a landmark in the evolution of system evaluation, but one which requires significant revision. Copyright © 2002 John Wiley & Sons, Ltd. [source]


Implementing a pre-operative checklist to increase patient safety: a 1-year follow-up of personnel attitudes

ACTA ANAESTHESIOLOGICA SCANDINAVICA, Issue 2 2010
L. NILSSON
Background: The operating room is a complex work environment with a high potential for adverse events. Protocols for perioperative verification processes have increasingly been recommended by professional organizations during the last few years. We assessed personnel attitudes to a pre-operative checklist ('time out') performed immediately before the start of the operative procedure. Methods: 'Time out' was implemented in December 2007 as an additional safety barrier in two Swedish hospitals. One year later, in order to assess how the checklist was perceived, a questionnaire was sent by e-mail to 704 persons in the operating departments, including surgeons, anesthesiologists, operating and anesthetic nurses and nurse assistants. In order to identify differences in response between professions, each alternative in the questionnaire was assigned a numerical value. Results: The questionnaire was answered by 331 (47%) persons, and 93% responded that 'time out' contributes to increased patient safety. Eighty-six percent thought that 'time out' gave an opportunity to identify and solve problems. Confirmation of patient identity, correct procedure, correct side and checking of allergies or contagious diseases were considered 'very important' by 78–84% of the responders. Attitudes to checking of patient positioning, allergies and review of potential critical moments were positive but differed significantly between the professions. Attitudes to a similar checklist at the end of surgery were positive and 72–99% agreed to the different elements. Conclusion: Staff attitudes toward a surgical checklist were mostly positive 1 year after its introduction in two large hospitals in central Sweden. [source]


Stability of recombinant plasmids on the continuous culture of Bifidobacterium animalis ATCC 27536

BIOTECHNOLOGY & BIOENGINEERING, Issue 2 2003
Antonio González Vara
Abstract Bifidobacterium animalis ATCC 27536 represents, among bifidobacteria, a host model for cloning experiments. The segregational and structural stabilities of a family of cloning vectors with different molecular weights but sharing a common core were studied in continuous fermentation of the hosting B. animalis without selective pressure. The rate of plasmid loss (R) and the specific growth rate difference (Δμ) between plasmid-free and plasmid-carrying cells were calculated for each plasmid, and their relationship with plasmid size was studied. It was observed that both R and the numerical value of Δμ increased exponentially with plasmid size. The exponential functions correlating the specific growth rate difference and the rate of plasmid loss with the plasmid molecular weight were determined. Furthermore, the smallest of the plasmids studied, pLAV (4.3 kb), was thoroughly characterized by means of its complete nucleotide sequence. It was found to contain an extra DNA fragment, the first bifidobacterial insertion sequence characterised, named IS1999. © 2003 Wiley Periodicals, Inc. Biotechnol Bioeng 84: 145–150, 2003. [source]


Exercise test mode dependency for ventilatory efficiency in women but not men

CLINICAL PHYSIOLOGY AND FUNCTIONAL IMAGING, Issue 2 2006
James A. Davis
Summary Ventilatory efficiency is commonly defined as the level of ventilation (V̇E) at a given carbon dioxide output (V̇CO2). The slope of the V̇E versus V̇CO2 relationship and the lowest V̇E/V̇CO2 are two ventilatory efficiency indices that can be measured during cardiopulmonary exercise testing (CPET). A possible CPET mode dependency for these indices was evaluated in healthy men and women. Also evaluated was the relationship between these two indices as, in theory, V̇E/V̇CO2 falls hyperbolically towards an asymptote that numerically equals the V̇E versus V̇CO2 slope at exercise levels below the ones that cause respiratory compensation for metabolic acidosis. Twenty-eight healthy subjects (14 men) underwent treadmill and cycle ergometer CPET on different days. Ventilation and the gas fractions for oxygen and CO2 were measured with a Vacumed metabolic cart. In men, paired t-test analysis failed to find a mode difference for either ventilatory efficiency index, but the opposite was true in the women, as each woman had higher values for both indices on the treadmill. For men, the lowest V̇E/V̇CO2 was larger than the V̇E versus V̇CO2 slope by 1.3 on the treadmill and 0.8 on the cycle ergometer. The corresponding values for women were 1.7 and 1.4. We conclude that in healthy subjects, women, but not men, demonstrate a mode dependency for the two ventilatory efficiency indices investigated in this study. Furthermore, our results are consistent with the theoretical expectation that the lowest V̇E/V̇CO2 has a numerical value just above the asymptote of the V̇E/V̇CO2 versus V̇CO2 relationship. [source]
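Both indices are simple to compute from exercise data; the sketch below uses synthetic sub-respiratory-compensation data to show that the lowest V̇E/V̇CO2 sits just above the V̇E-on-V̇CO2 slope, as the abstract's theory predicts. The data and function names are assumptions, not the study's measurements.

```python
import numpy as np

def ventilatory_efficiency(ve, vco2):
    """Two common indices from exercise data (a sketch; units L/min):
    the least-squares slope of VE on VCO2 and the lowest VE/VCO2 ratio."""
    slope, intercept = np.polyfit(vco2, ve, 1)
    lowest_ratio = np.min(np.asarray(ve) / np.asarray(vco2))
    return slope, lowest_ratio

# Synthetic sub-compensation data: VE = 26*VCO2 + 3 + noise, so the
# ratio falls hyperbolically toward the slope as VCO2 rises.
rng = np.random.default_rng(2)
vco2 = np.linspace(0.8, 2.5, 60)
ve = 26.0 * vco2 + 3.0 + rng.normal(0, 0.8, vco2.size)
slope, lowest = ventilatory_efficiency(ve, vco2)
print(slope, lowest, lowest - slope)  # lowest ratio lands just above slope
```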


Defining and measuring braiding intensity

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 14 2008
Roey Egozi
Abstract Geomorphological studies of braided rivers still lack a consistent measurement of the complexity of the braided pattern. Several simple indices have been proposed and two (channel count and total sinuosity) are the most commonly applied. For none of these indices has there been an assessment of the sampling requirements and there has been no systematic study of the equivalence of the indices to each other and their sensitivity to river stage. Resolution of these issues is essential for progress in studies of braided morphology and dynamics at the scale of the channel network. A series of experiments was run using small-scale physical models of braided rivers in a 3 m × 20 m flume. Sampling criteria for braid indices and their comparability were assessed using constant-discharge experiments. Sample hydrographs were run to assess the effect of flow variability. Reach lengths of at least 10 times the average wetted width are needed to measure braid indices with precision of the order of 20% of the mean. Inherent variability in channel pattern makes it difficult to achieve greater precision. Channel count indices need a minimum of 10 cross-sections spaced no further apart than the average wetted width of the river. Several of the braid indices, including total sinuosity, give very similar numerical values but they differ substantially from channel-count index values. Consequently, functional relationships between channel pattern and, for example, discharge, are sensitive to the choice of braid index. Braid indices are sensitive to river stage and the highest values typically occur below peak flows of a diurnal (melt-water) hydrograph in pro-glacial rivers. There is no general relationship with stage that would allow data from rivers at different relative stage to be compared. At present, channel count indices give the best combination of rapid measurement, precision, and range of sources from which measurements can be reliably made. They can also be related directly to bar theory for braided pattern development. Copyright © 2008 John Wiley & Sons, Ltd. [source]
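A channel-count braid index reduces to simple statistics over cross-sections; the sketch below computes the index and its relative precision from an assumed set of counts, following the abstract's sampling guidance (at least 10 sections, spacing no more than the mean wetted width).

```python
import numpy as np

def channel_count_index(channel_counts):
    """Channel-count braid index: mean number of wetted channels per
    cross-section, plus its relative standard error (a sketch of the
    sampling logic described in the abstract)."""
    counts = np.asarray(channel_counts, dtype=float)
    mean = counts.mean()
    se = counts.std(ddof=1) / np.sqrt(counts.size)
    return mean, se / mean            # index and relative precision

counts = [4, 5, 3, 6, 4, 5, 5, 4, 6, 3, 5, 4]   # 12 cross-sections (assumed)
index, rel_err = channel_count_index(counts)
print(round(index, 2), f"{rel_err:.1%}")
```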


A study of uniform approximation of equivalent permittivity for index-modulated gratings

ELECTRONICS & COMMUNICATIONS IN JAPAN, Issue 11 2008
Shota Sugano
Abstract It is well known that surface-relief dielectric gratings with a rectangular profile can be treated by a uniform approximation of the equivalent permittivity when the periodicity is very small compared with the wavelength. In optics, this phenomenon is known as the equivalent anisotropic effect, or form birefringence. When the periodicity is very small, the same equivalent anisotropic effects appear in index-modulated gratings. In this paper, the uniform approximation is described for the electromagnetic scattering problem of index-modulated gratings. The scattering properties of dielectric slabs are calculated by transmission-line theory and the equivalent permittivity obtained from our proposed formulation of the uniform approximation. Scattering by index-modulated gratings is analyzed rigorously by matrix eigenvalue calculations using the Fourier expansion method and spatial harmonics expansions. When the periodicity is very small, the results are in good agreement. By investigating the difference between the equivalent permittivity and the numerical values corresponding to the permittivity of the index-modulated gratings, the conditions of applicability of the uniform approximation are shown. The equivalent anisotropic effects of various profiles are compared. © 2009 Wiley Periodicals, Inc. Electron Comm Jpn, 91(11): 28–36, 2008; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecj.10181 [source]
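For orientation, the classical zeroth-order effective-medium (form-birefringence) limit for a binary grating is easy to state; the sketch below implements those standard textbook expressions, which are not necessarily the equivalent-permittivity formulation the authors propose for index-modulated profiles.

```python
import numpy as np

def effective_permittivity(eps1, eps2, fill):
    """Zeroth-order effective-medium limit for a binary grating of two
    permittivities with duty cycle `fill` (standard form-birefringence
    result for period << wavelength; the paper's own formulation for
    index-modulated profiles may differ)."""
    eps_te = fill * eps1 + (1 - fill) * eps2            # E parallel to grooves
    eps_tm = 1.0 / (fill / eps1 + (1 - fill) / eps2)    # E perpendicular
    return eps_te, eps_tm

te, tm = effective_permittivity(eps1=2.25, eps2=1.0, fill=0.5)
print(te, tm, np.sqrt(te) - np.sqrt(tm))  # birefringence of the equivalent slab
```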


Evaluation of water quality using acceptance sampling by variables

ENVIRONMETRICS, Issue 4 2003
Eric P. Smith
Abstract Under section 303(d) of the Clean Water Act, states must identify water segments where loads of pollutants are violating numeric water quality standards. The consequences of misidentification are quite important. A decision that water quality is impaired initiates the total maximum daily load, or TMDL, planning requirement. Falsely concluding that a water segment is impaired results in unnecessary TMDL planning and pollution control implementation costs. On the other hand, falsely concluding that a segment is not impaired may pose a risk to human health or to the services of the aquatic environment. Because of these consequences, a method is desired that minimizes or controls the error rates. The most commonly applied approach is the Environmental Protection Agency (EPA)'s raw score approach, in which a stream segment is listed as impaired when greater than 10 per cent of the measurements of water quality conditions exceed a numeric criterion. An alternative to the EPA approach is the binomial test that the proportion exceeding the standard is 0.10 or less. This approach uses the number of samples exceeding the criterion as a test statistic, along with the binomial distribution, for evaluation and estimation of error rates. Both approaches treat measurements as binary; the values either exceed or do not exceed the standard. An alternative approach is to use the actual numerical values to evaluate the standard. This method is referred to as variables acceptance sampling in the quality control literature. The methods are compared on the basis of error rates. If certain assumptions are met, then the variables acceptance method is superior in the sense that it requires smaller sample sizes to achieve the same error rates as the raw score method or the binomial method. Issues associated with potential problems with environmental measurements, and adjustments for their effects, are discussed. Copyright © 2003 John Wiley & Sons, Ltd. [source]
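The three listing rules can be sketched side by side. Below, the raw-score rule, a one-sided binomial test, and a normal-theory variables (tolerance-limit) rule are applied to the same synthetic sample; the tolerance-factor construction is the standard one, offered as an assumption about how a variables plan would be set up, not as the paper's exact procedure.

```python
import numpy as np
from scipy import stats

def raw_score_impaired(values, criterion):
    """EPA raw-score rule: impaired if more than 10% of the
    measurements exceed the numeric criterion."""
    return np.mean(np.asarray(values) > criterion) > 0.10

def binomial_impaired(values, criterion, alpha=0.05):
    """One-sided binomial test of H0: exceedance proportion <= 0.10."""
    values = np.asarray(values)
    k = int((values > criterion).sum())
    return stats.binomtest(k, values.size, 0.10,
                           alternative="greater").pvalue < alpha

def variables_impaired(values, criterion, alpha=0.05, p0=0.10):
    """Acceptance sampling by variables under a normality assumption:
    impaired unless the upper confidence bound on the (1 - p0) quantile,
    xbar + k*s, sits below the criterion (standard tolerance factor k)."""
    x = np.asarray(values, dtype=float)
    n = x.size
    k = stats.nct.ppf(1 - alpha, df=n - 1,
                      nc=stats.norm.ppf(1 - p0) * np.sqrt(n)) / np.sqrt(n)
    return x.mean() + k * x.std(ddof=1) > criterion

rng = np.random.default_rng(3)
sample = rng.normal(4.0, 1.2, size=30)   # synthetic concentration data
print(raw_score_impaired(sample, 6.0), binomial_impaired(sample, 6.0),
      variables_impaired(sample, 6.0))
```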


Changes in the Ability to Detect Ordinal Numerical Relationships Between 9 and 11 Months of Age

INFANCY, Issue 4 2008
Sumarga H. Suanda
When are the precursors of ordinal numerical knowledge first evident in infancy? Brannon (2002) argued that by 11 months of age, infants possess the ability to appreciate the greater than and less than relations between numerical values but that this ability experiences a sudden onset between 9 and 11 months of age. Here we present 5 experiments that explore the changes that take place between 9 and 11 months of age in infants' ability to detect reversals in the ordinal direction of a sequence of arrays. In Experiment 1, we replicate the finding that 11- but not 9-month-old infants detect a numerical ordinal reversal. In Experiment 2 we rule out an alternative hypothesis that 11-month-old infants attended to changes in the absolute numerosity of the first stimulus in the sequence rather than a reversal in ordinal direction. In Experiment 3, we demonstrate that 9-month-old infants are not aided by additional exposure to each numerosity stimulus in a sequence. In Experiment 4 we find that 11-month-old but not 9-month-old infants succeed at detecting the reversal in a nonnumerical size or area-based rule, casting doubt on Brannon's prior claim that what develops between 9 and 11 months of age is a specifically numerical ability. In Experiment 5 we demonstrate that 9-month-old infants are capable of detecting a reversal in ordinal direction but only when there are multiple converging cues to ordinality. Collectively these data indicate that at 11 months of age infants can represent ordinal relations that are based on number, size, or cumulative area, whereas at 9 months of age infants are unable to use any of these dimensions in isolation but instead require a confluence of cues. [source]


Incorporating linguistic, probabilistic, and possibilistic information in a risk-based approach for ranking contaminated sites

INTEGRATED ENVIRONMENTAL ASSESSMENT AND MANAGEMENT, Issue 4 2010
Kejiang Zhang
Abstract Different types of uncertain information (linguistic, probabilistic, and possibilistic) exist in site characterization. Their representation and propagation significantly influence the management of contaminated sites. In the absence of a framework with which to properly represent and integrate these quantitative and qualitative inputs together, decision makers cannot fully take advantage of the available and necessary information to identify all the plausible alternatives. A systematic methodology was developed in the present work to incorporate linguistic, probabilistic, and possibilistic information into the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), a subgroup of Multi-Criteria Decision Analysis (MCDA) methods for ranking contaminated sites. The identification of criteria based on the paradigm of comparative risk assessment provides a rationale for risk-based prioritization. Uncertain linguistic, probabilistic, and possibilistic information identified in characterizing contaminated sites can be properly represented as numerical values, intervals, probability distributions, fuzzy sets or possibility distributions, and linguistic variables, according to its nature. These different kinds of representation are first transformed into a 2-tuple linguistic representation domain. The propagation of hybrid uncertainties is then carried out in the same domain. This methodology can use the original site information directly as much as possible. The case study shows that this systematic methodology provides more reasonable results. Integr Environ Assess Manag 2010;6:711–724. © 2010 SETAC [source]


Dynamics of energy storage in phase change drywall systems

INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 4 2005
K. Darkwa
Abstract Experimental evaluations of manufactured samples of laminated and randomly mixed phase change material (PCM) drywalls have been carried out and compared with numerical results. The analysis showed that the laminated PCM drywall performed better thermally. Even though there was a maximum 3% deviation of the average experimental result from the numerical values, the laminated PCM board achieved about 55% of the phase change process as against 48% for the randomly distributed drywall sample. The laminated board sample also released about 27% more latent heat than the randomly distributed type at the optimum time of 90 min, thus validating a previous simulation study. Given the experimental conditions and assumptions, the experiment has proved that it is feasible to develop the laminated PCM technique for enhancing and minimising multi-dimensional heat transfers in drywall systems. Further practical developments are, however, encouraged to improve the overall level of heat transfer. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Determination of Weibull parameters for wind energy analysis of İzmir, Turkey

INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 6 2002
K. Ulgen
Abstract In this study, the two Weibull parameters of the wind speed distribution function, the shape parameter k (dimensionless) and the scale parameter c (m s−1), were computed from the wind speed data for İzmir. The wind data, consisting of hourly wind speed records over a 5-year period, 1995–1999, were measured at the Solar/Wind-Meteorological Station of the Solar Energy Institute at Ege University. Based on the experimental data, it was found that the numerical values of both Weibull parameters (k and c) for İzmir vary over a wide range. The yearly values of k range from 1.378 to 1.634 with a mean value of 1.552, while those of c are in the range of 2.956–3.444 with a mean value of 3.222. The average seasonal Weibull distributions for İzmir are also given. The wind speed distributions are represented by the Weibull distribution and also by the Rayleigh distribution, which is a special case of the Weibull distribution with k = 2. As a result, the Weibull distribution is found to be suitable to represent the actual probability of wind speed data for İzmir (at annual average wind speeds up to 3 m s−1). Copyright © 2002 John Wiley & Sons, Ltd. [source]
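A common way to obtain k and c is maximum-likelihood fitting of the two-parameter Weibull with the location fixed at zero; the sketch below does this on synthetic hourly speeds and also shows the Rayleigh special case (k = 2). The fitting route is an assumption; the paper may use moment or graphical methods.

```python
import numpy as np
from scipy import stats

# Fit the two-parameter Weibull (shape k, scale c) to hourly wind speeds
# by maximum likelihood, with location fixed at zero -- one standard way
# to obtain the k and c values discussed in the abstract.
rng = np.random.default_rng(4)
speeds = stats.weibull_min.rvs(1.55, scale=3.2, size=8760, random_state=rng)

k, loc, c = stats.weibull_min.fit(speeds, floc=0)
print(f"k = {k:.3f}, c = {c:.3f} m/s")

# Rayleigh distribution: the special case k = 2; for k = 2 the mean
# speed equals c*sqrt(pi)/2, so the scale implied by the mean is:
rayleigh_c = 2 * speeds.mean() / np.sqrt(np.pi)
print(f"Rayleigh scale from mean speed: {rayleigh_c:.3f} m/s")
```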


Decreasing myelin density reflected increasing white matter pathology in Alzheimer's disease,a neuropathological study

INTERNATIONAL JOURNAL OF GERIATRIC PSYCHIATRY, Issue 10 2005
Martin Sjöbeck
Abstract Background White matter disease (WMD) is frequently seen in Alzheimer's disease (AD) at neuropathological examination. It is defined as a subtotal tissue loss with a reduction of myelin, axons and oligodendrocytes, as well as astrocytosis. Studies quantitatively defining the myelin loss in AD are scarce. The aim was to develop a method that could provide numerical values of myelin density in AD. The purpose was to compare the myelin content in increasing grades of WMD pathology, with age and cortical AD pathology, as well as in different regions of the brain in AD. Material and methods Sixteen cases with AD and concomitant WMD were investigated with an in-house developed image analysis technique to determine the myelin attenuation with optical density (OD) in frontoparietal, parietal, temporal and occipital white matter on whole-brain coronal sections stained for myelin with Luxol Fast Blue (LFB). The OD values in LFB were compared between groups according to Haematoxylin/Eosin (HE)-evaluated mild, moderate and severe WMD or normal tissue. The OD values were also correlated with age and cortical AD pathology, and compared between the different white matter regions studied. Results Increasing severity of WMD was associated with a statistically significant OD reduction. No correlation was seen between age and OD or between overall cortical AD pathology and OD. The OD values were significantly lower in frontoparietal compared to occipital white matter. Conclusions Myelin loss in AD with WMD is a marked morphologic component of the disease, and it is possible to determine the reduction objectively in neuropathological specimens with quantitative measures. This may be of use for clinical diagnostics, including brain imaging. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Kinetic model for noncatalytic supercritical water gasification of cellulose and lignin

AICHE JOURNAL, Issue 9 2010
Fernando L. P. Resende
Abstract This article reports the first kinetics model for Supercritical Water Gasification (SCWG) that describes the formation and interconversion of individual gaseous species. The model comprises 11 reactions, and it uses a lumping scheme to handle the large number of intermediate compounds. We determined numerical values for the rate constants in the model by fitting it to experimental data previously reported for SCWG of cellulose and lignin. We validated the model by showing that it accurately predicts gas yields at biomass loadings and water densities not used in the parameter estimation. Sensitivity analysis and reaction rate analysis indicate that steam-reforming and water–gas shift are the main sources of H2 in SCWG, and intermediate species are the main sources of CO, CO2, and CH4. © 2010 American Institute of Chemical Engineers AIChE J, 2010 [source]


New approximate formula for the generalized temperature integral

AICHE JOURNAL, Issue 7 2009
Haixiang Chen
Abstract The generalized temperature integral frequently occurs in nonisothermal kinetic analysis. This article proposes a new approximate formula for the generalized temperature integral. For commonly used values of m in kinetic analysis, the deviation of the new approximation from the numerical values of the integral is within 0.4%. More importantly, the new formula represents an exponential approximation not found earlier, and it can result in a new and very accurate integral method in kinetic analysis. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]
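The abstract's own formula does not survive in this listing, but the reference quantity is well defined: the generalized temperature integral I(m, E, T) = ∫ t^m exp(−E/(R t)) dt taken from 0 to T. The sketch below evaluates it numerically, which is the benchmark any approximate formula of this kind is judged against.

```python
import numpy as np
from scipy.integrate import quad

R = 8.314  # gas constant, J mol^-1 K^-1

def generalized_temperature_integral(m, E, T):
    """Numerically evaluate I(m, E, T) = int_0^T t^m exp(-E/(R t)) dt,
    the generalized temperature integral of nonisothermal kinetics,
    used here as the reference an approximation would be tested against."""
    val, _ = quad(lambda t: t**m * np.exp(-E / (R * t)), 0.0, T)
    return val

# Example: m = 0.5, activation energy 150 kJ/mol, up to 800 K.
print(generalized_temperature_integral(m=0.5, E=150e3, T=800.0))
```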


Prediction of gas chromatographic retention times of capillary columns of different inside diameters

JOURNAL OF SEPARATION SCIENCE, JSS, Issue 18 2003
Kornkanok Aryusuk
Abstract The retention times (tR) of n-alkanes (C16–C22) eluted from capillary columns of different diameters are accurately predicted by using the equation proposed by Krisnangkura et al. (J. Chromatogr. Sci. 1997, 35, 329–332). The numerical values of the four thermodynamically related constants (a, b, c, and d) of the BP-1 (100% dimethylpolysiloxane) capillary column (0.32 mm ID × 25 m, film thickness 0.25 µm) are −6.169, −0.512, 226.98, and 410.30, respectively. For columns of the same stationary phase but of different inside diameters and film thicknesses, accurate tR values can be predicted by using the same numerical values of the last three constants, but the first constant (a) is changed by the difference in the natural logarithm of the column phase ratios (β). All the derived numerical values of each column were tested with FAMEs and with n-alkanes in temperature-programmed GC (TPGC). All the predicted tR values agree well with the experimental values. About 77% of the TPGC data have errors lower than 0.5%, and the largest error is −1.04%. [source]
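A sketch of how such a prediction works, assuming the Krisnangkura-type form ln k = a + b·z + (c + d·z)/T (z the carbon number, T in kelvin) with the four constants quoted in the abstract, and the phase-ratio shift of a described there; the exact functional form, the β values, and the holdup time below are illustrative assumptions.

```python
import numpy as np

def ln_k(z, T, a=-6.169, b=-0.512, c=226.98, d=410.30):
    """Isothermal retention factor from the four constants quoted in the
    abstract, in the assumed Krisnangkura-type form
    ln k = a + b*z + (c + d*z)/T, with z the n-alkane carbon number."""
    return a + b * z + (c + d * z) / T

def retention_time(z, T, t_m, beta_ref=250.0, beta_new=None):
    """t_R = t_M * (1 + k); for a column with a different phase ratio,
    the constant a shifts by ln(beta_ref/beta_new), as the abstract
    describes (the beta values here are illustrative)."""
    shift = 0.0 if beta_new is None else np.log(beta_ref / beta_new)
    return t_m * (1.0 + np.exp(ln_k(z, T) + shift))

print(retention_time(z=18, T=473.15, t_m=1.2))                 # C18, 200 C
print(retention_time(z=18, T=473.15, t_m=1.2, beta_new=125.0)) # thicker film
```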


Statistical determination of diagnostic species for site groups of unequal size

JOURNAL OF VEGETATION SCIENCE, Issue 6 2006
Lubomír Tichy
Abstract Aim: Concentration of species occurrences in groups of classified sites can be quantified with statistical measures of fidelity, which can be used for the determination of diagnostic species. However, for most available measures fidelity depends on the number of sites within individual groups. As the classified data sets typically contain site groups of unequal size, such measures do not enable a comparison of numerical fidelity values of species between different site groups. We therefore propose a new method of measuring fidelity with presence/absence data after equalization of the size of the site groups. We compare the properties of this new method with other measures of statistical fidelity, in particular with the Dufrêne-Legendre Indicator Value (IndVal) index. Methods: The size of site groups in the data set is equalized, while relative frequencies of species occurrence within and outside of these groups are kept constant. Then fidelity is calculated using the phi coefficient of association. Results: Fidelity values after equalization are independent of site group size, but their numerical values vary independently of the statistical significance of fidelity. By changing the size of the target site group relative to the size of the entire data set, the fidelity measure can be made more sensitive to either common or rare species. We show that there are two modifications of the IndVal index for presence/absence data, one of which is also independent of the size of site groups. Conclusion: The phi coefficient applied to site groups of equalized size has advantages over other statistical measures of fidelity based on presence/absence data. Its properties are close to an intuitive understanding of fidelity and diagnostic species in vegetation science. Statistical significance can be checked by calculation of another fidelity measure that is a function of statistical significance, or by direct calculation of the probability of observed species concentrations by Fisher's exact test. An advantage of the new method over IndVal is its ability to distinguish between positive and negative fidelity. One can also weight the relative importance of common and rare species by changing the equalized size of the site groups. [source]
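After equalization, only the species' relative frequencies inside and outside the target group matter, and the phi coefficient takes a simple closed form; the sketch below illustrates this (the algebraic reduction is my own, derived from the standard 2×2 phi under equalized group sizes, not code from the paper).

```python
import numpy as np

def equalized_phi(n_in, size_in, n_out, size_out):
    """Phi coefficient of fidelity after equalizing the target group and
    the remainder to the same size: only the species' relative frequency
    inside (f_in) and outside (f_out) the group matters, so the result is
    independent of the original group sizes."""
    f_in, f_out = n_in / size_in, n_out / size_out
    return (f_in - f_out) / np.sqrt((f_in + f_out) * (2.0 - f_in - f_out))

# Same relative frequencies, very different group sizes -> same fidelity.
print(equalized_phi(n_in=30, size_in=40, n_out=20, size_out=200))
print(equalized_phi(n_in=15, size_in=20, n_out=100, size_out=1000))
```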


Principles of pharmacodynamics and their applications in veterinary pharmacology

JOURNAL OF VETERINARY PHARMACOLOGY & THERAPEUTICS, Issue 6 2004
P. LEES
Pharmacodynamics (PDs) is the science of drug action on the body or on microorganisms and other parasites within or on the body. It may be studied at many organizational levels (sub-molecular, molecular, cellular, tissue/organ and whole body) using in vivo, ex vivo and in vitro methods and utilizing a wide range of techniques. A few drugs owe their PD properties to some physico-chemical property or action and, in such cases, detailed molecular drug structure plays little or no role in the response elicited. For the great majority of drugs, however, action on the body is crucially dependent on chemical structure, so that a very small change, e.g. substitution of a proton by a methyl group, can markedly alter the potency of the drug, even to the point of loss of activity. In the late 19th century and first half of the 20th century, recognition of these facts by Langley, Ehrlich, Dale, Clarke and others provided the foundation for the receptor site hypothesis of drug action. According to these early ideas the drug, in order to elicit its effect, had first to combine with a specific 'target molecule' on either the cell surface or an intracellular organelle. It was soon realized that the 'right' chemical structure was required for drug–target site interaction (and the subsequent pharmacological response). From this requirement for specificity of chemical structure developed not only the modern science of pharmacology but also that of toxicology. In relation to drug actions on microbes and parasites, for example, the early work of Ehrlich led to the introduction of molecules selectively toxic for them and relatively safe for the animal host. In the whole animal, drugs may act on many target molecules in many tissues. These actions may lead to primary responses which, in turn, may induce secondary responses that may either enhance or diminish the primary response. Therefore, it is common to investigate drug pharmacodynamics in the first instance at molecular, cellular and tissue levels in vitro, so that the primary effects can be better understood without interference from the complexities involved in whole animal studies. When a drug, hormone or neurotransmitter combines with a target molecule, it is described as a ligand. Ligands are classified into two groups, agonists (which initiate a chain of reactions leading, usually via the release or formation of secondary messengers, to the response) and antagonists (which fail to initiate the transduction pathways but nevertheless compete with agonists for occupancy of receptor sites and thereby inhibit their actions). The parameters which characterize drug–receptor interaction are affinity, efficacy, potency and sensitivity, each of which can be elucidated quantitatively for a particular drug acting on a particular receptor in a particular tissue. The most fundamental objective of PDs is to use the derived numerical values for these parameters to classify and sub-classify receptors and to compare and classify drugs on the basis of their affinity, efficacy, potency and sensitivity. This review introduces and summarizes the principles of PDs and illustrates them with examples drawn from both basic and veterinary pharmacology. Drugs acting on adrenoceptors, together with cardiovascular, non-steroidal anti-inflammatory and antimicrobial drugs, are considered briefly to provide a foundation for subsequent reviews in this issue which deal with pharmacokinetic (PK)–PD modelling and integration of these drug classes.
Drug action on receptors has many features in common with enzyme kinetics and gas adsorption onto surfaces, as defined by the Michaelis–Menten and Langmuir adsorption equations, respectively. These and other derived equations are outlined in this review. There is, however, no single theory which adequately explains all aspects of drug–receptor interaction. The early 'occupation' and 'rate' theories each explain some, but not all, experimental observations. From these basic theories the operational model and the two-state theory have been developed. For a discussion of more advanced theories see Kenakin (1997). [source]
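The parameters discussed above (efficacy, potency, sensitivity) are conventionally summarized by the sigmoid Emax (Hill) model; a minimal sketch follows, offered as a standard illustration rather than anything specific to this review.

```python
import numpy as np

def sigmoid_emax(conc, emax, ec50, n):
    """Sigmoid Emax (Hill) model: efficacy ~ Emax, potency ~ EC50,
    sensitivity ~ the slope factor n (a standard PD summary, sketched
    here to illustrate the parameters discussed in the review)."""
    conc = np.asarray(conc, dtype=float)
    return emax * conc**n / (ec50**n + conc**n)

conc = np.logspace(-2, 2, 9)                # concentrations, arbitrary units
print(sigmoid_emax(conc, emax=100.0, ec50=1.0, n=1.5).round(1))
print(sigmoid_emax(1.0, emax=100.0, ec50=1.0, n=1.5))  # half-maximal at EC50
```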


Free Rotational Diffusion of Rigid Particles with Arbitrary Surface Topography: A Brownian Dynamics Study Using Eulerian Angles

MACROMOLECULAR THEORY AND SIMULATIONS, Issue 2-3 2008
Tom Richard Evensen
Abstract Rotational diffusion of rigid bodies is an important topic that has attracted sustained interest for many decades, but most existing studies are limited to particles with simple symmetries. Here, we present a simple Brownian dynamics algorithm that can be used to study the free rotational diffusion of rigid particles with arbitrary surface topography. The main difference between the new algorithm and previous algorithms is how the numerical values of the mobility tensor are calculated. The only parameters in the numerical algorithm that depend on particle shape are the principal values of the particle rotational mobility tensor. These three scalars contain all information about the surface topography that is relevant for the particle rotational diffusion. Because these principal values only need to be pre-calculated once, the resulting general algorithm is highly efficient. The algorithm is valid for arbitrary mass density distribution throughout the rigid body. In this paper, we use Eulerian angles as the generalized coordinates describing the particle angular orientation. [source]


The prescribed duration algorithm: utilising 'free text' from multiple primary care electronic systems

PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, Issue 9 2010
Caroline J. Brooks
Abstract Purpose To develop and test an algorithm that translates total dose and daily regimen, inputted as 'free text' on a prescription, into numerical values to calculate the prescribed treatment duration. Method The algorithm was developed using antibiotic prescriptions (n = 711,714) from multiple primary care computer systems. For validation, the prescribed treatment duration of an independent sample of antibiotic scripts was calculated in two ways: (a) by the computer algorithm, and (b) by manual review by a researcher blinded to the results of (a). The outputs of the two methods were compared and the level of agreement assessed, using confidence intervals for differences in proportions. This was repeated on a sample of antidepressant scripts to test the generalisability of the algorithm. Results For the antibiotic prescriptions, the algorithm processed 98.5% with an accuracy of 99.8%, and the manual review processed 98.5% with 98.9% accuracy. The differences between these proportions are 0.0% (95% CI: −0.9 to 0.9%) and 1.0% (95% CI: −0.1 to 2.3%), respectively. For the antidepressant prescriptions, the algorithm processed 91.5% with an accuracy of 96.6%, compared to the manual review with 96.4% processed and 99.8% accuracy; the differences between these proportions are 4.9% (95% CI: 2.0 to 8.0%) and 3.2% (95% CI: 1.6 to 5.3%), respectively. Conclusion The algorithm proved to be applicable and efficient for assessing prescribed duration, with sensitivity and specificity values close to the manual review, but with the added advantage that the computer can process a large volume of scripts rapidly and automatically. Copyright © 2010 John Wiley & Sons, Ltd. [source]
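The core arithmetic of such an algorithm is the total quantity divided by the daily dose; the toy parser below shows that step on two assumed free-text examples. The published algorithm handles far more regimen variants; the pattern table here is purely illustrative.

```python
import re

def prescribed_duration_days(quantity_text, regimen_text):
    """Toy version of the duration calculation: extract the total number
    of dose units and the number taken per day from free text, then
    duration = quantity / (units per dose * doses per day). The patterns
    are illustrative, not the published algorithm's rule set."""
    qty_match = re.search(r"(\d+)", quantity_text)
    if qty_match is None:
        return None
    regimen = regimen_text.lower()
    dose = re.search(r"(\d+)", regimen)
    per_dose = int(dose.group(1)) if dose else 1
    frequencies = {"once": 1, "twice": 2, "three times": 3, "four times": 4,
                   "bd": 2, "tds": 3, "qds": 4}
    per_day = next((v for k, v in frequencies.items() if k in regimen), 1)
    return int(qty_match.group(1)) / (per_dose * per_day)

print(prescribed_duration_days("28 capsules", "take 1 capsule twice daily"))  # 14.0
print(prescribed_duration_days("56 tablets", "2 tablets qds"))                # 7.0
```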


In vitro quantification of barley reaction to leaf stripe

PLANT BREEDING, Issue 5 2003
M. I. E. Arabi
Abstract An in vitro technique was used to quantify the infection level of leaf stripe in barley caused by Pyrenophora graminea. This pathogen penetrates rapidly through the subcrown internodes during seed germination of susceptible cultivars. Quantification was based on the percentage of pieces of subcrown internodes that produced fungal hyphae when cultured on potato dextrose agar media. The disease severity was evaluated among five cultivars with different infection levels, and numerical values for each cultivar were obtained. A significant correlation (r = 0.91, P < 0.02) was found between the in vitro and field assessments. In addition, the results were highly correlated (r = 0.94, P < 0.01) among the different in vitro experiments, indicating that this testing procedure is reliable. The method presented facilitates a rapid preselection under uniform conditions, which is of importance from a breeder's point of view. Significant differences (P < 0.001) were found for the length of subcrown internodes between plants inoculated and not inoculated with leaf stripe. Isolate SY3 was the most effective in reducing the subcrown internode length for all genotypes. [source]


In vitro quantification of the reaction of barley to common root rot

PLANT BREEDING, Issue 5 2001
M. I. Arabi
Abstract An in vitro technique was used to quantify the infection level of common root rot. This disease produces a brown to black discoloration of the subcrown internodes of barley. Quantification was based on the percentage of germinated infected pieces (1.5 mm) of subcrown internodes cultured on potato dextrose agar media. The disease severity was apparent among four different visually classified categories, and numerical values were assigned to each category. The results were highly correlated (r = 0.97, P < 0.01) among the different in vitro experiments, indicating that this testing procedure is repeatable. Highly significant differences (P < 0.001) were found for the length of the first leaf and fresh weight between plants inoculated and uninoculated with common root rot. However, the effect of inoculation on fresh weight only differed significantly (P < 0.02) among the genotypes. [source]


Measurement of Outcome: A Proposed Scheme

THE MILBANK QUARTERLY, Issue 4 2005
BARBARA STARFIELD
The need to demonstrate that health care has an influence on health status is increasingly pressing. Such demonstrations require tools of measurement which are unfortunately not available. Development of instruments has been hampered by a lack of consensus on appropriate frames of reference, and there appears to be little agreement on what should be measured and what relative importance should be ascribed to different dimensions of health status. An approach that does not require the assignment of numerical values or weights to various aspects of health status and is applicable to all age groups within the population and to the whole spectrum of health problems rather than to specific medical diagnoses would seem desirable. A scheme that is based upon the development of a "profile" rather than a single "index" for describing health status is proposed in this paper. The model is a conceptual framework whose usefulness will depend upon efforts of a large number of researchers from many disciplines to develop instruments which can be incorporated in it. Although the problems in development of the scheme are complex, I hope that it will focus attention on the relevant dimensions and facilitate improved coordination of efforts to produce ways to demonstrate what health care contributes to health. [source]


On the optimization of bond-valence parameters: artifacts conceal chemical effects

ACTA CRYSTALLOGRAPHICA SECTION B, Issue 1 2009
Xiqu Wang
We recently proposed that calculated bond-valence sums, BVS, represent a non-integer structural valence, 'structV', rather than the integer-value stoichiometric valence, 'stoichV'. Therefore, the usual attempts to 'optimize' bond-valence parameters r0 and b by adjusting them to stoichV are based on the false assumption that the numerical values of structV and stoichV are always equal. Bond-valence calculations for several compounds with stereoactive cations SnII, SbIII and BiIII reveal the balanced distribution of the bonding power structV between atoms of each structure. [source]
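For reference, the bond-valence sum itself is the standard exponential form s = exp((r0 − r)/b) summed over a coordination sphere; the sketch below computes it, with an illustrative r0 (not a tabulated value for any particular ion pair) and the customary b = 0.37 Å.

```python
import numpy as np

def bond_valence_sum(bond_lengths, r0, b=0.37):
    """Bond-valence sum: BVS = sum_i exp((r0 - r_i)/b) over the bonds of
    a coordination sphere (r in angstroms; b = 0.37 A is the customary
    universal constant). The abstract's point is that this 'structural
    valence' need not equal the integer stoichiometric valence."""
    r = np.asarray(bond_lengths, dtype=float)
    return np.exp((r0 - r) / b).sum()

# Illustrative octahedral site; the r0 value here is an assumption,
# not a tabulated parameter for any particular cation-anion pair.
print(bond_valence_sum([2.05, 2.05, 2.08, 2.08, 2.12, 2.30], r0=1.95))
```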


Usefulness of Nonlinear Analysis of ECG Signals for Prediction of Inducibility of Sustained Ventricular Tachycardia by Programmed Ventricular Stimulation in Patients with Complex Spontaneous Ventricular Arrhythmias

ANNALS OF NONINVASIVE ELECTROCARDIOLOGY, Issue 3 2008
Ornella Durin M.D.
Introduction: The aim of our study was to assess the effectiveness of nonlinear analysis (NLA) of the ECG in predicting the results of invasive electrophysiologic study (EPS) in patients with ventricular arrhythmias. Methods: We evaluated 25 patients with a history of cardiac arrest, syncope, or sustained or nonsustained ventricular tachycardia (VT). All patients underwent electrophysiologic study (EPS) and nonlinear analysis (NLA) of the ECG. The study group was compared with a control group of 25 healthy subjects in order to define the normal range of NLA. The ECG was processed to obtain numerical values, which were analyzed by nonlinear mathematical functions. Patients were classified through the application of a clustering procedure to the whole set of functions, and the correlation between the results of the nonlinear analysis of the ECG and EPS was tested. Results: NLA assigned all patients with negative EPS to the same class as the healthy subjects, whereas the patients in whom VT was inducible were correctly and clearly isolated into a separate cluster. In our study, the result of NLA with application of the clustering technique was significantly correlated with that of EPS (P < 0.001), and was able to predict the result of EPS with a negative predictive value of 100% and a positive predictive value of 100%. Conclusions: NLA can predict the results of EPS with good negative and positive predictive value. However, further studies are needed in order to verify the usefulness of this noninvasive tool for sudden death risk stratification in patients with ventricular arrhythmias. [source]


Communicating breast cancer treatment complication risks: When words are likely to fail

ASIA-PACIFIC JOURNAL OF CLINICAL ONCOLOGY, Issue 3 2009
Peter H GRAHAM
Abstract Aim: The aim of the present study was to describe women's preferences for the quantification of the risk of a serious complication after regional nodal radiotherapy for breast cancer, and women's interpretation of a range of descriptive terms. Methods: A cross-sectional survey was conducted to elicit risk expression preferences and interpretation of words commonly used to describe the risk or frequency of a complication. Two hundred and sixty-two women who had received breast-only radiotherapy for early breast cancer at a Sydney teaching hospital were recruited for the survey. Results: The most preferred single method of expressing a risk is descriptive words, for example "uncommon" (52%), followed by percentages (27%) and numbers, for example 1 in 100 (21%). Lower education levels, more advanced cancer stage and older age increase the preference for descriptive words. When considering a serious complication of treatment, such as loss of the function of an arm, the modal interpretation of "sometimes" was 1/100 (36% of women), of "uncommon" was 1/1000 (35%), of "very uncommon" was 1/10 000 (40%), of "rare" was 1/10 000 (58%) and of "very rare" was 1/10 000 (51%). However, the range of interpretations, and the consistent assignment of extremely low frequencies of risk, generally render descriptive words without numerical quantification inadequate for informed consent. Conclusion: Although risks of side-effects are often described in words such as common, uncommon and rare, such qualitative descriptions should be accompanied by numerical values to ensure better understanding of risk. [source]