Selected Abstracts

Resolving Deadlock: Why International Organisations Introduce Soft Law
EUROPEAN LAW JOURNAL, Issue 2 2006
Armin Schäfer
Instead the EU relies on soft law that does not legally bind governments in the same way as the Community Method used to. The literature assumes that soft law is chosen to achieve common objectives given considerable diversity among the Member States. In contrast, this paper suggests that non-binding coordination is first and foremost a means to foster compromises in the absence of substantial agreements. Three case studies demonstrate that international organisations have repeatedly relied on soft law to overcome disagreements among their members. The IMF, the OECD, and the EU introduced soft coordination at times of institutional crisis to prevent a breakdown of negotiations. [source]

Cannabis Use and Sexual Health
THE JOURNAL OF SEXUAL MEDICINE, Issue 2pt1 2010
Anthony M.A. Smith PhD
ABSTRACT Introduction. Cannabis is the most commonly used illicit substance worldwide. Despite this, its impact on sexual health is largely unknown. Aim. The aim of this article is to examine the association between cannabis use and a range of sexual health outcomes. Main Outcome Measures. The main outcome measures include the number of sexual partners in the past year, condom use at most recent vaginal or anal intercourse, diagnosis with a sexually transmissible infection in the previous year, and the occurrence of sexual problems. Methods. The method used was a computer-assisted telephone survey of 8,656 Australians aged 16–64 years resident in Australian households with a fixed telephone line. Results. Of the 8,650 who answered the questions about cannabis use, 754 (8.7%) reported cannabis use in the previous year, with 126 (1.5%) reporting daily use, 126 (1.5%) reporting weekly use, and 502 (5.8%) reporting use less often than weekly. After adjusting for demographic factors, daily cannabis use compared with no use was associated with an increased likelihood of reporting two or more sexual partners in the previous year in both men (adjusted odds ratio 2.08, 95% confidence interval 1.11–3.89; P = 0.02) and women (2.58, 1.08–6.18; P = 0.03). Daily cannabis use was associated with reporting a diagnosis of a sexually transmissible infection in women but not men (7.19, 1.28–40.31; P = 0.02 and 1.45, 0.17–12.42; P = 0.74, respectively). Frequency of cannabis use was unrelated to sexual problems in women, but daily use vs. no use was associated with increased reporting among men of an inability to reach orgasm (3.94, 1.71–9.07; P < 0.01), reaching orgasm too quickly (2.68, 1.41–5.08; P < 0.01), and too slowly (2.05, 1.02–4.12; P = 0.04). Conclusions. Frequent cannabis use is associated with higher numbers of sexual partners for both men and women, and difficulties in men's ability to orgasm as desired. Smith AMA, Ferris JA, Simpson JM, Shelley J, Pitts M, and Richters J. Cannabis use and sexual health. J Sex Med 2010;7:787–793. [source]

Introductory quantum physics courses using a LabVIEW multimedia module
COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 2 2007
Ismael Orquín
Abstract We present the development of a LabVIEW multimedia module for introductory Quantum Physics courses and our experience in the use of this application as an educational tool in learning methodologies. The program solves the time-dependent Schrödinger equation (TDSE) for arbitrary potentials. We describe the numerical method used for solving this equation, as well as some mathematical tools employed to reduce the calculation time and to obtain more accurate results. As an illustration, we present the evolution of a wave packet for three different potentials: the repulsive barrier potential, the repulsive step potential, and the harmonic oscillator. This application has been successfully integrated in the learning strategies of the course Quantum Physics for Engineering at the Polytechnic University of Valencia, Spain. © 2007 Wiley Periodicals, Inc. Comput Appl Eng Educ 15: 124–133, 2007; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20100 [source]
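The abstract above does not say which integration scheme the LabVIEW module implements. As general background only, the sketch below propagates a wave packet through a repulsive step potential using the Crank–Nicolson scheme, one standard finite-difference way of solving the one-dimensional TDSE; the grid, time step, potential height and wave-packet parameters are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Minimal Crank-Nicolson integrator for the 1-D time-dependent Schrodinger
# equation (hbar = m = 1).  Illustrative sketch only; in practice a banded
# or sparse solver would be used instead of a dense LU factorisation.

def crank_nicolson_tdse(psi0, V, dx, dt, steps):
    """Propagate psi0 under the potential V (both arrays on the same grid)."""
    n = psi0.size
    # Second-derivative operator with a simple 3-point stencil
    lap = (np.diag(np.full(n - 1, 1.0), -1)
           - 2.0 * np.eye(n)
           + np.diag(np.full(n - 1, 1.0), 1)) / dx**2
    H = -0.5 * lap + np.diag(V)              # Hamiltonian matrix
    A = np.eye(n) + 0.5j * dt * H            # implicit half step
    B = np.eye(n) - 0.5j * dt * H            # explicit half step
    lu = lu_factor(A)                        # factor once, reuse every step
    psi = psi0.astype(complex)
    for _ in range(steps):
        psi = lu_solve(lu, B @ psi)          # unitary to O(dt^3) per step
    return psi

# Example: Gaussian wave packet hitting a repulsive step potential
x = np.linspace(-40.0, 40.0, 400)
dx = x[1] - x[0]
psi0 = np.exp(-(x + 15.0)**2 / 4.0) * np.exp(1j * 2.0 * x)   # moving packet
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)                # normalise
V = np.where(x > 0.0, 1.5, 0.0)                              # step of height 1.5
psi_T = crank_nicolson_tdse(psi0, V, dx, dt=0.05, steps=250)
print("norm after propagation:", np.sum(np.abs(psi_T)**2) * dx)
```

The norm printed at the end should stay close to 1, which is a quick check that the implicit scheme is behaving as expected.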
WILLINGNESS-TO-PAY FOR CRIME CONTROL PROGRAMS
CRIMINOLOGY, Issue 1 2004
MARK A. COHEN
This paper reports on a new methodology to estimate the "cost of crime." It is adapted from the contingent valuation method used in the environmental economics literature and is itself used to estimate the public's willingness to pay for crime control programs. In a nationally representative sample of 1,300 U.S. residents, we found that the typical household would be willing to pay between $100 and $150 per year for programs that reduced specific crimes by 10 percent in their communities. This willingness amounts, collectively, to approximately $25,000 per burglary, $70,000 per serious assault, $232,000 per armed [source]

P-26 CONVENTIONAL V THIN LAYER TECHNIQUES: A COMPARATIVE STUDY OF BRONCHIAL SPECIMENS USING CONVENTIONAL AND TWO LBC METHODS, THINPREP AND SUREPATH
CYTOPATHOLOGY, Issue 2006
J. L. Conachan
The current, conventional method used is quick, easy and reasonably cheap, but the nature of bronchial specimens themselves creates the need for a better preparation technique. Bronchial specimens often present with many obscuring features, such as blood and mucus, which can affect definitive diagnosis. In the study, the bronchial specimens underwent routine conventional preparation and the remaining material was used to prepare an LBC slide. Both LBC methods were separately evaluated alongside the conventional method where, of the 44 specimens used, half were prepared using the conventional method and ThinPrep and half with the conventional method and SurePath. Evaluation forms were completed by pathologists who assessed all preparations. The results showed both LBC methods produced superior preparations that were better fixed, more cellular and had improved nuclear detail. They also removed a high percentage of background debris, were more diagnostically accurate and reduced the inadequate rate by a third. The conventional slides prepared from the same specimen as the SurePath had a lower average than those prepared with the ThinPrep. This indicated that the specimens used to evaluate the SurePath method were in fact inferior to those used for ThinPrep, with the SurePath slides showing only a slight improvement in overall quality. Despite LBC preparations reducing pathologist screening and reporting time, both methods are more labour intensive and less cost effective. The majority of laboratories are not in the financial situation to trial new methods that require extra training and more staff hours, and as such this study has highlighted an important question: 'Do the benefits of better quality preparation and diagnostic accuracy offset an increase in time and cost?' [source]

Reliability of Computerized Emergency Triage
ACADEMIC EMERGENCY MEDICINE, Issue 3 2006
Sandy L. Dong MD
Objectives: Emergency department (ED) triage prioritizes patients based on urgency of care. This study compared agreement between two blinded, independent users of a Web-based triage tool (eTRIAGE) and examined the effects of ED crowding on triage reliability. Methods: Consecutive patients presenting to a large, urban, tertiary care ED were assessed by the duty triage nurse and an independent study nurse, both using eTRIAGE. Triage score distribution and agreement are reported. The study nurse collected data on ED activity, and agreement during different levels of ED crowding is reported. Two methods of interrater agreement were used: the linear-weighted κ and the quadratic-weighted κ. Results: A total of 575 patients were assessed over nine weeks, and complete data were available for 569 patients (99.0%). Agreement between the two nurses was moderate if using the linear κ (weighted κ = 0.52; 95% confidence interval = 0.46 to 0.57) and good if using the quadratic κ (weighted κ = 0.66; 95% confidence interval = 0.60 to 0.71). ED overcrowding data were available for 353 patients (62.0%). Agreement did not significantly differ with respect to periods of ambulance diversion, number of admitted inpatients occupying stretchers, number of patients in the waiting room, number of patients registered in two hours, or nurse perception of busyness. Conclusions: This study demonstrated different agreement depending on the method used to calculate interrater reliability. Using the standard methods, it found good agreement between two independent users of a computerized triage tool. The level of agreement was not affected by various measures of ED crowding. [source]
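The two agreement statistics quoted above differ only in how disagreements between raters are weighted. As a minimal illustration, the sketch below computes linearly and quadratically weighted Cohen's kappa from a cross-tabulation of paired triage scores; the 5 x 5 table is invented for the example and is not the study's data.

```python
import numpy as np

# Weighted Cohen's kappa from a k x k cross-tabulation of two raters' scores.
# Linear weights penalise disagreement in proportion to the score difference;
# quadratic weights penalise large disagreements more heavily, which is why
# the two statistics can differ (0.52 vs 0.66 in the abstract above).

def weighted_kappa(table, weighting="linear"):
    table = np.asarray(table, dtype=float)
    k = table.shape[0]
    obs = table / table.sum()                            # observed proportions
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))     # chance expectation
    i, j = np.indices((k, k))
    if weighting == "linear":
        w = np.abs(i - j) / (k - 1)
    else:                                                # "quadratic"
        w = ((i - j) / (k - 1)) ** 2
    return 1.0 - (w * obs).sum() / (w * exp).sum()

ratings = [[30,  8,  1,  0,  0],                         # hypothetical 5-level
           [10, 60, 15,  2,  0],                         # triage scores for
           [ 1, 20, 90, 12,  1],                         # two raters
           [ 0,  3, 18, 55,  6],
           [ 0,  0,  2,  9, 25]]
print("linear-weighted kappa:    %.3f" % weighted_kappa(ratings, "linear"))
print("quadratic-weighted kappa: %.3f" % weighted_kappa(ratings, "quadratic"))
```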
Urine cytology in renal glomerular disease and value of G1 cell in the diagnosis of glomerular bleeding
DIAGNOSTIC CYTOPATHOLOGY, Issue 2 2003
Gia-Khanh Nguyen M.D.
Abstract The objectives of the present study were to evaluate the cytology of urine sediments in patients with glomerular diseases, as well as the value of G1 dysmorphic erythrocytes (G1DE) or G1 cells in the detection of renal glomerular hematuria. Freshly voided urine samples from 174 patients with glomerular diseases were processed according to the method used for semiquantitative cytologic urinalysis. G1DEs (distorted erythrocytes with doughnut-like shape, target configuration with or without membranous protrusions or blebs), non-G1DEs (distorted erythrocytes without the above-mentioned morphologic changes), normal erythrocytes (NEs), and renal tubular cells (RTCs) were evaluated. Erythrocytic casts (ECs) were counted and graded as abundant (>1 per high-power field) or rare (1 per 5 high-power fields). G1DE/total erythrocyte ratios were calculated by counting 200 erythrocytes including G1DEs, non-G1DEs, and NEs. Only abundant NEs were found in 13 cases; abundant G1DEs, non-G1DEs, NEs, and no ECs in 95 cases; abundant NEs, non-G1DEs, and ECs and no G1DEs in 31 cases; and abundant NEs, G1DEs and non-G1DEs, and rare ECs in 35 cases. In 130 cases in which G1DEs were present, the G1DE/total erythrocyte ratios varied from 10% to 100%. This parameter was greater or equal to 80%, 50%, 20%, and 10% in 58 (44.6%), 29 (22.3%), 28 (21.5%), and 15 (11.5%) patients, respectively. In all cases, the number of RTCs was within normal limits or slightly increased, and a variable number of non-G1DEs were present in 161 cases. Thus, abundant ECs and/or G1DEs with a G1DE/total erythrocyte ratio of 10–100% proved to be specific urinary markers for renal glomerular diseases. Diagn. Cytopathol. 2003;29:67–73. © 2003 Wiley-Liss, Inc. [source]

Distorted Froude-scaled flume analysis of large woody debris
EARTH SURFACE PROCESSES AND LANDFORMS, Issue 12 2001
Nicholas P. Wallerstein
Abstract This paper presents the results of a movable-boundary, distorted, Froude-scaled hydraulic model based on Abiaca Creek, a sand-bedded channel in northern Mississippi. The model was used to examine the geomorphic and hydraulic impact of simplified large woody debris (LWD) elements. The theory of physical scale models is discussed and the method used to construct the LWD test channel is developed. The channel model had bed and banks moulded from 0.8 mm sand, and flow conditions were just below the threshold of motion so that any sediment transport and channel adjustment were the result of the debris element. Dimensions and positions of LWD elements were determined using a debris jam classification model. Elements were attached to a dynamometer to measure element drag forces, and channel adjustment was determined through detailed topographic surveys. The fluid drag force on the elements decreased asymptotically over time as the channel boundary eroded around the elements due to locally increased boundary shear stress. Total time for geomorphic adjustment computed for the prototype channel at the Q2 discharge (discharge occurring once every two years on average) was as short as 45 hours. The size, depth and position of scour holes, bank erosion and bars created by flow acceleration past the elements were found to be related to element length and position within the channel cross-section. Morphologies created by each debris element in the model channel were comparable with similar jams observed in the prototype channel. Published in 2001 John Wiley & Sons, Ltd. [source]
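For readers unfamiliar with Froude-scaled models such as the one described above, the general similarity relations are summarised below. This is standard background with generic symbols (model-to-prototype ratios), not the specific scaling adopted in the paper.

```latex
% Generic Froude-similarity relations for a distorted scale model: X_r and
% Z_r are the horizontal and vertical model-to-prototype length ratios
% (assumed notation, not the paper's).
\[
  Fr = \frac{V}{\sqrt{g\,h}}, \qquad
  Fr_{\text{model}} = Fr_{\text{prototype}}
  \;\Rightarrow\; V_r = \sqrt{Z_r}, \quad
  Q_r = X_r\,Z_r^{3/2}, \quad
  t_r = \frac{X_r}{\sqrt{Z_r}} .
\]
```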
Comparison of displacement coefficient method and capacity spectrum method with experimental results of RC columns
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 1 2004
Yu-Yuan Lin
Abstract For the performance-based seismic design of buildings, both the displacement coefficient method used by FEMA-273 and the capacity spectrum method adopted by ATC-40 are non-linear static procedures. The pushover curves of structures need to be established during processing of these two methods. They are applied to evaluation and rehabilitation of existing structures. This paper is concerned with experimental studies on the accuracy of both methods. Through carrying out the pseudo-dynamic tests, cyclic loading tests and pushover tests on three reinforced concrete (RC) columns, the maximum inelastic deformation demands (target displacements) determined by the coefficient method of FEMA-273 and the capacity spectrum method of ATC-40 are compared. In addition, a modified capacity spectrum method which is based on the use of inelastic design response spectra is also included in this study. It is shown from the test specimens that the coefficient method overestimates the peak test displacements with an average error of +28%, while the capacity spectrum method underestimates them with an average error of -20%. If the Kowalsky hysteretic damping model is used in the capacity spectrum method instead of the original damping model, the average errors become -11% by ignoring the effect of stiffness degrading and -1.2% by slightly including the effect of stiffness degrading. Furthermore, if the Newmark–Hall inelastic design spectrum is implemented in the capacity spectrum method instead of the elastic design spectrum, the average error decreases to -6.6%, which undervalues, but is close to, the experimental results. Copyright © 2003 John Wiley & Sons, Ltd. [source]
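For orientation, the displacement coefficient method referred to above estimates the target displacement from the elastic spectral acceleration corrected by empirical coefficients. The standard form is shown below; the coefficient values and application rules are given in FEMA-273 itself and are not reproduced here.

```latex
% FEMA-273 displacement coefficient method (standard form, for orientation):
% target displacement from the spectral acceleration S_a at the effective
% fundamental period T_e,
\[
  \delta_t \;=\; C_0\,C_1\,C_2\,C_3\;S_a\,\frac{T_e^{2}}{4\pi^{2}}\,g ,
\]
% where C_0 converts spectral (SDOF) displacement to roof displacement,
% C_1 relates expected inelastic to elastic displacement, C_2 accounts for
% hysteretic shape and stiffness degradation, and C_3 accounts for dynamic
% P-Delta effects.
```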
Coefficient shifts in geographical ecology: an empirical evaluation of spatial and non-spatial regression
ECOGRAPHY, Issue 2 2009
L. Mauricio Bini
A major focus of geographical ecology and macroecology is to understand the causes of spatially structured ecological patterns. However, achieving this understanding can be complicated when using multiple regression, because the relative importance of explanatory variables, as measured by regression coefficients, can shift depending on whether spatially explicit or non-spatial modeling is used. However, the extent to which coefficients may shift and why shifts occur are unclear. Here, we analyze the relationship between environmental predictors and the geographical distribution of species richness, body size, range size and abundance in 97 multi-factorial data sets. Our goal was to compare standardized partial regression coefficients of non-spatial ordinary least squares regressions (i.e. models fitted using ordinary least squares without taking autocorrelation into account; "OLS models" hereafter) and eight spatial methods to evaluate the frequency of coefficient shifts and identify characteristics of data that might predict when shifts are likely. We generated three metrics of coefficient shifts and eight characteristics of the data sets as predictors of shifts. Typical of ecological data, spatial autocorrelation in the residuals of OLS models was found in most data sets. The spatial models varied in the extent to which they minimized residual spatial autocorrelation. Patterns of coefficient shifts also varied among methods and datasets, although the magnitudes of shifts tended to be small in all cases. We were unable to identify strong predictors of shifts, including the levels of autocorrelation in either explanatory variables or model residuals. Thus, changes in coefficients between spatial and non-spatial methods depend on the method used and are largely idiosyncratic, making it difficult to predict when or why shifts occur. We conclude that the ecological importance of regression coefficients cannot be evaluated with confidence irrespective of whether spatially explicit modelling is used or not. Researchers may have little choice but to be more explicit about the uncertainty of models and more cautious in their interpretation. [source]

Sample complexity reduction for two-dimensional electrophoresis using solution isoelectric focusing prefractionation
ELECTROPHORESIS, Issue 12 2008
Matthew R. Richardson
Abstract Despite its excellent resolving power, 2-DE is of limited use when analyzing cellular proteomes, especially in differential expression studies. Frequently, fewer than 2000 protein spots are detected on a single 2-D gel (a fraction of the total proteome) regardless of the gel platform, sample, or detection method used. This is due to the vast number of proteins expressed and their equally vast dynamic range. To exploit 2-DE's unique ability as both an analytical and a preparative tool, significant sample prefractionation is necessary. We have used solution isoelectric focusing (sIEF) via the ZOOM® IEF Fractionator (Invitrogen) to generate sample fractions from complex bacterial lysates, followed by parallel 2-DE, using narrow-range IPG strips that bracket the sIEF fractions. The net result of this process is a significant enrichment of the bacterial proteome resolved on multiple 2-D gels. After prefractionation, we detected 5525 spots, an approximate 3.5-fold increase over the 1577 spots detected in an unfractionated gel. We concluded that sIEF is an effective means of prefractionation to increase depth of field and improve the analysis of low-abundance proteins. [source]

Maximum growth rates and possible life strategies of different bacterioplankton groups in relation to phosphorus availability in a freshwater reservoir
ENVIRONMENTAL MICROBIOLOGY, Issue 9 2006
Karel Šimek
Summary We investigated net growth rates of distinct bacterioplankton groups and heterotrophic nanoflagellate (HNF) communities in relation to phosphorus availability by analysing eight in situ manipulation experiments, conducted between 1997 and 2003, in the canyon-shaped Římov reservoir (Czech Republic). Water samples were size-fractionated and incubated in dialysis bags at the sampling site or transplanted into an area of the reservoir, which differed in phosphorus limitation (range of soluble reactive phosphorus concentrations, SRP, 0.7–96 µg l⁻¹). Using five different rRNA-targeted oligonucleotide probes, net growth rates of the probe-defined bacterial groups and HNF assemblages were estimated and related to SRP using Monod kinetics, yielding growth rate constants specific for each bacterial group. We found highly significant differences among their maximum growth rates while insignificant differences were detected in the saturation constants. However, the latter constants represent only tentative estimates, mainly due to insufficient sensitivity of the method used at low in situ SRP concentrations. Interestingly, in these same experiments HNF assemblages grew significantly faster than any bacterial group studied except for a small, but abundant cluster of Betaproteobacteria (targeted by the R-BT065 probe). Potential ecological implications of different growth capabilities for possible life strategies of different bacterial phylogenetic lineages are discussed. [source]
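The Monod model used above relates net growth rate to substrate concentration as mu(S) = mu_max * S / (Ks + S). The sketch below fits that curve to made-up SRP and growth-rate pairs; the numbers are invented for illustration and are not the study's measurements or fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Monod growth kinetics fitted to net growth rates measured at different
# soluble reactive phosphorus (SRP) concentrations.  The data points below
# are invented; the study fitted one such curve per probe-defined bacterial
# group and compared the resulting mu_max and Ks values.

def monod(S, mu_max, Ks):
    return mu_max * S / (Ks + S)

srp = np.array([0.7, 2.0, 5.0, 10.0, 25.0, 50.0, 96.0])       # ug P per litre
growth = np.array([0.21, 0.55, 0.92, 1.30, 1.62, 1.75, 1.82])  # per day

params, cov = curve_fit(monod, srp, growth, p0=[2.0, 5.0])
mu_max, Ks = params
print(f"mu_max = {mu_max:.2f} per day,  Ks = {Ks:.1f} ug per litre")
```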
A cautionary note on the use of species presence and absence data in deriving sediment criteria
ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 2 2002
Katherine Von Stackelberg
Abstract In recent years, a variety of approaches to deriving sediment quality guidelines have been developed. One approach relies on establishing an empirical relationship between the concentration of a contaminant in sediment and the condition of some biological indicator, for example, combining measured sediment concentrations of contaminants with data on colocated benthic species to measure in situ community effects of contamination. Biological threshold concentrations derived in this manner are being considered or have already been adopted by some regulatory agencies as a means for deriving sediment guidelines (e.g., Canada's Provincial Sediment Quality Guidelines). In order to test the validity of this method, we constructed several Monte Carlo simulations to illustrate that the methodology used to develop these guidelines is flawed by the effects of sampling and statistical artifacts that emerge from undersampling a lognormal density function. As a case study, this paper will present the screening level concentration method used by the Ontario Ministry of the Environment (Toronto, ON, Canada) and provide the results of several probabilistic exercises highlighting these issues. We present a word of caution on the applicability of methods that rely exclusively on statistical and mathematical relationships between invertebrate data and sediment concentrations to derive sediment quality guidelines. [source]
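The sampling artifact described above can be illustrated in a few lines. The sketch below repeatedly draws small samples from a lognormal distribution of effect concentrations and estimates a low percentile from each, showing how small samples give biased and highly variable threshold estimates. The distribution parameters and the 5th-percentile rule are assumptions for the demonstration, not the Ontario screening level concentration procedure itself.

```python
import numpy as np

# Monte Carlo illustration: a low percentile of a lognormal distribution,
# estimated from a SMALL sample, is a biased and highly variable estimate of
# the true population percentile.  All parameter values are illustrative.

rng = np.random.default_rng(42)
mu, sigma = 1.0, 1.2                      # lognormal parameters (natural log)
true_p5 = np.exp(mu + sigma * -1.6449)    # true 5th percentile

for n in (5, 10, 50, 500):                # number of co-occurrence samples
    est = np.empty(2000)
    for k in range(2000):                 # Monte Carlo replicates
        sample = rng.lognormal(mu, sigma, n)
        est[k] = np.percentile(sample, 5)
    bias = est.mean() - true_p5
    print(f"n={n:4d}  mean estimate={est.mean():6.3f}  "
          f"bias={bias:+.3f}  sd={est.std():.3f}  (true={true_p5:.3f})")
```

Running the loop shows the bias and scatter shrinking as the sample size grows, which is the point the abstract makes about undersampling.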
Dipole Tracing Examination for the Electric Source of Photoparoxysmal Response Provoked by Half-Field Stimulation
EPILEPSIA, Issue 2000
Kazuhiko Kobayashi
Purpose: Dipole tracing (DT) is a computer-aided noninvasive method used to estimate the location of epileptic discharges from the scalp EEG. In DT, equivalent current dipoles (ECDs), which reflect the electric source in the brain, are responsible for the potential distribution on the scalp EEG. Therefore, the DT method is useful to estimate focal paroxysmal discharges. In this study we examined the location of the electric source of photoparoxysmal response (PPR) using scalp-skull-brain dipole tracing (SSB-DT) after half-field stimulation, which produced focal PPR on the scalp EEG. Methods: We studied 4 cases of photosensitive epilepsy. We performed 20 Hz red flicker and flickering dot pattern half-field stimulation to provoke PPR. In this method, the loci of generators corresponding to the paroxysmal discharges were estimated as ECDs by 1- and 2-dipole analyses. Each location of the ECDs was estimated by iterative calculation. Algorithms minimizing the squared difference between the electrical potentials recorded from the scalp EEG and those calculated theoretically from the voluntary dipoles were used. In the SSB model, the scalp shell was reconstructed from the helmet measurements, and the shape of the skull and brain was 3-dimensionally reconstructed from CT images. A dipolarity larger than 98% was taken to indicate the accuracy of the estimation. We recorded their 21-channel monopolar scalp EEG. Each spike was sampled and analyzed at 10 points around the peaks of at least 10 spikes in each patient using the SSB-DT method. The ECDs were then superimposed on the MRI of each patient to identify the more exact anatomical region. Results: This study showed the location of each focus and a dipolarity of greater than 98% in all cases, although the results from the 2-dipole method showed scattered locations. We considered that the analyzed signals were generated from a single source. PPR was elicited contralateral to the field stimulated. By red flicker half-field stimulation, EEG revealed either focal spikes and waves in the contralateral occipital, temporo-occipital region, or diffuse spike and wave complex bursts, seen dominantly at the contralateral hemisphere. The superimposed ECDs on MRI were located at the occipital or inferior temporal lobe. PPRs provoked by flickering dot pattern half-field stimulation were focal spikes and waves, mainly in the occipital, parieto-occipital region, or diffuse spike and wave complex bursts, seen dominantly at the contralateral hemisphere. The ECDs of their PPRs were located in the occipital, inferior temporal, or inferior parietal lobules on MRI. Conclusion: Our findings suggest that the inferior temporal and inferior parietal lobules, which are important for the processing sequence of the visual system, in addition to the occipital lobe, might be responsible for the mechanism of PPR by half-field stimulation, especially for electric source expansion. [source]

What changes in health-related quality of life matter to multiple myeloma patients? A prospective study
EUROPEAN JOURNAL OF HAEMATOLOGY, Issue 4 2010
Abstract Objective: To determine the clinical significance of changes in quality-of-life scores in patients with multiple myeloma (MM), we have estimated the minimal important difference (MID) for the health-related quality-of-life instrument, the European Organization for Research and Treatment of Cancer (EORTC) QLQ-C30. The MID is the smallest change in a quality-of-life score considered important to patients. Methods: Between 2006 and 2008, 239 patients with MM completed the EORTC QLQ-C30 at inclusion (T1) and after 3 months (T2). At T2, a structured quality-of-life interview was also performed. MIDs were calculated by using mean score changes (T2 - T1) for patients who in the interview stated they had improved, deteriorated or were unchanged. MIDs were also estimated by the receiver-operating characteristic (ROC) curve method as well as by calculating effect sizes using standard deviations of baseline scores. Results: MIDs varied slightly depending on the method used. Patients stating in the interview that they had 'improved' or 'deteriorated' had a corresponding change in EORTC QLQ-C30 score ranging from 6 to 15 (improvement) and from 9 to 17 (deterioration) (scale range 0–100). The ROC analysis indicated that changes in score from 7 to 17 represent clinically important changes to patients. The effect size method suggested 5–6 to be a small and 11–15 to be a medium change. Conclusion: Calculation of MIDs as mean score changes or by ROC analysis suggested that a change in the EORTC QLQ-C30 score in the range of approximately 6–17 is considered important by patients with MM. These MIDs are closer to a medium effect size than to a small effect size. Our findings imply that mean score changes smaller than 6 are unlikely to be important to the patients, even if these changes are statistically significant. [source]

A comparison of different models of stroke on behaviour and brain morphology
EUROPEAN JOURNAL OF NEUROSCIENCE, Issue 7 2003
C.L.R. Gonzalez
Abstract We compared the effects of three models of permanent ischemia, as well as cortical aspiration, on behaviour and brain morphology. Rats received a stroke either by devascularization or by two different procedures of middle cerebral artery occlusion (MCAO; small vs. large). Animals were trained in a reaching task, forepaw asymmetry, forepaw inhibition, sunflower seed task and tongue extension. Behaviour was assessed 1 week after the lesion and at 2-week intervals for a total of 9 weeks. One week after the surgery all animals were severely impaired on all tasks and although they improved over time they only reached preoperative baselines on tongue extension. Animals with small MCAOs performed better in reaching and sunflower tasks; no other behavioural differences were detected among the groups. Pyramidal cells in forelimb and cingulate areas as well as spiny neurons of the striatum were examined for dendritic branching and spine density using a Golgi–Cox procedure. Each lesion type had a different impact on cell morphology. Overall, different changes (atrophy or hypertrophy) were observed with each kind of lesion and these changes were specific for the region (forelimb, cingulate, striatum) and the condition (intact vs. damaged hemisphere). These results suggest that: (i) different lesions to the motor cortex produce subtle differences in behaviour, and (ii) the method used to induce the lesion produces striking differences in cortical and subcortical plasticity. [source]

Are UV-induced nonculturable Escherichia coli K-12 cells alive or dead?
FEBS JOURNAL, Issue 12 2003
Andrea Villarino
Cells that have lost the ability to grow in culture could be defined operationally as either alive or dead depending on the method used to determine cell viability. As a consequence, the interpretation of the state of 'nonculturable' cells is often ambiguous. Escherichia coli K12 cells inactivated by UV-irradiation with a low (UV1) and a high (UV2) dose were used as a model of nonculturable cells. Cells inactivated by the UV1 dose lost 'culturability' but they were not lysed and maintained the capacity to respond to nutrient addition by protein synthesis and cell wall synthesis. The cells also retained both a high level of glucose transport and the capacity for metabolizing glucose. Moreover, during glucose incorporation, UV1-treated cells showed the capacity to respond to aeration conditions by modifying their metabolic flux through the Embden–Meyerhof and pentose-phosphate pathways. However, nonculturable cells obtained by irradiation with the high UV2 dose showed several levels of metabolic imbalance and retained only residual metabolic activities. Nonculturable cells obtained by irradiation with UV1 and UV2 doses were diagnosed as active and inactive (dying) cells, respectively. [source]

Application of Electrochemical Impedance Spectroscopy for Fuel Cell Characterization: PEFC and Oxygen Reduction Reaction in Alkaline Solution
FUEL CELLS, Issue 3 2009
N. Wagner
Abstract The most common method used to characterise the electrochemical performance of fuel cells is the recording of current/voltage U(i) curves. Separation of electrochemical and ohmic contributions to the U(i) characteristics requires additional experimental techniques like electrochemical impedance spectroscopy (EIS). The application of EIS is an approach to determine parameters which have proved to be indispensable for the characterisation and development of all types of fuel cell electrodes and electrolyte electrode assemblies [1]. In addition to EIS, semi-empirical approaches based on simplified mathematical models can be used to fit experimental U(i) curves [2]. By varying the operating conditions of the fuel cell and by the simulation of the measured EIS with an appropriate equivalent circuit, it is possible to split the cell impedance into electrode impedances and electrolyte resistance. Integration in the current density domain of the individual impedance elements enables the calculation of the individual overpotentials in the fuel cell (PEFC) and the assignment of voltage loss to the different processes. In the case of using a three-electrode cell configuration with a reference electrode, one can directly determine the corresponding overvoltage. For the evaluation of the measured impedance spectra the porous electrode model of Göhr [3] was used. This porous electrode model includes different impedance contributions like the impedance of the interface porous layer/pore, interface porous layer/electrolyte, interface porous layer/bulk, impedance of the porous layer and impedance of the pores filled by electrolyte. [source]
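The "integration in the current density domain" step described in the fuel-cell abstract can be written compactly. The relation below is a sketch of what that step implies, with generic symbols; the authors' exact decomposition follows their own equivalent circuit and is not reproduced here.

```latex
% Each impedance element fitted from the equivalent circuit, R_k(i), acts as
% a differential resistance, so the overpotential assigned to process k and
% the resulting voltage balance are (assumed generic form):
\[
  \eta_k(i) = \int_0^{i} R_k(i')\,\mathrm{d}i' ,
  \qquad
  U(i) \approx U_0 - i\,R_{\mathrm{el}} - \sum_k \eta_k(i) ,
\]
% where U_0 is the open-circuit voltage and R_el the ohmic electrolyte
% resistance obtained from the impedance fit.
```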
Improvement of the comprehension of written information given to healthy volunteers in biomedical research: a single-blind randomized controlled study
FUNDAMENTAL & CLINICAL PHARMACOLOGY, Issue 2 2007
Adeline Paris
Abstract Writing an informed consent form (ICF) for biomedical research is a difficult task. We conducted a multicenter single-blind randomized controlled trial to identify whether a working group, the systematic improvement of lexico-syntactic readability, or an association of the two could increase the comprehension of the written information given to healthy volunteers enrolled in biomedical research. Participants were randomized to read one of four versions of the ICF: unchanged ICF (A), ICF with systematic lexico-syntactic readability improvement (B), ICF modified by a working group (C), and ICF modified by the working group followed by systematic lexico-syntactic improvement (D). The primary end-point was the objective comprehension score at day 0 for each study group. The scores of objective comprehension at day 0 were statistically different between the four study groups (ANOVA, P = 0.020). The pairwise analysis showed an improvement in the working group vs. the unchanged group (P = 0.003), and a tendency to improvement in the group who read the ICF modified using lexico-syntactic readability and in the group who read the ICF modified using the two methods (P = 0.020 and 0.027, respectively). We conducted a two-way ANOVA to identify some characteristics of the population which could explain this score. There was a significant interaction between the type of informed consent document (ICD) and gender. Improving the ICD in phase I biomedical research leads to better comprehension, whether the method used is systematic lexico-syntactic improvement or a review by a working group. The improvement is specifically observed in men compared with women. Conversely, while both methods diverge in their effect on lexico-syntactic readability, their association is not mandatory. We suggest that in all phase I clinical trials the ICF be improved by either method. [source]

On the Interaction of Risk and Time Preferences: An Experimental Study
GERMAN ECONOMIC REVIEW, Issue 3 2001
Vital Anderhub
Experimental studies of risk and time preference typically focus on one of the two phenomena. The goal of this paper is to investigate the (possible) correlation between subjects' attitude to risk and their time preference. For this sake we ask 61 subjects to price a simple lottery in three different scenarios. In the first, the lottery premium is paid 'now'. In the second, it is paid 'later'. In the third, it is paid 'even later'. By comparing the certainty equivalents offered by the subjects for the three lotteries, we test how time and risk preferences are interrelated. Since the time interval between 'now' and 'later' is the same as between 'later' and 'even later', we also test the hypothesis of hyperbolic discounting. The main result is a statistically significant negative correlation between subjects' degrees of risk aversion and their (implicit) discount factors. Moreover, we show that the negative correlation is independent of the method used to elicit certainty equivalents (willingness to pay versus willingness to accept). [source]
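As background to the discounting test mentioned above: with equal delays between the three payment dates, exponential discounting implies the same per-period discount factor over both intervals, while hyperbolic discounting implies a declining discount rate. The notation below is generic, not the paper's.

```latex
% Exponential vs. hyperbolic discounting over delays 0, t and 2t
% (assumed generic notation):
\[
  D_{\text{exp}}(t)=\delta^{t}
  \;\Rightarrow\; \frac{D(2t)}{D(t)}=\frac{D(t)}{D(0)}=\delta^{t};
  \qquad
  D_{\text{hyp}}(t)=\frac{1}{1+kt}
  \;\Rightarrow\; \frac{D(2t)}{D(t)}>\frac{D(t)}{D(0)} .
\]
% Comparing the drop in certainty equivalents from "now" to "later" with the
% drop from "later" to "even later" therefore discriminates between the two.
```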
Analysis of Transient Data from Infiltrometer Tests in Fine-Grained Soils
GROUND WATER, Issue 3 2000
Dominique Guyonnet
Data collected during ring infiltrometer tests are often analyzed while assuming either that the effect of gravity is negligible (early-time, transient data) or that it is dominant (late-time, steady-state data). In this paper, an equation is proposed for interpreting both early-time and late-time data measured during infiltration tests under falling head conditions. It is shown that the method used by previous authors for interpreting early-time data is a special case of the proposed equation. The equation is applied to data collected during tests performed in fine-grained soils, and results are discussed. The analysis suggests that to assume a priori values of the soil sorptive number, as indicated in the literature for various soils, may in some cases lead to severely underestimated values of the saturated hydraulic conductivity. Conversely, in low permeability soils, to assume steady-state gravity drainage may lead to order of magnitude overestimates of the saturated hydraulic conductivity. A dimensionless analysis provides characteristic times that correspond either to the duration of the log-log half slope displayed by early-time data or to the log-log unit slope characteristic of late-time data. [source]
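The early-time versus late-time dichotomy described above mirrors the classic two-term description of cumulative infiltration. The expression below is general background, not the paper's proposed falling-head equation.

```latex
% Philip-type two-term cumulative infiltration (standard background):
\[
  I(t) \;=\; S\,t^{1/2} \;+\; A\,t ,
\]
% where S is the sorptivity and A is a coefficient proportional to the
% saturated hydraulic conductivity.  Early-time analyses keep only the
% S t^{1/2} term (gravity neglected), while steady-state analyses keep only
% the A t term (gravity dominant); the log-log half slope and unit slope
% mentioned in the abstract correspond to these two limits.
```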
Measuring inequality in self-reported health: discussion of a recently suggested approach using Finnish data
HEALTH ECONOMICS, Issue 7 2004
Jorgen Lauridsen
Health surveys often include a general question on self-assessed health (SAH), usually measured on an ordinal scale with three to five response categories, from 'very poor' or 'poor' to 'very good' or 'excellent'. This paper assesses the scaling of responses on the SAH question. It compares alternative procedures designed to impose cardinality on the ordinal responses. These include OLS, ordered probit and interval regression approaches. The cardinal measures of health are used to compute and decompose concentration indices for income-related inequality in health. Results are provided using Finnish data on 15D and the SAH questions. Further evidence emerges for the internal validity of a method used in a pioneering study by van Doorslaer and Jones which was based on Canadian data on the McMaster Health Utility Index Mark III (HUI) and SAH. The study validates the conclusions drawn by van Doorslaer and Jones. It confirms that the interval regression approach is superior to OLS and ordered probit regression in assessing health inequality. However, regarding the choice of scaling instrument, it is concluded that the scaling of SAH categories and, consequently, the measured degree of inequality, are sensitive to characteristics of the chosen scaling instrument. [source]
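For readers unfamiliar with the inequality measure used above, the concentration index can be computed as twice the covariance between health and the fractional income rank, divided by mean health. The sketch below applies this "convenient covariance" formula to toy data; the numbers are invented and the health variable is assumed to be already cardinalised.

```python
import numpy as np

# Income-related concentration index, C = 2 * cov(h, r) / mean(h), where r is
# the fractional rank of individuals ordered by income.  Positive C means
# health is concentrated among the better-off.  Toy data only.

def concentration_index(health, income):
    health = np.asarray(health, dtype=float)
    order = np.argsort(income)                     # rank individuals by income
    h = health[order]
    n = h.size
    rank = (np.arange(1, n + 1) - 0.5) / n         # fractional rank in (0, 1)
    return 2.0 * np.cov(h, rank, bias=True)[0, 1] / h.mean()

income = np.array([12, 18, 25, 31, 40, 55, 70, 90, 120, 200])   # toy incomes
health = np.array([0.62, 0.70, 0.66, 0.74, 0.78, 0.80, 0.85, 0.83, 0.90, 0.92])
print("concentration index: %.3f" % concentration_index(health, income))
```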
Relative accuracy and predictive ability of direct valuation methods, price to aggregate earnings method and a hybrid approach
ACCOUNTING & FINANCE, Issue 4 2006
Lucie Courteau
Abstract In this paper, we assess the relative performance of the direct valuation method and industry multiplier models using 41,435 firm-quarter Value Line observations over an 11-year (1990–2000) period. Results from both pricing-error and return-prediction analyses indicate that direct valuation yields lower percentage pricing errors and greater return prediction ability than the forward price to aggregated forecasted earnings multiplier model. However, a simple hybrid combination of these two methods leads to more accurate intrinsic value estimates, compared to either method used in isolation. It would appear that fundamental analysis could benefit from using one approach as a check on the other. [source]

Enhanced Expression of Transcription Factor E2F in Helicobacter pylori-infected Gastric Mucosa
HELICOBACTER, Issue 3 2002
Hajime Isomoto
Abstract Objective. Helicobacter pylori is implicated in gastric carcinogenesis through increased gastric epithelial cell turnover. In fact, high proportions of proliferating and apoptotic epithelial cells are found in H. pylori-infected gastric mucosa. E2F, a transcription factor, induces coordinated transactivation of a set of genes involved in cell cycle progression. The aim of this study was to investigate the expression of E2F in H. pylori-infected gastric mucosa and examine the correlation between such expression and gastric epithelial cell proliferation and apoptosis. Methods. Twenty-five patients with H. pylori-associated gastritis (HAG) and 13 control subjects negative for H. pylori were examined. E2F expression was studied in situ by Southwestern histochemistry, a method used to localize transcription factors. Labeled double-stranded oligo-DNA with a specific consensus sequence for E2F binding sites was reacted with frozen sections from antral biopsy specimens obtained at endoscopy. Gastric epithelial cell proliferation was assessed by immunostaining of proliferating cell nuclear antigen (PCNA), while apoptosis was detected by terminal deoxynucleotidyl transferase-mediated dUTP-biotin nick end labeling (TUNEL). The percentages of epithelial cells with nuclear staining for PCNA and E2F were expressed as a positivity index (PI). The percentage of TUNEL-positive epithelial cells was defined as the apoptotic index. Results. E2F was expressed in the nuclei of gastric epithelial cells within gastric pits. The E2F PI in H. pylori-infected gastric mucosa was significantly higher than that in noninfected mucosa. Expression of E2F correlated well with PCNA-positive epithelial cells. We also demonstrated colocalization of PCNA with E2F expression in the same epithelial cells. The apoptotic index was also high in H. pylori-infected mucosa, and correlated with the E2F PI. Conclusion. Our results demonstrated a significant increase in the expression of E2F in H. pylori-infected mucosa, which correlated with both the percentages of PCNA- and TUNEL-positive cells. Our results suggest that enhanced E2F expression in gastric mucosa may be involved in H. pylori-related gastric carcinogenesis through accelerated cell turnover. [source]

Variability in the upper limit of normal for serum alanine aminotransferase levels: A statewide study
HEPATOLOGY, Issue 6 2009
Anand Dutta
We conducted a study to characterize the variability in the upper limit of normal (ULN) for alanine aminotransferase (ALT) across different laboratories (labs) in Indiana and to understand factors leading to such variability. A survey was mailed to all eligible labs (n = 108) in Indiana, and the response rate was 62%. The survey queried the ALT ULN, the type of chemical analyzer used, five College of American Pathologists (CAP) sample results, and the methods used to establish the reference interval. There was wide variability in the ALT ULN for both men and women. Eighty-five percent of labs used chemical analyzers belonging to one of four brands. For all five CAP samples, there was a statistically significant difference in ALT values measured by different analyzers (P < 0.0001), but these differences were not clinically significant. The majority of labs used the manufacturers' recommendations for establishing their ALT ULN rather than in-house healthy volunteer testing (only 17%). When healthy volunteers were tested, the process for testing was haphazard in terms of the number of individuals tested, frequency of testing, and criteria for choosing the reference population. After controlling for chemical analyzer type, there was no significant relationship between ALT ULN values and the method used for its establishment. Conclusion: Wide variability in ALT ULN across different labs is more likely due to variable reference intervals of different chemical analyzers. It may be possible to minimize variability in ALT ULN by (1) each lab solely following the manufacturers' recommendations and (2) manufacturers of different analyzers following consistent and rigorous methodology in establishing the reference range. Alternatively, studies should be undertaken to identify outcome-based reference intervals for ALT. (HEPATOLOGY 2009.) [source]

Direct and indirect methods to simulate the actual evapotranspiration of an irrigated overhead table grape vineyard under Mediterranean conditions
HYDROLOGICAL PROCESSES, Issue 2 2008
Gianfranco Rana
Abstract Two methods, indirect and direct, for simulating the actual evapotranspiration (E) were applied to an irrigated overhead table grape vineyard during summer, situated in the Mediterranean region (southern Italy), over two successive years. The first method, indirect but more practical, uses the crop coefficient (Kc) approach and requires determination of the reference evapotranspiration E0 (FAO, Food and Agriculture Organization, method). This method underestimated the daily values of the actual evapotranspiration E by 17% on average. The analysis in this paper shows that the values of Kc for the table grapes determined by the FAO method do not seem to be valid in our experimental conditions. Similar conclusions can be found in the literature for table grapes cultivated under different experimental conditions and using different training systems. The second method is a direct method for estimating the evapotranspiration. It requires development of a model for the overhead table grape vineyard E, following the Penman–Monteith one-step approach, and using standard meteorological variables as inputs for the determination of the canopy resistance. This method, which needs a particularly simple calibration, provided a better simulation of the hourly and daily evapotranspiration than the indirect method. In addition, the standard error of the daily values for the direct method (±0.41 mm) was about 50% lower than that obtained for the indirect method, also when the indirect method used a locally calibrated coefficient Kc instead of a generic Kc. The advantages and disadvantages linked to the use of each tested method, both for practical application and for theoretical issues, are discussed in detail. Copyright © 2007 John Wiley & Sons, Ltd. [source]
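For orientation, the indirect approach described in the evapotranspiration abstract estimates actual evapotranspiration as the product of a crop coefficient and a reference evapotranspiration, the latter usually computed with the FAO-56 Penman–Monteith equation. The standard form is shown below; the symbols follow FAO-56 conventions and are not taken from the paper.

```latex
% Crop-coefficient approach and FAO-56 reference evapotranspiration
% (standard background form, daily time step):
\[
  E \;=\; K_c \, E_0 , \qquad
  E_0 \;=\; \frac{0.408\,\Delta\,(R_n - G) \;+\; \gamma\,\dfrac{900}{T+273}\,u_2\,(e_s - e_a)}
                 {\Delta \;+\; \gamma\,(1 + 0.34\,u_2)} ,
\]
% where R_n is net radiation, G the soil heat flux, T the mean air
% temperature, u_2 the wind speed at 2 m height, (e_s - e_a) the vapour
% pressure deficit, Delta the slope of the saturation vapour pressure curve
% and gamma the psychrometric constant.
```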
Poor nutritional condition as a consequence of high dominance status in the Coal Tit Parus ater
IBIS, Issue 1 2004
Jacqueline M. Hay
The costs and benefits of dominance status have been investigated in the past, and it has generally been reported that subdominant birds are at a nutritional disadvantage owing to their low dominance status. The nutritional condition of birds during winter can be important in determining their likelihood of survival. This is particularly so in small passerines that are sensitive to severe weather conditions. Ptilochronology is an accurate method used to produce a long-term estimate of body condition spanning the number of days that it takes to grow a new feather. Ptilochronology was used during this study to estimate the nutritional condition of Coal Tits Parus ater during one winter and how condition was affected by dominance status. Dominant Coal Tits produced poorer quality feathers, which they grew at a slower rate, than did subdominant conspecifics. This study highlights a nutritional cost to high dominance status that could have long-term consequences because the induced tail feathers will not be replaced for at least 5 months. [source]

Autoimmune disease concomitance among inflammatory bowel disease patients in the United States, 2001–2002
INFLAMMATORY BOWEL DISEASES, Issue 6 2008
Russell Cohen MD
Abstract Background: Recent studies suggest that inflammatory bowel disease (IBD) may share an underlying pathogenesis with other autoimmune diseases. Methods: Two United States data sets with patient-level medical and drug claims were used to explore the occurrence of autoimmune diseases in patients with IBD, particularly Crohn's disease (CD) and ulcerative colitis (UC), compared with that in controls. From 2001 to 2002 IBD patients were identified using International Classification of Diseases, 9th revision, diagnosis codes in the IMS Health Integrated Administration Claims Database and the MarketScan Commercial Claims and Encounters Database. Controls were selected by matching on sex, age, Census Bureau region, and length of previous medical insurance coverage. Odds ratios (ORs) evaluated the risk relationship between IBD patients and controls within an estimated Mantel-Haenszel 95% confidence interval. Sensitivity analysis tested the case identification method used to select IBD patients. Results: The risk for ankylosing spondylitis (AS) was substantially increased across both data sets: OR (95% confidence interval [CI]) of 7.8 (5.6–10.8) in IMS Health and 5.8 (3.9–8.6) in MarketScan. The risk for rheumatoid arthritis (RA) was 2.7 (2.4–3.0) and 2.1 (1.8–2.3), respectively; for multiple sclerosis (MS), the ORs were 1.5 (1.2–1.9) and 1.6 (1.2–2.1), respectively. There was no increased risk for type 1 diabetes mellitus, and the results for psoriatic arthritis (PsA) were inconsistent. The sensitivity analysis supported these findings. Conclusions: A much higher risk for RA, AS, PsA, and MS was observed in IBD patients compared with controls. Prospective epidemiologic studies are needed to confirm these findings and explore the pathogenic mechanism of this relationship. (Inflamm Bowel Dis 2008) [source]

A variational multiscale model for the advection–diffusion–reaction equation
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 7 2009
Guillaume Houzeaux
Abstract The variational multiscale (VMS) method sets a general framework for stabilization methods. By splitting the exact solution into coarse (grid) and fine (subgrid) scales, one can obtain a system of two equations for these unknowns. The grid scale equation is solved using the Galerkin method and contains an additional term involving the subgrid scale. At this stage, several options are usually considered to deal with the subgrid scale equation: this includes the choice of the space where the subgrid scale would be defined as well as the simplifications leading to computing the subgrid scale analytically or numerically. The present study proposes to develop a two-scale variational method for the advection–diffusion–reaction equation. On the one hand, a family of weak forms is obtained by integrating by parts a fraction of the advection term. On the other hand, the solution of the subgrid scale equation is found as follows. First, a two-scale variational method is applied to the one-dimensional problem. Then, a series of approximations are assumed to solve the subgrid-space equation analytically. This allows us to devise expressions for the 'stabilization parameter' τ in the context of the VMS (two-scale) method. The proposed method is equivalent to the traditional Green's method used in the literature to solve residual-free bubbles, although it offers another point of view, as the strong form of the subgrid scale equation is solved explicitly. In addition, the authors apply the methodology to high-order elements, namely quadratic and cubic elements. The proposed model consists of assuming that the subgrid scale vanishes also on interior nodes of the element and applying the strategy used for linear elements in the segment between these interior nodes. The proposed scheme is compared with existing ones through the solution of a one-dimensional numerical example for linear, quadratic and cubic elements. In addition, the mesh convergence is checked for high-order elements through the solution of a problem with an exact solution in two dimensions. Copyright © 2008 John Wiley & Sons, Ltd. [source]
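One widely used algebraic closure for the stabilization parameter of the steady advection–diffusion–reaction equation is shown below for context. It uses generic symbols and typical coefficient values; the paper derives its own expressions, which are not reproduced here.

```latex
% Common algebraic subgrid-scale stabilization parameter (background only;
% a = advection velocity, k = diffusion, s = reaction, h = element size):
\[
  \tau \;=\; \left( \frac{c_1\,k}{h^{2}} \;+\; \frac{c_2\,|a|}{h} \;+\; s \right)^{-1},
  \qquad c_1 = 4,\; c_2 = 2 \ \text{(typical values for linear elements)} .
\]
% tau multiplies the residual of the grid-scale equation to model the effect
% of the unresolved subgrid scale.
```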
Solving limit analysis problems: an interior-point method
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 11 2005
F. Pastor
Abstract This paper exposes an interior-point method used to solve convex programming problems raised by limit analysis in mechanics. First we explain the main features of this method, describing in particular its typical iteration. Secondly, we show and study the results of its application to a concrete limit analysis problem, for a large range of sizes, and we compare them for validation with existing results and with those of linearized versions of the problem. As one of the objectives of the work, another classical problem is analysed for a Gurson material, to which linearization or conic programming does not apply. Copyright © 2005 John Wiley & Sons, Ltd. [source]