Standard Methods



Selected Abstracts


Project selection based on intellectual capital scorecards

INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 1 2005
Hennie Daniels
In this paper we present a tool for the selection of a project portfolio in knowledge-intensive organizations. Standard methods mostly focus on project selection on the basis of expected returns. In many cases other strategic factors are important, such as customer satisfaction, innovation capacity, and development of best practices. These factors should be considered in their interdependence during the process of project selection. Here the point of departure is the intellectual capital scorecard, in which the indicators are periodically measured against a target. The scores constitute the input of a programming model. From the optimal portfolio computed, clear objectives for management can be derived. The method is illustrated in an industrial case study. Copyright © 2005 John Wiley & Sons, Ltd. [source]
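
For readers who want to see the shape of such a programming model, the sketch below selects the project subset that maximizes a weighted sum of scorecard scores under a budget constraint. It is a minimal illustration, not the authors' formulation; all project names, scores, weights and the budget are hypothetical.

```python
# Minimal sketch of a scorecard-driven project-selection model (not the
# paper's actual formulation): each project carries per-indicator scores
# from the intellectual capital scorecard, and we pick the subset that
# maximizes a weighted sum of scores within a budget. Data are hypothetical.
from itertools import combinations

projects = {  # project -> (cost, {indicator: scorecard score})
    "A": (40, {"returns": 7, "customer": 5, "innovation": 3}),
    "B": (25, {"returns": 4, "customer": 8, "innovation": 6}),
    "C": (30, {"returns": 6, "customer": 3, "innovation": 8}),
    "D": (20, {"returns": 3, "customer": 6, "innovation": 5}),
}
weights = {"returns": 0.5, "customer": 0.3, "innovation": 0.2}  # strategic weights
budget = 70

def value(subset):
    return sum(weights[k] * s for p in subset for k, s in projects[p][1].items())

best = max(
    (subset for r in range(len(projects) + 1)
     for subset in combinations(projects, r)
     if sum(projects[p][0] for p in subset) <= budget),
    key=value,
)
print(best, round(value(best), 2))
```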


Smoothing Observational Data: A Philosophy and Implementation for the Health Sciences

INTERNATIONAL STATISTICAL REVIEW, Issue 1 2006
Sander Greenland
Summary Standard statistical methods (such as regression analysis) presume the data are generated by an identifiable random process, and attempt to model that process in a parsimonious fashion. In contrast, observational data in the health sciences are generated by complex, nonidentified, and largely nonrandom mechanisms, and are analyzed to form inferences on latent structures. Despite this gap between the methods and reality, most observational data analysis comprises application of standard methods, followed by narrative discussion of the problems entailed by doing so. Alternative approaches employ latent-structure models that include components for nonidentified mechanisms. Standard methods can still be useful, however, provided their modeling philosophy is modified to encourage preservation of structure, rather than achieving parsimonious description. With this modification they can be viewed as smoothing or filtering methods for separating noise from signal before the task of latent-structure modeling begins. I here give a detailed justification of this view, and a hierarchical-modeling implementation that can be carried out with popular software. Concepts are illustrated in the smoothing of a contingency table from an analysis of magnetic fields and childhood leukemia. [source]
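
The hierarchical-modeling idea can be illustrated with a minimal two-level shrinkage estimator: cell-specific log-odds are pulled toward a pooled mean in proportion to their sampling noise. This is a generic sketch of smoothing by hierarchical modeling, not Greenland's exact implementation; the table counts and prior variance below are hypothetical.

```python
# Illustrative two-level (hierarchical) shrinkage of cell-specific log-odds
# toward a common mean, one simple instance of "smoothing before
# latent-structure modeling". Not the paper's exact implementation.
import numpy as np

# Hypothetical 4-cell table: cases and controls per exposure category.
cases    = np.array([  8,  3,  5,  2], dtype=float)
controls = np.array([120, 95, 60, 40], dtype=float)

log_odds = np.log(cases / controls)
var_i = 1.0 / cases + 1.0 / controls          # approximate sampling variances
tau2 = 0.25                                   # assumed second-stage (prior) variance
mu = np.average(log_odds, weights=1.0 / (var_i + tau2))  # pooled mean

# Posterior-mean style shrinkage: noisier cells move further toward the mean.
w = tau2 / (tau2 + var_i)
smoothed = w * log_odds + (1.0 - w) * mu
print(np.round(log_odds, 3), np.round(smoothed, 3))
```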


Screening tools for depressed mood after childbirth in UK-based South Asian women: a systematic review

JOURNAL OF ADVANCED NURSING, Issue 6 2007
Soo M. Downe
Abstract Aim: This paper is a report of a systematic review to answer the question: what is the relevance, acceptability, validity and effectiveness of tools designed to screen for postnatal depressed mood for South Asian women living in the UK? Background: Standard methods to screen women for postnatal depressed mood were developed with Caucasian populations. This study reviews postnatal screening tools adapted or developed for United Kingdom-based South Asian women. Method: A structured systematic review of English-language studies published between 1980 and May 2003 was initially completed, and later updated to January 2005. The review was based on an a priori search strategy with inclusion and exclusion criteria, and the analysis included a quality assessment tool. Findings were tabulated against criteria for acceptability and effectiveness of diagnostic tools. Results: Seven papers were included in the review. None addressed all preset quality criteria. Four of the papers reported on translations of two existing tools (the Edinburgh Postnatal Depression Scale and the General Household Questionnaire). The remaining three papers reported two new tools (the Punjabi Postnatal Depression Scale and 'Doop Chaon'©). Doop Chaon is a visual tool; the other tools were based on written scales in either Bengali or Punjabi. The General Household Questionnaire did not appear to be appropriate for this population. None of the studies was rigorous enough to demonstrate generalizable sensitivity or specificity. Qualitative data indicated that women preferred face-to-face interviews to self-complete questionnaires. Conclusions: None of the tools is currently sufficiently evaluated for clinical practice. Questions are raised specifically about the use of language-based tools to measure postnatal depressed mood in this population, and about the extent to which focused interviews could be used as an alternative for specific sub-sections of population groups. [source]


Variable selection in random calibration of near-infrared instruments: ridge regression and partial least squares regression settings

JOURNAL OF CHEMOMETRICS, Issue 3 2003
Arief Gusnanto
Abstract Standard methods for calibration of near-infrared instruments, such as partial least-squares (PLS) and ridge regression (RR), typically use the full set of wavelengths in the model. In this paper we investigate the effect of variable (wavelength) selection for these two methods on the model prediction. For RR the selection is optimized with respect to the ridge parameter, the number of variables and the configuration of the variables in the model. A fast iterative computational algorithm is developed for the purpose of this optimization. For PLS the selection is optimized with respect to the number of components, the number of variables and the configuration of the variables. We use three real data sets in this study: processed milk from the market, milk from a dairy farm and milk from the production line of a milk processing factory. The quantity of interest is the concentration of fat in the milk. The observations are randomly split into estimation and validation sets. Optimization is based on the mean square prediction error computed on the validation set. The results indicate that the wavelength selection will not always give better prediction than using all of the available wavelengths. Investigation of the information in the spectra is necessary to determine whether all of them are relevant to the objective of the model. Copyright © 2003 John Wiley & Sons, Ltd. [source]
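
The validation criterion described in this abstract can be sketched as follows: fit ridge regression on an estimation set using a chosen wavelength subset, then score it by mean square prediction error on the validation set. The paper's fast iterative search over subsets is not reproduced; the data below are synthetic and the ridge parameter is fixed for illustration.

```python
# Sketch of the validation criterion used to compare wavelength subsets:
# ridge regression fitted on an estimation set, scored by mean square
# prediction error (MSPE) on a validation set. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n, p = 120, 50                       # samples x wavelengths
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[[5, 12, 30]] = [1.0, -0.5, 0.8]   # few informative bands
y = X @ beta + rng.normal(scale=0.3, size=n)

est, val = slice(0, 80), slice(80, 120)   # random split in the paper; fixed here

def ridge_mspe(cols, lam=1.0):
    Xe, Xv = X[est][:, cols], X[val][:, cols]
    b = np.linalg.solve(Xe.T @ Xe + lam * np.eye(len(cols)), Xe.T @ y[est])
    return np.mean((y[val] - Xv @ b) ** 2)

print("all wavelengths:", ridge_mspe(list(range(p))))
print("selected subset:", ridge_mspe([5, 12, 30]))
```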


Statistical and methodological issues in the analysis of complex sample survey data: Practical guidance for trauma researchers,

JOURNAL OF TRAUMATIC STRESS, Issue 5 2008
Brady T. West
Standard methods for the analysis of survey data assume that the data arise from a simple random sample of the target population. In practice, analysts of survey data sets collected from nationally representative probability samples often pay little attention to important properties of the survey data. Standard statistical software procedures do not allow analysts to take these properties of survey data into account. A failure to use more specialized procedures designed for survey data analysis can impact both simple descriptive statistics and estimation of parameters in multivariate models. In this article, the author provides trauma researchers with a practical introduction to specialized methods that have been developed for the analysis of complex sample survey data. [source]
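
A minimal numerical illustration of the point being made: on clustered data with survey weights, the design-based (cluster-robust) standard error of a weighted mean can differ substantially from the naive standard error that assumes simple random sampling. The weights, cluster structure and outcome below are hypothetical.

```python
# Design-weighted mean with a with-replacement cluster (Taylor linearization)
# variance, versus the naive variance assuming simple random sampling.
# Hypothetical weights and primary sampling units (PSUs).
import numpy as np

rng = np.random.default_rng(1)
clusters = np.repeat(np.arange(20), 15)            # 20 PSUs x 15 respondents
u = rng.normal(size=20)[clusters]                  # shared cluster effect
y = 50 + 5 * u + rng.normal(size=clusters.size)    # outcome
w = rng.uniform(0.5, 2.0, size=clusters.size)      # survey weights

ybar = np.sum(w * y) / np.sum(w)                   # weighted mean

# Linearized residual totals per cluster (with-replacement approximation).
z = w * (y - ybar)
zc = np.array([z[clusters == c].sum() for c in np.unique(clusters)])
nc = zc.size
var_cluster = nc / (nc - 1) * np.sum((zc - zc.mean()) ** 2) / np.sum(w) ** 2

var_srs = y.var(ddof=1) / y.size                   # naive SRS variance
print(f"mean={ybar:.2f}  SE(design)={var_cluster**0.5:.3f}  SE(naive)={var_srs**0.5:.3f}")
```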


The Arabic ICIQ-UI SF: An alternative language version of the English ICIQ-UI SF

NEUROUROLOGY AND URODYNAMICS, Issue 3 2006
H. Hashim
Abstract Aims: Urinary incontinence (UI) is a common and distressing condition. A variety of questionnaires are currently available to assess UI and its impact on patients' lives. However, most have not been adapted for international use. Following a systematic review of the literature and existing questionnaires, the International Consultation on Incontinence short form questionnaire (ICIQ-UI SF) was developed, and has since been translated into many languages for local use. This paper reports the development and validation of the first UI questionnaire in the Arabic language. The development of this questionnaire will facilitate the assessment of UI in both clinical practice and research in the Middle East. Methods: Translation and validation of the Arabic version of the ICIQ-UI is described. Standard methods of translation by native Arabic and English speakers (including translation and back translation) are followed. The psychometric properties of the questionnaire, including its validity, reliability and sensitivity to change, are examined. The validation of the questionnaire involved patients attending urology outpatient clinics in two Middle-Eastern countries. Results: The Arabic ICIQ-UI SF was found to be valid, reliable and responsive, indicating that the psychometric properties of the questionnaire have remained constant throughout the adaptation process. Furthermore, the findings of the psychometric testing confirm those found for the UK-English ICIQ-UI SF. Conclusions: The development of this questionnaire will allow the study of Arabic-speaking groups with UI in many countries around the world. This may act as an example to initiate the translation and validation of other patient-reported outcomes into the Arabic language, thereby enabling more multinational and cross-cultural research into diseases in given areas. Neurourol. Urodynam. © 2006 Wiley-Liss, Inc. [source]


004 Validation of in vivo and in vitro methods to measure UVA protectiveness of sunscreen

PHOTODERMATOLOGY, PHOTOIMMUNOLOGY & PHOTOMEDICINE, Issue 2 2002
C. Cole
Standard methods for measuring the sunburn protection of sunscreens (SPF) are globally established. In vivo methods of determining the UVA protectiveness of sunscreens have been reduced to either a Persistent Pigment Darkening (PPD) or a Protection Factor A (PFA; either persistent pigment darkening or erythema endpoints) test protocol. Both of these techniques require human exposure to UVA radiation, which can be time consuming and does not benefit the human subject. Validated methodologies that would minimize the UVA exposure, or could be performed in vitro, would simplify the determination of UVA protectiveness and assist product optimization. Diffuse reflectance spectroscopy of sunscreens on human skin was used to evaluate a series of seven model sunscreen systems that had previously been evaluated in vivo by both PPD and PFA testing. The values found with this technique correlated highly with the in vivo test results, with 1:1 correspondence of protection values. Separately, an in vitro test model was assessed on the same model sunscreens. Sunscreen was applied to roughened-surface quartz plates, and the absorbance of the sunscreens was measured before and after UV exposure. The absorbance was mathematically forced to fit the in vivo SPF value, and the UVA protectiveness was calculated using both erythema and pigment-darkening action spectra. The in vitro predictions of UVA protectiveness were highly correlated with the in vivo PPD and PFA values. It was determined that preirradiation of the sunscreen samples is needed to accurately predict the protection provided by sunscreens that are not photostable. Both of these techniques provide new ways to accurately predict sunscreen UVA protectiveness. [source]
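
The general form of such an in vitro calculation weights the UV transmitted through the sunscreen film by a biological action spectrum. The sketch below computes a UVA protection factor this way; both spectra are invented placeholders, and the study's step of rescaling absorbance to match the in vivo SPF is not reproduced.

```python
# Generic form of the in vitro protection-factor calculation: an action
# spectrum E(lambda) weights the UV transmitted through the sunscreen film,
# T(lambda) = 10**(-A(lambda)). Both spectra here are hypothetical.
import numpy as np

wl = np.arange(320, 401, 5)                            # UVA wavelengths, nm (5 nm steps)
action = np.exp(-(wl - 340) ** 2 / 800.0)              # hypothetical PPD-like action spectrum
absorbance = 1.2 * np.exp(-(wl - 350) ** 2 / 2000.0)   # hypothetical film absorbance

transmission = 10.0 ** (-absorbance)
# Uniform wavelength spacing, so the step width cancels in the ratio.
upf = action.sum() / (action * transmission).sum()
print(f"UVA protection factor ~ {upf:.2f}")
```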


Artificial intelligence advancements applied in off-the-shelf controllers

PROCESS SAFETY PROGRESS, Issue 2 2002
Edward M. Marszal P.E.
Since the earliest process units were built, CPI engineers have employed artificial intelligence to prevent losses. The expanding use of computer-based systems for process control has allowed the amount of intelligence applied in these expert systems to drastically increase. Standard methods for performing Expert System tasks are being formalized by numerous researchers in industry and academia. Work products from these groups include designs that present process hazards knowledge in a structured, hierarchical, and modular manner. Advancements in programmable logic controller (PLC) technology have created systems with substantial computing power that are robust and fault tolerant enough to be used in safety critical applications. In addition, IEC 1131-3 standardized the programming languages available in virtually every new controller. The function block language defined in IEC 1131-3 is particularly well suited to performing modular tasks, which makes it an ideal platform for representing knowledge. This paper begins by describing some of the advancements in knowledge-based systems for loss prevention applications. It then explores how standard IEC 1131-3 programming techniques can be used to build function blocks that represent knowledge of the hazards posed by equipment items. The paper goes on to develop a sample function block that represents the hazards of a pressure vessel, using knowledge developed in the API 14-C standard. [source]


Flow cytometry-assisted purification and proteomic analysis of the corticotrope dense-core secretory granules

PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 18 2008
Daniel J. Gauthier
Abstract The field of organellar proteomics has emerged as an attempt to minimize the complexity of the proteomics data obtained from whole cell and tissue extracts while maximizing the resolution on the protein composition of a single subcellular compartment. Standard methods involve lengthy density-gradient and/or immunoaffinity purification steps followed by extraction, 1-DE or 2-DE, gel staining, in-gel tryptic digestion, and protein identification by MS. In this paper, we present an alternative approach to purify subcellular organelles containing a fluorescent reporter molecule. The gel-free procedure involves fluorescence-assisted sorting of the secretory granules followed by gentle extraction in a buffer compatible with tryptic digestion and MS. Once the subcellular organelle is labeled, this procedure can be done in a single day, requires no major modification to any instrumentation and can be readily adapted to the study of other organelles. When applied to corticotrope secretory granules, it led to a highly enriched granular fraction from which numerous proteins could be identified through MS. [source]


POST-CARTEL PRICING DURING LITIGATION

THE JOURNAL OF INDUSTRIAL ECONOMICS, Issue 4 2004
Joseph E. Harrington Jr.
Standard methods in the U.S. for calculating antitrust damages in price-fixing cases are shown to create a strategic incentive for firms to price above the non-collusive price after the cartel has been dissolved. This results in an overestimate of the but-for price and an underestimate of the level of damages. The extent of this upward bias in the but-for price is greater, the longer the cartel was in place and the more concentrated the industry. [source]
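
The standard "before-and-after" damages calculation the paper critiques can be sketched in a few lines: the post-cartel mean price serves as the but-for benchmark, so any strategic post-cartel price elevation inflates the benchmark and deflates measured damages. All prices, quantities and durations below are made up.

```python
# The before-and-after damages estimate: post-cartel prices serve as the
# competitive benchmark (the but-for price), so strategically elevated
# post-cartel pricing deflates measured damages. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
competitive, cartel_markup, post_markup = 100.0, 20.0, 8.0

p_cartel = competitive + cartel_markup + rng.normal(0, 2, 36)  # 36 cartel months
p_post   = competitive + post_markup   + rng.normal(0, 2, 24)  # elevated post-cartel

but_for = p_post.mean()              # benchmark taken from the post-cartel period
overcharge = p_cartel.mean() - but_for
true_overcharge = cartel_markup      # overcharge relative to the competitive price
qty = 1000                           # units sold per month during the cartel

print(f"estimated damages: {overcharge * qty * 36:,.0f}")
print(f"'true' damages:    {true_overcharge * qty * 36:,.0f}")
```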


Advanced tilt correction from flow distortion effects on turbulent CO2 fluxes in complex environments using large eddy simulation

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 643 2009
F. Griessbaum
Abstract Measurement of the turbulent fluxes of gases, momentum and heat can be biased by obstacles such as buildings or instrument platforms distorting the flow of air to the flux instruments. Standard methods have long been used to account for non-horizontal mean flows. Here we demonstrate a novel approach to correct for the effects of flow distortion which combines numerical flow modelling with eddy covariance measurements of the fluxes. This approach applies a flow distortion correction to the data prior to the application of the standard planar-fit and double-rotation methods. This new direction-dependent flow correction allows the determination of the correct orthogonal wind vector components and hence the vertical turbulent fluxes. We applied the method to a 10 Hz dataset of 3D wind components, temperature, and the concentrations of carbon dioxide and water vapour, as measured on top of a military tower above the city of Münster in northwest Germany during spring and summer 2007. Significant differences appeared between the fluxes that were calculated with the standard rotation methods alone and those that underwent flow distortion correction prior to the application of the rotation methods. The highest deviations of 27% were obtained for the momentum flux. Pronounced differences of 15% and 8% were found for the diurnal net fluxes of carbon dioxide and water vapour, respectively. The flow distortion correction for the carbon dioxide fluxes yielded the same magnitude as the WPL (Webb-Pearman-Leuning) correction for density fluctuations. Copyright © 2009 Royal Meteorological Society [source]
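
The standard double-rotation step referred to above can be sketched directly: rotate the sonic anemometer components so the mean crosswind and mean vertical wind vanish, then form the eddy covariance. The LES-based, direction-dependent flow-distortion correction itself requires the numerical flow model and is not reproduced; the data below are synthetic.

```python
# Standard double rotation of sonic-anemometer wind components (the step the
# flow-distortion correction precedes): a yaw rotation puts the mean wind on
# the x-axis, then a pitch rotation zeroes the mean vertical wind.
import numpy as np

def double_rotation(u, v, w):
    theta = np.arctan2(v.mean(), u.mean())        # yaw angle
    u1 = u * np.cos(theta) + v * np.sin(theta)
    v1 = -u * np.sin(theta) + v * np.cos(theta)   # mean(v1) = 0 by construction
    phi = np.arctan2(w.mean(), u1.mean())         # pitch angle
    u2 = u1 * np.cos(phi) + w * np.sin(phi)
    w2 = -u1 * np.sin(phi) + w * np.cos(phi)      # mean(w2) = 0 by construction
    return u2, v1, w2

rng = np.random.default_rng(3)
n = 18000                                         # 30 min of 10 Hz data
u = 3.0 + rng.normal(0, 0.5, n)
v = 1.0 + rng.normal(0, 0.5, n)
w = 0.1 + rng.normal(0, 0.2, n)
c = 380 + rng.normal(0, 2, n)                     # hypothetical CO2 mixing ratio

u2, v1, w2 = double_rotation(u, v, w)
flux = np.mean((w2 - w2.mean()) * (c - c.mean()))  # eddy-covariance CO2 flux
print(f"mean v={v1.mean():.2e}, mean w={w2.mean():.2e}, w'c'={flux:.4f}")
```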


Aboriginal deaths in Western Australia: 1985-89 and 1990-94

AUSTRALIAN AND NEW ZEALAND JOURNAL OF PUBLIC HEALTH, Issue 2 2000
Michael Gracey
Objective: To examine death data for Aboriginal and non-Aboriginal persons in Western Australia (WA) in 1985-89 and 1990-94. Methods: Population estimates were provided by the Health Information Centre of the WA Health Department based on data from the Australian Bureau of Statistics (ABS). Death data came from the WA Registrar-General's Office. Standard methods were used to obtain rates and levels of significance. Results: Main causes of deaths among Aboriginal males in 1990-94 were circulatory conditions, respiratory, injury and poisoning, neoplasms and endocrine diseases; in Aboriginal females they were circulatory, neoplasms, endocrine diseases, respiratory diseases, and injury and poisoning. From 1985-89 to 1990-94, the Aboriginal male all-cause age-standardised death rate fell 3% (ns) while the non-Aboriginal male rate fell 11% (p<0.05). The Aboriginal female all-cause death rate rose 11% (ns) while the non-Aboriginal rate fell 5% (p<0.05). The all-cause death rate ratio (Aboriginal:non-Aboriginal) changed from 2.4 to 2.6 (males) and 2.5 to 2.9 (females). There was a major increase in deaths from endocrine diseases among Aborigines and non-Aborigines. This increase was proportionally much greater among Aborigines. In non-Aborigines there was a significant decrease in deaths from circulatory diseases (mainly ischaemic heart disease); this did not occur among Aborigines. Conclusions: Over the study period, Aboriginal health standards, as reflected by death rates, apparently worsened relative to non-Aboriginal standards. Implications: Better health promotion, disease prevention and disease care are required to help achieve acceptable health standards among Aboriginal peoples. [source]
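
The age-standardised rates compared here are conventionally obtained by direct standardization: stratum-specific rates weighted by a fixed reference population. A minimal sketch, with hypothetical strata and reference weights:

```python
# Minimal direct age standardization, the usual route to the age-standardised
# death rates compared in the abstract. Strata and weights are hypothetical.
import numpy as np

deaths     = np.array([12, 25, 48, 90])            # deaths per age band
population = np.array([8000, 6000, 3000, 1000])    # person-years per age band
reference  = np.array([0.40, 0.30, 0.20, 0.10])    # standard population weights

age_specific = deaths / population
asr = np.sum(age_specific * reference)             # age-standardised rate
print(f"age-standardised death rate: {asr * 100000:.0f} per 100,000")
```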


Bayesian Distributed Lag Models: Estimating Effects of Particulate Matter Air Pollution on Daily Mortality

BIOMETRICS, Issue 1 2009
L. J. Welty
Summary A distributed lag model (DLagM) is a regression model that includes lagged exposure variables as covariates; its corresponding distributed lag (DL) function describes the relationship between the lag and the coefficient of the lagged exposure variable. DLagMs have recently been used in environmental epidemiology for quantifying the cumulative effects of weather and air pollution on mortality and morbidity. Standard methods for formulating DLagMs include unconstrained, polynomial, and penalized spline DLagMs. These methods may fail to take full advantage of prior information about the shape of the DL function for environmental exposures, or for any other exposure with effects that are believed to smoothly approach zero as lag increases, and are therefore at risk of producing suboptimal estimates. In this article, we propose a Bayesian DLagM (BDLagM) that incorporates prior knowledge about the shape of the DL function and also allows the degree of smoothness of the DL function to be estimated from the data. We apply our BDLagM to its motivating data from the National Morbidity, Mortality, and Air Pollution Study to estimate the short-term health effects of particulate matter air pollution on mortality from 1987 to 2000 for Chicago, Illinois. In a simulation study, we compare our Bayesian approach with alternative methods that use unconstrained, polynomial, and penalized spline DLagMs. We also illustrate the connection between BDLagMs and penalized spline DLagMs. Software for fitting BDLagM models and the data used in this article are available online. [source]
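
Two of the standard comparators mentioned, the unconstrained and polynomial DLagMs, can be sketched in miniature: build a matrix of lagged exposures and fit by least squares, with the polynomial version constraining the lag coefficients to a low-degree polynomial in lag number. A real analysis would use Poisson regression with confounder control; the Bayesian smoothness prior is not shown, and the data below are synthetic.

```python
# Unconstrained and polynomial distributed lag models fitted by OLS on
# synthetic data (illustration only; mortality analyses use Poisson models).
import numpy as np

rng = np.random.default_rng(4)
T, L = 1000, 7                                   # days, maximum lag
pm = rng.gamma(4, 5, T + L)                      # synthetic PM series
true_dl = np.array([0.05, 0.04, 0.03, 0.02, 0.01, 0.005, 0.0, 0.0])

X = np.column_stack([pm[L - k : T + L - k] for k in range(L + 1)])  # lag matrix
y = X @ true_dl + rng.normal(0, 1.0, T)

beta_unc = np.linalg.lstsq(X, y, rcond=None)[0]  # unconstrained DL estimates

# Polynomial DLagM: constrain beta_k = sum_j a_j * k**j (degree 2 here).
P = np.vander(np.arange(L + 1), 3, increasing=True)
a = np.linalg.lstsq(X @ P, y, rcond=None)[0]
beta_poly = P @ a

print(np.round(beta_unc, 3))
print(np.round(beta_poly, 3))
```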


Principal Stratification Designs to Estimate Input Data Missing Due to Death

BIOMETRICS, Issue 3 2007
Constantine E. Frangakis
Summary We consider studies of cohorts of individuals after a critical event, such as an injury, with the following characteristics. First, the studies are designed to measure "input" variables, which describe the period before the critical event, and to characterize the distribution of the input variables in the cohort. Second, the studies are designed to measure "output" variables, primarily mortality after the critical event, and to characterize the predictive (conditional) distribution of mortality given the input variables in the cohort. Such studies often possess the complication that the input data are missing for those who die shortly after the critical event because the data collection takes place after the event. Standard methods of dealing with the missing inputs, such as imputation or weighting methods based on an assumption of ignorable missingness, are known to be generally invalid when the missingness of inputs is nonignorable, that is, when the distribution of the inputs is different between those who die and those who live. To address this issue, we propose a novel design that obtains and uses information on an additional key variable: a treatment or externally controlled variable which, if set at its "effective" level, could have prevented the death of those who died. We show that the new design can be used to draw valid inferences for the marginal distribution of inputs in the entire cohort, and for the conditional distribution of mortality given the inputs, also in the entire cohort, even under nonignorable missingness. The crucial framework that we use is principal stratification based on the potential outcomes, here mortality under both levels of treatment. We also show using illustrative preliminary injury data that our approach can reveal results that are more reasonable than the results of standard methods, in relatively dramatic ways. Thus, our approach suggests that the routine collection of data on variables that could be used as possible treatments in such studies of inputs and mortality should become common. [source]


Continuous Support for Women During Childbirth

BIRTH, Issue 1 2005
E.D. Hodnett
ABSTRACT Background: Historically, women have been attended and supported by other women during labour. However, in recent decades in hospitals worldwide, continuous support during labour has become the exception rather than the routine. Concerns about the consequent dehumanization of women's birth experiences have led to calls for a return to continuous support by women for women during labour. Objectives: Primary: to assess the effects, on mothers and their babies, of continuous, one-to-one intrapartum support compared with usual care. Secondary: to determine whether the effects of continuous support are influenced by: (1) routine practices and policies in the birth environment that may affect a woman's autonomy, freedom of movement, and ability to cope with labour; (2) whether the caregiver is a member of the staff of the institution; and (3) whether the continuous support begins early or later in labour. Search strategy: We searched the Cochrane Pregnancy and Childbirth Group trials register (30 January 2003) and the Cochrane Central Register of Controlled Trials (The Cochrane Library, Issue 1, 2003). Selection criteria: All published and unpublished randomized controlled trials comparing continuous support during labour with usual care. Data collection and analysis: Standard methods of the Cochrane Collaboration Pregnancy and Childbirth Group were used. All authors participated in evaluation of methodological quality. Data extraction was undertaken independently by one author and a research assistant. Additional information was sought from the trial authors. Results are presented using relative risk for categorical data and weighted mean difference for continuous data. Main results: Fifteen trials involving 12,791 women are included. Primary comparison: Women who had continuous intrapartum support were less likely to have intrapartum analgesia, operative birth, or to report dissatisfaction with their childbirth experiences. Subgroup analyses: In general, continuous intrapartum support was associated with greater benefits when the provider was not a member of the hospital staff, when it began early in labour, and in settings in which epidural analgesia was not routinely available. Reviewers' conclusions: All women should have support throughout labour and birth. Citation: Hodnett ED, Gates S, Hofmeyr GJ, Sakala C. Continuous support for women during childbirth (Cochrane Review). In: The Cochrane Library, Issue 3, 2004. Chichester, UK: John Wiley & Sons, Ltd. The preceding report is an abstract of a regularly updated, systematic review prepared and maintained by the Cochrane Collaboration. The full text of the review is available in The Cochrane Library (ISSN 1464-780X). The Cochrane Library is designed and produced by Update Software Ltd, and published by John Wiley & Sons, Ltd. [source]


Single-cell image analysis to assess ABC-transporter-mediated efflux in highly purified hematopoietic progenitors

CYTOMETRY, Issue 4 2002
H.G.P. Raaijmakers
Abstract Background Normal and malignant hematopoietic stem cells are characterized by their capacity to actively extrude fluorescent dyes. The contribution of different ATP-binding cassette (ABC) transporters to this phenomenon is largely unknown due to the small stem cell numbers limiting the use of standard methods to assess functional efflux. Methods We used epifluorescence microscopy (EFM) in combination with single-cell image analysis to study ABC-transporter-mediated efflux in highly purified, viable, CD34+CD38- cells sorted on an adhesive biolayer. P-glycoprotein- and multidrug-resistant protein (MRP)-mediated efflux were quantitated using fluorescent substrates (rhodamine-123 and calcein acetoxymethyl ester [calcein-AM]) and specific inhibitors (verapamil and probenecid, respectively). Results The feasibility, sensitivity, and reproducibility of rhodamine-123 efflux quantitation using single-cell EFM was shown in cell lines and compared with standard flow cytometric assessment. P-glycoprotein-mediated transport was higher in CD34+CD38- cells than in more differentiated progenitors (mean efflux index = 2.24 ± 0.35 and 1.14 ± 0.11, respectively; P = 0.01). P-glycoprotein-mediated transport was the main determinant of the rhodamine "dull" phenotype of these cells. In addition, significant MRP-mediated efflux was demonstrated in CD34+CD38- and CD38+ cells (mean efflux index = 1.42 ± 0.19 and 1.28 ± 0.18, respectively). Conclusion The described method is a valuable tool for assessing ABC-transporter-mediated efflux in highly purified single cells. Both P-glycoprotein- and MRP-mediated efflux are present in human CD34+CD38- hematopoietic stem cells. Cytometry 49:135-142, 2002. © 2002 Wiley-Liss, Inc. [source]


Reliability of Computerized Emergency Triage

ACADEMIC EMERGENCY MEDICINE, Issue 3 2006
Sandy L. Dong MD
Objectives: Emergency department (ED) triage prioritizes patients based on urgency of care. This study compared agreement between two blinded, independent users of a Web-based triage tool (eTRIAGE) and examined the effects of ED crowding on triage reliability. Methods: Consecutive patients presenting to a large, urban, tertiary care ED were assessed by the duty triage nurse and an independent study nurse, both using eTRIAGE. Triage score distribution and agreement are reported. The study nurse collected data on ED activity, and agreement during different levels of ED crowding is reported. Two methods of interrater agreement were used: the linear-weighted κ and the quadratic-weighted κ. Results: A total of 575 patients were assessed over nine weeks, and complete data were available for 569 patients (99.0%). Agreement between the two nurses was moderate if using the linear κ (weighted κ = 0.52; 95% confidence interval = 0.46 to 0.57) and good if using the quadratic κ (weighted κ = 0.66; 95% confidence interval = 0.60 to 0.71). ED overcrowding data were available for 353 patients (62.0%). Agreement did not significantly differ with respect to periods of ambulance diversion, number of admitted inpatients occupying stretchers, number of patients in the waiting room, number of patients registered in two hours, or nurse perception of busyness. Conclusions: This study demonstrated different agreement depending on the method used to calculate interrater reliability. Using the standard methods, it found good agreement between two independent users of a computerized triage tool. The level of agreement was not affected by various measures of ED crowding. [source]
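
The two agreement statistics compared in this study differ only in their disagreement weights: linear in the score distance for the linear-weighted κ, squared for the quadratic-weighted κ. A minimal implementation, applied to a hypothetical confusion matrix of triage scores:

```python
# Linear- vs quadratic-weighted kappa from a rater-by-rater confusion matrix.
# The 5x5 matrix of triage scores below is hypothetical.
import numpy as np

def weighted_kappa(confusion, scheme="linear"):
    k = confusion.shape[0]
    i, j = np.indices((k, k))
    d = np.abs(i - j) / (k - 1)
    w = d if scheme == "linear" else d ** 2      # disagreement weights
    obs = confusion / confusion.sum()            # observed proportions
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))  # chance-expected proportions
    return 1.0 - (w * obs).sum() / (w * exp).sum()

conf = np.array([
    [30,  8,  2,  0,  0],
    [ 7, 60, 15,  3,  0],
    [ 2, 14, 80, 12,  1],
    [ 0,  4, 18, 55,  6],
    [ 0,  0,  2,  8, 25],
])
print(f"linear kappa:    {weighted_kappa(conf, 'linear'):.2f}")
print(f"quadratic kappa: {weighted_kappa(conf, 'quadratic'):.2f}")
```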


PROPHYLACTIC PANCREAS STENTING FOLLOWED BY NEEDLE-KNIFE FISTULOTOMY IN PATIENTS WITH SPHINCTER OF ODDI DYSFUNCTION AND DIFFICULT CANNULATION: NEW METHOD TO PREVENT POST-ERCP PANCREATITIS

DIGESTIVE ENDOSCOPY, Issue 1 2009
László Madácsy
Introduction: The aim of the present study was to reduce post-endoscopic retrograde cholangiopancreatography (ERCP) complications with a combination of early needle-knife access fistulotomy and prophylactic pancreatic stenting in selected high-risk sphincter of Oddi dysfunction (SOD) patients with difficult cannulation. Methods: Prophylactic pancreatic stent insertion was attempted in 22 consecutive patients with definite SOD and difficult cannulation. After 10 min of failed selective common bile duct cannulation, but repeated (>5×) pancreatic duct contrast filling, a prophylactic small-calibre (3-5 Fr) pancreatic stent was inserted, followed by fistulotomy with a standard needle-knife and then a standard complete biliary sphincterotomy. The success and complication rates were compared retrospectively with a cohort of 35 patients in which we persisted with the application of standard methods of cannulation without pre-cutting methods. Results: Prophylactic pancreatic stenting followed by needle-knife fistulotomy was successfully carried out in all 22 consecutive patients, and selective biliary cannulation and complete endoscopic sphincterotomy were achieved in all but two cases. In this group, not a single case of post-ERCP pancreatitis was observed, in contrast with the control group, which had three mild, 10 moderate and two severe post-ERCP pancreatitis cases. The frequency of post-ERCP pancreatitis was significantly different (0% versus 43%), as were the post-procedure (24 h mean) amylase levels (206 U/L versus 1959 U/L, respectively). Conclusions: In selected, high-risk SOD patients, early prophylactic pancreas stent insertion followed by needle-knife fistulotomy seems a safe and effective procedure with no or only minimal risk of post-ERCP pancreatitis. However, prospective, randomized studies are awaited to lend support to our approach. [source]


Stochastic matrix models for conservation and management: a comparative review of methods

ECOLOGY LETTERS, Issue 3 2001
John Fieberg
Stochastic matrix models are frequently used by conservation biologists to measure the viability of species and to explore various management actions. Models are typically parameterized using two or more sets of estimated transition rates between age/size/stage classes. While standard methods exist for analyzing a single set of transition rates, a variety of methods have been employed to analyze multiple sets of transition rates. We review applications of stochastic matrix models to problems in conservation and use simulation studies to compare the performance of different analytic methods currently in use. We find that model conclusions are likely to be robust to the choice of parametric distribution used to model vital rate fluctuations over time. However, conclusions can be highly sensitive to the within-year correlation structure among vital rates, and therefore we suggest using analytical methods that provide a means of conducting a sensitivity analysis with respect to correlation parameters. Our simulation results also suggest that the precision of population viability estimates can be improved by using matrix models that incorporate environmental covariates in conjunction with experiments to estimate transition rates under a range of environmental conditions. [source]
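
A minimal example of the kind of stochastic matrix model reviewed: vital rates are redrawn each year from assumed distributions, and viability is summarized by the stochastic log growth rate and a quasi-extinction probability. The two-stage life cycle and all distributions below are hypothetical.

```python
# Minimal stochastic stage-structured projection: juvenile/adult survival and
# fecundity are redrawn each year; viability is summarized by the stochastic
# log growth rate and the probability of falling below a quasi-extinction
# threshold. Life cycle and distributions are hypothetical.
import numpy as np

rng = np.random.default_rng(5)

def run(years=100, n0=(50.0, 10.0)):
    n = np.array(n0)
    for _ in range(years):
        s_juv = rng.beta(20, 60)                     # juvenile survival, mean ~0.25
        s_ad  = rng.beta(60, 20)                     # adult survival, mean ~0.75
        fec   = rng.lognormal(np.log(1.2), 0.3)      # per-adult fecundity
        A = np.array([[0.0,   fec ],
                      [s_juv, s_ad]])
        n = A @ n
    return n.sum()

finals = np.array([run() for _ in range(2000)])
log_lambda_s = np.mean(np.log(finals / 60.0)) / 100  # stochastic log growth rate
p_quasi_ext = np.mean(finals < 2.0)                  # quasi-extinction by year 100
print(f"log lambda_s ~ {log_lambda_s:.4f}, P(quasi-extinction) ~ {p_quasi_ext:.3f}")
```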


System peaks in micellar electrophoresis: I. Utilization of system peaks for determination of critical micelle concentration

ELECTROPHORESIS, Issue 5 2008
Jana Lokajová
Abstract A new way to determine the critical micelle concentration (CMC), based on the mobilities of system peaks, is presented. The general approach to CMC determination is based on a change of slope, or on finding the inflection point, in the plot of a physical property of the solution as a function of surfactant concentration. Determination of the CMC by system peaks in CE instead exploits a "jump", rather than a continuous change, in the measured quantity. This phenomenon was predicted by the program PeakMaster, which was modified for the simulation of micellar systems. The simulated steep change in the mobilities of the anionic system peaks, which marks the CMC value, was verified experimentally in a set of measurements in which the concentration of the surfactant was varied while the ionic strength was kept constant. The experimental work fully confirmed our model. A comparative electric current measurement was also carried out. The proposed method appears to offer easier CMC determination than the standard methods. [source]


A survey on two years of medication regulation in horse races in Iran

EQUINE VETERINARY JOURNAL, Issue 2 2010
S. LOTFOLLAHZADEH
Summary Reasons for performing study: The present survey evaluated cases of prohibited substance use in the first 2 years of medication regulation in horseracing in Iran, so that the impact of these regulations on the level of positive cases over the period could be assessed. Objectives: To determine the prevalence of positive tests for prohibited substances in horse races during 2 years of a drug-testing programme in Iran. Methods: A total of 656 horses that finished first or second in races were tested during the 2 year study. In the first year 354 horses (209 males and 145 females) and in the second year 302 horses (155 males and 147 females) were tested. Over the 2 years, 306 positives were found. Urine samples were taken from candidate horses and sent to the Central Doping Laboratory. Blood samples were taken from those horses where a urine sample could not be obtained within one hour. Detection and measurement of prohibited substances were carried out by ELISA, GC and HPLC using standard methods. Results: In the first year, 32% of males were positive for prohibited substances, which was not significantly different from the percentage of females (25.5%). In the second year, of the 302 horses tested for prohibited substances, 33.5% of males were positive, again similar to females (33.3%). Almost 83% of the horses that tested positive for prohibited substances in the first year did so once, 15% tested positive twice and 2% tested positive 3 times. In the second year, 78% tested positive once, 15% twice and 7% 3 times. Morphine was the most frequently detected prohibited substance, found 42 times during the survey, followed by caffeine and phenylbutazone. Morphine was also the drug most often used in combination with other drugs in both years. Conclusions: Morphine and caffeine were the most common prohibited substances found. As these substances occur in the environment and in foodstuffs, their presence in the samples may be due to unintentional feeding of contaminated materials (bread, hay and chocolate). [source]


Circulating mononuclear cells nuclear factor-kappa B activity, plasma xanthine oxidase, and low grade inflammatory markers in adult patients with familial hypercholesterolaemia

EUROPEAN JOURNAL OF CLINICAL INVESTIGATION, Issue 2 2010
J. T. Real
Eur J Clin Invest 2010; 40 (2): 89-94 Abstract Background: Few data are available on circulating mononuclear cell nuclear factor-kappa B (NF-kB) activity and plasma xanthine oxidase (XO) activity in heterozygous familial hypercholesterolaemia (FH). The goal of the study was to analyse circulating mononuclear cell NF-kB and plasma XO activities in FH patients. Materials and methods: Thirty FH index patients and 30 normoglycaemic, normocholesterolaemic controls matched by age, gender, body mass index, abdominal circumference and homeostasis model assessment index were studied. Plasma XO and inflammatory markers were measured by standard methods. NF-kB was assayed in circulating mononuclear cells. Results: Familial hypercholesterolaemia patients showed significantly higher NF-kB (75·0 ± 20·7 vs. 42·7 ± 16·8 relative luminescence units) and XO (0·44 ± 0·13 vs. 0·32 ± 0·09 mU/mL) activities than controls. In addition, interleukin-1, interleukin-6, high-sensitivity C-reactive protein (hsCRP) and oxidized LDL (LDL-ox) were also significantly higher in FH patients. In the total group (FH and controls), XO was significantly associated with LDL-cholesterol (LDL-C), apolipoprotein B (apoB), NF-kB and hsCRP, and NF-kB activity was significantly associated with XO, hsCRP, LDL-ox, LDL-C and apoB plasma values. Using multiple regression analysis, XO was independently associated with hsCRP and NF-kB, and NF-kB activity in circulating mononuclear cells was independently associated with apoB and LDL-ox plasma values. Conclusion: Familial hypercholesterolaemia patients show increased activities of NF-kB and XO, and higher values of low-grade inflammatory markers related to atherosclerosis. NF-kB activity was independently associated with apoB plasma values. These data could explain in part the high cardiovascular disease risk present in these patients. [source]


Technical basis on structural fire resistance design in building standards law of Japan

FIRE AND MATERIALS, Issue 2-4 2004
Kazunori Harada
Abstract A structural fire-resistance design method came into effect with the revision of Japan's building code (the Building Standards Law of Japan) in June 2001. The method includes standard procedures to calculate (1) the fire exposure of structural elements, (2) the temperature rise of steel and RC elements during fire exposure and (3) structural end points, such as the ultimate steel temperature for buckling of columns, bending failure of beams and so on. This paper discusses the technical basis for the design methods, with particular focus on steel-framed buildings. The values calculated by the design equations were compared with experimental values in order to examine the implied redundancies. In the final stage, all the redundancies were combined by the Monte-Carlo method and the first-order moment method (AFORM). The target safety index and corresponding partial safety factors are discussed. Copyright © 2004 John Wiley & Sons, Ltd. [source]
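
The final-stage reliability calculation described can be sketched as Monte-Carlo propagation of scattered quantities through a limit state, with the safety index obtained as β = −Φ⁻¹(Pf). The limit state (predicted steel temperature against ultimate temperature) and both distributions below are hypothetical placeholders.

```python
# Sketch of a final-stage reliability calculation: Monte-Carlo propagation of
# scattered design quantities through a limit state, then the safety index
# beta = -Phi^(-1)(Pf). Limit state and distributions are hypothetical.
from statistics import NormalDist
import numpy as np

rng = np.random.default_rng(6)
N = 200_000
t_steel    = rng.normal(520, 60, N)   # predicted steel temperature, deg C
t_ultimate = rng.normal(650, 40, N)   # ultimate (buckling/bending) temperature

pf = np.mean(t_steel >= t_ultimate)   # probability of failure
beta = -NormalDist().inv_cdf(pf)      # safety index
print(f"Pf = {pf:.4f}, beta = {beta:.2f}")
```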


Evaluating the physiological and physical consequences of capture on post-release survivorship in large pelagic fishes

FISHERIES MANAGEMENT & ECOLOGY, Issue 2 2007
G. B. SKOMAL
Abstract, Sharks, tunas and billfishes are fished extensively throughout the world. Domestic and international management measures (quotas, minimum sizes, bag limits) mandate release of a large, yet poorly quantified, number of these fishes annually. Post-release survivorship is difficult to evaluate, because standard methods are not applicable to large oceanic fishes. This paper presents information on the current approaches to characterising capture stress and survivorship in sharks, tunas and marlins. To assess mortality associated with capture stress, researchers must examine the cumulative impacts of physical trauma and physiological stress. Physical trauma, manifested as external and internal tissue and organ damage, is caused by fishing gear and handling. Gross examination and histopathological sampling have been used to assess physical trauma and to infer post-release survivorship. Exhaustive anaerobic muscular activity and time out of water cause physiological stress, which has been quantified in these fishes through the analyses of blood chemistry. Conventional, acoustic and archival tagging have been used to assess post-release survivorship in these species. Future studies relating capture stress and post-release survivorship could yield information that helps fishermen increase survivorship when practicing catch and release. [source]


Assessment of European streams with diatoms, macrophytes, macroinvertebrates and fish: a comparative metric-based analysis of organism response to stress

FRESHWATER BIOLOGY, Issue 9 2006
DANIEL HERING
Summary 1. Periphytic diatoms, macrophytes, benthic macroinvertebrates and fish were sampled with standard methods in 185 streams in nine European countries to compare their response to degradation. Streams were classified into two main stream type groups (i.e. lowland, mountain streams); in addition, the lowland streams were grouped into four more specific stream types. 2. Principal components analysis with altogether 43 environmental parameters was used to construct complex stressor gradients for physical-chemical, hydromorphological and land use data. About 30 metrics were calculated for each sample and organism group. Metric responses to different stress types were analysed by Spearman rank correlation. 3. All four organism groups showed significant response to eutrophication/organic pollution gradients. Generally, diatom metrics were most strongly correlated to eutrophication gradients (85% and 89% of the diatom metrics tested correlated significantly in mountain and lowland streams, respectively), followed by invertebrate metrics (91% and 59%). 4. Responses of the four organism groups to other gradients were less strong; all organism groups responded to varying degrees to land use changes, hydromorphological degradation on the microhabitat scale and general degradation gradients, while the response to hydromorphological gradients on the reach scale was mainly limited to benthic macroinvertebrates (50% and 44% of the metrics tested correlated significantly in mountain and lowland streams, respectively) and fish (29% and 47%). 5. Fish and macrophyte metrics generally showed a poor response to degradation gradients in mountain streams and a strong response in lowland streams. 6. General recommendations on European bioassessment of streams were derived from the results. [source]


Gamma regression improves Haseman-Elston and variance components linkage analysis for sib-pairs

GENETIC EPIDEMIOLOGY, Issue 2 2004
Mathew J. Barber
Abstract Existing standard methods of linkage analysis for quantitative phenotypes rest on the assumptions of either ordinary least squares (Haseman and Elston [1972] Behav. Genet. 2:3-19; Sham and Purcell [2001] Am. J. Hum. Genet. 68:1527-1532) or phenotypic normality (Almasy and Blangero [1998] Am. J. Hum. Genet. 68:1198-1199; Kruglyak and Lander [1995] Am. J. Hum. Genet. 57:439-454). The limitations of both these methods lie in the specification of the error distribution in the respective regression analyses. In ordinary least squares regression, the residual distribution is misspecified as being independent of the mean level. Using variance components and assuming phenotypic normality, the dependency on the mean level is correctly specified, but the remaining residual coefficient of variation is constrained a priori. Here it is shown that these limitations can be addressed (for a sample of unselected sib-pairs) using a generalized linear model based on the gamma distribution, which can be readily implemented in any standard statistical software package. The generalized linear model approach can emulate variance components when phenotypic multivariate normality is assumed (Almasy and Blangero [1998] Am. J. Hum. Genet. 68:1198-1211) and is therefore more powerful than ordinary least squares, but has the added advantage of being robust to deviations from multivariate normality and provides (often overlooked) model-fit diagnostics for linkage analysis. Genet Epidemiol 26:97-107, 2004. © 2004 Wiley-Liss, Inc. [source]
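
The proposed gamma-regression approach can be sketched in a Haseman-Elston setting: squared sib-pair trait differences are regressed on estimated IBD sharing using a gamma GLM. The sketch below uses statsmodels with a log link chosen for simplicity; that link choice, and the simulated data, are assumptions of this illustration rather than the paper's exact specification.

```python
# Gamma GLM for a Haseman-Elston-style linkage test: squared sib-pair trait
# differences regressed on estimated IBD sharing (pihat). Simulated data;
# the log link is an assumption made here for simplicity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_pairs = 500
pihat = rng.choice([0.0, 0.5, 1.0], size=n_pairs, p=[0.25, 0.5, 0.25])

# Under linkage, pairs sharing more alleles IBD have smaller squared
# differences: Var(y1 - y2) shrinks with pihat.
sigma2 = 2.0 - 1.0 * pihat
ysq = sigma2 * rng.chisquare(df=1, size=n_pairs)   # gamma(shape 1/2, scale 2*sigma2)

X = sm.add_constant(pihat)
fit = sm.GLM(ysq, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(fit.summary().tables[1])   # a negative pihat coefficient suggests linkage
```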


Analysis of multilocus models of association

GENETIC EPIDEMIOLOGY, Issue 1 2003
B. Devlin
Abstract It is increasingly recognized that multiple genetic variants, within the same or different genes, combine to affect liability for many common diseases. Indeed, the variants may interact among themselves and with environmental factors. Thus realistic genetic/statistical models can include an extremely large number of parameters, and it is by no means obvious how to find the variants contributing to liability. For models of multiple candidate genes and their interactions, we prove that statistical inference can be based on controlling the false discovery rate (FDR), which is defined as the expected number of false rejections divided by the number of rejections. Controlling the FDR automatically controls the overall error rate in the special case that all the null hypotheses are true. So do more standard methods such as Bonferroni correction. However, when some null hypotheses are false, the goals of Bonferroni and FDR differ, and FDR will have better power. Model selection procedures, such as forward stepwise regression, are often used to choose important predictors for complex models. By analysis of simulations of such models, we compare a computationally efficient form of forward stepwise regression against the FDR methods. We show that model selection includes numerous genetic variants having no impact on the trait, whereas FDR maintains a false-positive rate very close to the nominal rate. With good control over false positives and better power than Bonferroni, the FDR-based methods we introduce present a viable means of evaluating complex, multivariate genetic models. Naturally, as for any method seeking to explore complex genetic models, the power of the methods is limited by sample size and model complexity. Genet Epidemiol 25:36-47, 2003. © 2003 Wiley-Liss, Inc. [source]
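
The FDR control invoked here is typically implemented with the Benjamini-Hochberg step-up procedure: sort the m p-values and reject all hypotheses up to the largest rank i with p(i) ≤ (i/m)q. A minimal implementation on made-up p-values:

```python
# Benjamini-Hochberg step-up procedure for FDR control: reject the k smallest
# p-values, where k is the largest rank i with p_(i) <= (i/m) * q.
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = p.size
    thresh = (np.arange(1, m + 1) / m) * q
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.36]
print(benjamini_hochberg(pvals, q=0.05))   # rejects the two smallest p-values
```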


Detection of Helicobacter pylori DNA by a Simple Stool PCR Method in Adult Dyspeptic Patients

HELICOBACTER, Issue 4 2005
Nazime
ABSTRACT Introduction: Helicobacter pylori is the major agent causing peptic ulcer, gastric cancer and mucosa-associated lymphoid tissue (MALT) gastric lymphoma. A simple stool polymerase chain reaction (PCR) method was performed and compared with the gold standards for the diagnosis of H. pylori infection. Material and methods: A total of 54 adult patients (mean age, 46.41 ± 13.12 years) with dyspeptic symptoms seen in Gastroenterology at Dokuz Eylül University Hospital between May and November 2003 were included. Two antrum and corpus biopsies were taken from each patient. Infection by H. pylori was defined by positivity or negativity on the gold standards. DNA extraction from stool specimens was done using the QIAamp DNA Stool Mini Kit (QIAGEN), and PCR conditions included amplification and reamplification steps using the H. pylori ureA gene-specific primers (HPU1, HPU2); products were visualized on 1% agarose gel stained with ethidium bromide. Results: Forty-six of 54 patients (85.2%) were diagnosed positive and eight (14.8%) negative for H. pylori infection by the gold standard methods. Thirty-two patients (59.3%) were positive and 22 (40.7%) negative by the stool PCR method. The stool PCR method and the gold standard methods differed significantly in the detection of H. pylori infection (p < .0001). Sensitivity, specificity, likelihood ratio, and positive and negative predictive values were 65.22%, 75%, 2.61, 93.75%, and 27.7%, respectively. Discussion: PCR on the stool specimens proved to be a very specific test. We suggest that the simple stool PCR method we developed can be used to detect H. pylori and its virulence genes, and in drug-resistance studies, either as a first-line diagnostic method in the laboratory or in the clinical management of dyspeptic patients. [source]


Metaplastic breast carcinomas are basal-like tumours

HISTOPATHOLOGY, Issue 1 2006
J S Reis-Filho
Aims: Recently, an immunohistochemical panel comprising antibodies against HER2, oestrogen receptor (ER), epidermal growth factor receptor (EGFR) and cytokeratin (CK) 5/6 was reported to identify basal-like breast carcinomas, as defined by cDNA microarrays. Our aim was to analyse a series of metaplastic breast carcinomas (MBCs) using this panel plus two other basal markers (CK14 and p63) and progesterone receptor (PR), to define how frequently MBCs show a basal-like immunophenotype. Methods and results: Sixty-five cases were retrieved from the pathology archives of the authors' institutions and reviewed by three of the authors. Immunohistochemistry with antibodies for HER2, ER, EGFR, CK5/6, CK14 and p63 was performed according to standard methods. All but six cases (91%) showed the typical immunoprofile of basal-like tumours (ER- and HER2-, EGFR+ and/or CK5/6+). When CK14 and p63 were added to the panel, two additional cases could be classified as basal-like. The majority of MBCs lacked PR, except 4/19 (21%) carcinomas with squamous metaplasia. Conclusions: Our results demonstrate that MBCs show a basal-like phenotype, regardless of the type of metaplastic elements. Moreover, as these neoplasms frequently overexpress EGFR (57%), patients with MBC may benefit from treatment with anti-EGFR drugs. [source]


Effect of concurrent zidovudine use on the resistance pathway selected by abacavir-containing regimens

HIV MEDICINE, Issue 6 2004
ER Lanier
Objectives: Abacavir (ABC) selects for four mutations (K65R, L74V, Y115F and M184V) in HIV-1 reverse transcriptase (RT), both in vitro and during monotherapy in vivo. The aim of this analysis was to compare the selection of these and other nucleoside reverse transcriptase inhibitor (NRTI)-associated mutations by ABC-containing therapies in the presence and absence of concurrent lamivudine (3TC) and/or zidovudine (ZDV) and to assess the effect of these mutations on phenotypic susceptibility to the NRTIs. Design: This study was a retrospective analysis of the patterns of NRTI-associated mutations selected following virological failure in six multicentre trials conducted during the development of ABC. Methods: Virological failure was defined as confirmed vRNA above 400 HIV-1 RNA copies/mL. RT genotype and phenotype were determined using standard methods. Results: K65R was selected infrequently by ABC-containing regimens in the absence of ZDV (13 of 127 patients), while L74V/I was selected more frequently (51 of 127 patients). Selection of both K65R and L74V/I was significantly reduced by co-administration of ZDV with ABC (one of 86 and two of 86 patients, respectively). Y115F was uncommon in the absence (seven of 127 patients) or presence (four of 86 patients) of ZDV. M184V was the most frequently selected mutation by ABC alone (24 of 70 patients) and by ABC plus 3TC (48 of 70 patients). Thymidine analogue mutations were associated with ZDV use. The K65R mutation conferred the broadest phenotypic cross-resistance of the mutations studied. Conclusions: The resistance pathway selected upon virological failure of ABC-containing regimens is significantly altered by concurrent ZDV use, but not by concurrent 3TC use. These data may have important implications for the efficacy of subsequent lines of NRTI therapies. [source]