Incomplete Data
Selected Abstracts

Exact Maximum Likelihood Estimation of an ARMA(1, 1) Model with Incomplete Data
JOURNAL OF TIME SERIES ANALYSIS, Issue 1 2002
CHUNSHENG MA
For a first-order autoregressive and first-order moving average model with nonconsecutively observed or missing data, the closed form of the exact likelihood function is obtained, and the exact maximum likelihood estimation of parameters is derived in the stationary case. [source]

Integrating highly diverse invertebrates into broad-scale analyses of cross-taxon congruence across the Palaearctic
ECOGRAPHY, Issue 6 2009
Andreas Schuldt
Our knowledge of broad-scale patterns of biodiversity, as a basis for biogeographical models and conservation planning, largely rests upon studies of the spatial distribution of vertebrates and plants, neglecting large parts of the world's biodiversity. To reassess the generality of these patterns and better understand the spatial diversity distributions of invertebrates, we analyzed patterns of species richness and endemism of a hyperdiverse insect taxon, carabid beetles (ca 11 000 Palaearctic species known), and its cross-taxon congruence with well-studied vertebrates (amphibians, reptiles) and plants across 107 units of the Palaearctic. Based on species accumulation curves, we accounted for completeness of the carabid data by separately examining the western (well-sampled) and eastern (partly less well-sampled) Palaearctic and China (deficient data). For the western Palaearctic, we highlight overall centers of invertebrate, vertebrate and plant diversity. Species richness and endemism of carabids were highly correlated with patterns of especially plant and amphibian diversity across large parts of the Palaearctic. For the well-sampled western Palaearctic, hotspots of diversity integrating invertebrates were located in Italy, Spain and Greece. Only analysis of Chinese provinces yielded low congruence between carabids and plants/vertebrates.
However, Chinese carabid diversity is still insufficiently known, and China features the highest number of annual new descriptions of carabids in the Palaearctic. Even based on the incomplete data, China harbors at least 25% of all Palaearctic carabid species. Our study shows that richness and endemism patterns of highly diverse insects can exhibit high congruence with general large-scale patterns of diversity inferred from plants/vertebrates, and that hotspots derived from the latter can also include a high diversity of invertebrates. In this regard, China qualifies as an outstanding multi-taxon hotspot of diversity, requiring intense biodiversity research and conservation effort. Our findings extend the limited knowledge on broad-scale invertebrate distributions and allow for a better understanding of diversity patterns across a larger range of the world's biodiversity than usually considered. [source]

The effect of nightshift on emergency registrars' clinical skills
EMERGENCY MEDICINE AUSTRALASIA, Issue 3 2010
Leonie Marcus
Abstract Objective: The effect of nightshift on ED staff performance is of clinical and risk-management significance. Previous studies have demonstrated deterioration in psychomotor skills, but the present study specifically assessed the impact of nightshift on clinical performance. Methods: The ED registrars in a tertiary hospital were enrolled in a prospective observational study and served as their own controls. During nightshift, subjects were presented with simulated scenarios and tested with eight clinical questions developed to Fellowship examination standard. Matched scenarios and questions for the same subjects during dayshift served as controls. Two investigators, blinded to subject identity and the setting in which questions were attempted, independently collated answers. Results: Of 22 eligible subjects, all were recruited; four were excluded owing to incomplete data.
A correlation of 0.99 was observed between the independent scoring investigators. Of a possible score of 17, the median result for nightshift was 9.5 (interquartile range: 8–11); the corresponding value for dayshift was 12 (interquartile range: 10–13); P = 0.047. Conclusion: The effect of nightshift on clinical performance is anecdotally well known. The present study quantifies such effects, specifically for the ED setting, and paves the way for focused research. The implications for clinical governance strategies are significant, as the fraternity embraces the mandate to maintain quality emergency care 24 h per day. [source]

Minimum weighted norm wavefield reconstruction for AVA imaging
GEOPHYSICAL PROSPECTING, Issue 6 2005
Mauricio D. Sacchi
ABSTRACT Seismic wavefield reconstruction is posed as an inversion problem in which, from inadequate and incomplete data, we attempt to recover the data we would have acquired with a denser distribution of sources and receivers. A minimum weighted norm interpolation method is proposed to interpolate prestack volumes before wave-equation amplitude-versus-angle imaging. Synthetic and real data were used to investigate the effectiveness of our wavefield reconstruction scheme when preconditioning seismic data for wave-equation amplitude-versus-angle imaging. [source]

Hydro-climatic impacts on the ice cover of the lower Peace River
HYDROLOGICAL PROCESSES, Issue 17 2008
Spyros Beltaos
Abstract Since the late 1960s, a paucity of ice-jam flooding in the lower Peace River has resulted in prolonged dry periods and a considerable reduction in the area covered by lakes and ponds that provide habitat for aquatic life in the Peace–Athabasca Delta (PAD) region. Though major ice jams occur at breakup, antecedent conditions play a significant role in their frequency and severity.
These conditions are partly defined by the mode of freezeup and the maximum ice thickness attained during the winter, shortly before the onset of spring and the development of positive net heat fluxes to the ice cover. Data from hydrometric gauge records and from field surveys are utilized herein to study these conditions. It is shown that freezeup flows are considerably larger at present than before regulation, and may be responsible for more frequent formation of porous accumulation covers. Despite a concomitant rise in winter temperatures, solid-ice thickness has increased since the 1960s. Using a simple ice growth model, specifically developed for the study area, it is shown that porous accumulation covers enhance winter ice growth via accelerated freezing into the porous accumulation. Coupled with a reduction in winter snowfall, this effect can not only negate, but reverse, the effect of warmer winters on ice thickness, thus explaining present conditions. The model is also shown to be a useful prediction tool, especially for extrapolating incomplete data to the end of the winter. Copyright © 2007 Crown in the right of Canada. Published by John Wiley & Sons, Ltd. [source]

Optimizing object classification under ambiguity/ignorance: application to the credit rating problem
INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 2 2005
Malcolm J. Beynon
A nascent technique for object classification is employed to exposit the classification of US banks according to the financial strength ratings assigned by Moody's Investors Service. The classification technique primarily utilized, called CaRBS (classification and ranking belief simplex), allows for the presence of ignorance to be inherent. The modern constrained optimization method, trigonometric differential evolution (TDE), is adopted to configure a CaRBS system.
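The abstract gives no implementation details for TDE, but it is a variant of differential evolution (DE). Purely as a hedged illustration, here is a minimal classic DE/rand/1/bin optimizer (not the trigonometric variant) applied to a toy sphere objective rather than to a CaRBS objective function:

```python
import random

def differential_evolution(objective, bounds, pop_size=20, weight=0.8,
                           cross_rate=0.9, generations=200, seed=1):
    """Minimise `objective` over the box `bounds` with classic DE/rand/1/bin."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # Donor vector built from three distinct population members.
            donor = [pop[a][d] + weight * (pop[b][d] - pop[c][d])
                     for d in range(dim)]
            # Binomial crossover; j_rand guarantees one donor component.
            j_rand = rng.randrange(dim)
            trial = []
            for d in range(dim):
                v = donor[d] if (d == j_rand or rng.random() < cross_rate) else pop[i][d]
                trial.append(min(max(v, bounds[d][0]), bounds[d][1]))
            trial_cost = objective(trial)
            if trial_cost <= cost[i]:  # greedy one-to-one selection
                pop[i], cost[i] = trial, trial_cost
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Toy illustration: minimise the 3-D sphere function.
solution, value = differential_evolution(lambda x: sum(v * v for v in x),
                                         [(-5.0, 5.0)] * 3)
```

The trigonometric variant adds a deterministic mutation scheme on top of this skeleton; the population/mutation/crossover loop is the same.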
Two different objective functions are considered with TDE to measure the level of optimization achieved; they make different use of the need to reduce ambiguity and/or ignorance during the optimization process. The appropriateness of the CaRBS system for analysing incomplete data is also highlighted, with no requirement to impute missing values or to remove objects that contain them. Comparative results are also presented using the well-known multivariate discriminant analysis and neural network models. The findings in this study identify a novel dimension to the issue of object classification optimization, with the discernment between the concomitant notions of ambiguity and ignorance. Copyright © 2005 John Wiley & Sons, Ltd. [source]

From marine ecology to crime analysis: Improving the detection of serial sexual offences using a taxonomic similarity measure
JOURNAL OF INVESTIGATIVE PSYCHOLOGY AND OFFENDER PROFILING, Issue 1 2007
Jessica Woodhams
Abstract Jaccard has been the similarity metric of choice in ecology and forensic psychology for comparing sites or offences by species or behaviour. This paper applies a more powerful hierarchical measure, taxonomic similarity (Δs), recently developed in marine ecology, to the task of behaviourally linking serial crime. Forensic case linkage attempts to identify behaviourally similar offences committed by the same unknown perpetrator (called linked offences). Δs considers progressively higher-level taxa, such that two sites show some similarity even without shared species. We apply this index by analysing 55 specific offence behaviours classified hierarchically. The behaviours are taken from 16 sexual offences by seven juveniles, where each offender committed two or more offences. We demonstrate that both Jaccard and Δs show linked offences to be significantly more similar than unlinked offences.
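The contrast between Jaccard and a hierarchical measure can be made concrete in a few lines. The sketch below is not the published Δs formula; it is a simplified two-level stand-in (a shared behaviour scores 1; behaviours sharing only a higher-level category score 0.5, an arbitrary choice), and the behaviour names and categories are invented for illustration:

```python
def jaccard(a, b):
    """Jaccard similarity of two sets of offence behaviours."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def two_level_similarity(a, b, category, partial=0.5):
    """Toy hierarchical similarity: a shared behaviour scores 1; a behaviour
    whose higher-level category appears in the other offence scores
    `partial`. Not the published taxonomic similarity index."""
    a, b = set(a), set(b)
    score = float(len(a & b))
    for x in a - b:
        if any(category[x] == category[y] for y in b - a):
            score += partial
    return score / len(a | b)

# Hypothetical behaviours and categories, for illustration only.
category = {"binds victim": "control", "gags victim": "control",
            "verbal threat": "control", "steals property": "theft"}
offence_1 = {"binds victim", "verbal threat"}
offence_2 = {"gags victim", "verbal threat"}
j = jaccard(offence_1, offence_2)
t = two_level_similarity(offence_1, offence_2, category)
```

Here the two offences share only one exact behaviour, so Jaccard gives 1/3, while the hierarchical measure also credits the near-miss (binding vs. gagging, both "control" behaviours) and returns a higher score. This is exactly why Δs degrades more gracefully when specific behaviours are missing from an offence record.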
With up to 20% of the specific behaviours removed in simulations, Δs is equally or more effective at distinguishing linked offences than Jaccard applied to the full data set. Moreover, Δs retains a significant difference between linked and unlinked pairs with up to 50% of the specific behaviours removed. As police decision-making often depends upon incomplete data, Δs has clear advantages, and its application may extend to other crime types. Copyright © 2007 John Wiley & Sons, Ltd. [source]

A systematic review of controlled trials evaluating interventions in adult literacy and numeracy
JOURNAL OF RESEARCH IN READING, Issue 2 2005
Carole Torgerson
This paper reports a systematic review of the quasi-experimental literature in the field of adult literacy and numeracy published between 1980 and 2002. We included 27 controlled trials (CTs) that evaluated strategies and pedagogies designed to increase adult literacy and numeracy: 18 CTs with no effect sizes (incomplete data) and 9 CTs with full data. These nine trials are examined in detail for this paper. Of the nine, six evaluated interventions in literacy and three evaluated interventions in both literacy and numeracy. Three of the nine trials showed a positive effect for the intervention, five showed no difference, and one showed a positive effect for the control treatment. The quality of the trials was variable, and many had methodological problems. There was no evidence of publication bias in the review. There have been few attempts to expose common adult literacy or numeracy programmes to rigorous evaluation, and therefore, in terms of policy and practice, it is difficult to make any recommendations as to the type of adult education that should be supported.
In contrast, however, the review does provide a strong steer on the direction to be taken by educational researchers: given the present inadequate evidence base, rigorously designed randomised controlled trials and quasi-experiments are required as a matter of urgency. [source]

Standard errors for EM estimation
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 2 2000
M. Jamshidian
The EM algorithm is a popular method for computing maximum likelihood estimates. One of its drawbacks is that it does not produce standard errors as a by-product. We consider obtaining standard errors by numerical differentiation. Two approaches are considered. The first differentiates the Fisher score vector to yield the Hessian of the log-likelihood. The second differentiates the EM operator and uses an identity that relates its derivative to the Hessian of the log-likelihood. The well-known SEM algorithm uses the second approach. We consider three additional algorithms: one that uses the first approach and two that use the second. We evaluate the complexity and precision of these three algorithms and the SEM algorithm in seven examples. The first is a single-parameter example used to give insight. The others are three examples in each of two areas of EM application: Poisson mixture models and the estimation of covariance from incomplete data. The examples show that there are algorithms that are much simpler and more accurate than the SEM algorithm. Hopefully their simplicity will increase the availability of standard error estimates in EM applications. It is shown that, as previously conjectured, a symmetry diagnostic can accurately estimate errors arising from numerical differentiation. Some issues related to the speed of the EM algorithm and of algorithms that differentiate the EM operator are identified.
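The first approach above (numerically differentiate the score to get the Hessian, then invert the observed information) can be sketched on a toy one-parameter problem. The Poisson-mean example and the data below are inventions for illustration, not the paper's examples, and in a genuine EM application the score would come from the E-step rather than a closed form:

```python
import math

def poisson_score(lam, data):
    """Score (first derivative of the Poisson log-likelihood) at rate `lam`."""
    return sum(data) / lam - len(data)

def se_from_numeric_hessian(lam_hat, data, h=1e-5):
    """Central-difference the score to approximate the Hessian of the
    log-likelihood, then invert the observed information for the SE."""
    hessian = (poisson_score(lam_hat + h, data)
               - poisson_score(lam_hat - h, data)) / (2.0 * h)
    return math.sqrt(-1.0 / hessian)

data = [3, 1, 4, 1, 5, 9, 2, 6]              # made-up counts
lam_hat = sum(data) / len(data)              # MLE of the Poisson mean
se_numeric = se_from_numeric_hessian(lam_hat, data)
se_analytic = math.sqrt(lam_hat / len(data))  # known closed form for comparison
```

For this smooth one-parameter likelihood the central-difference estimate agrees with the analytic standard error to many digits; the paper's point is that the same recipe extends to multiparameter EM problems where no analytic Hessian is available.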
[source]

A multifaceted sensitivity analysis of the Slovenian public opinion survey data
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 2 2009
Caroline Beunckens
Summary. Many models to analyse incomplete data have been developed that allow the missing data to be missing not at random. Awareness has grown that such models rest on unverifiable assumptions, in the sense that they are supported by the (incomplete) data only in part, and that inferences nevertheless depend on what the model predicts about the unobserved data, given the observed data. This explains why considerable work is nowadays devoted to assessing how sensitive models for incomplete data are to the particular model chosen, to the family of models chosen, and to the effect of (a group of) influential subjects. For each of these categories, several proposals have been formulated, studied theoretically and/or by simulation, and applied to sets of data. It is, however, uncommon to explore various sensitivity analysis avenues simultaneously. We apply a collection of such tools, some after extension, to incomplete counts arising from cross-classified binary data from the so-called Slovenian public opinion survey. Thus bringing together, for the first time, a variety of sensitivity analysis tools on the same set of data, we can sketch a comprehensive sensitivity analysis picture. We show that missingness-at-random estimates of the proportion voting in favour of independence are insensitive to the precise choice of missingness-at-random model and close to the actual plebiscite results, whereas the missingness-not-at-random models that are furthest from the plebiscite results are vulnerable to the influence of outlying cases. Our approach helps to illustrate the value of comprehensive sensitivity analysis. Ideas are formulated on the methodology's use beyond the data analysis that we consider.
[source]

Clinical trial: effects of botulinum toxin on levator ani syndrome, a double-blind, placebo-controlled study
ALIMENTARY PHARMACOLOGY & THERAPEUTICS, Issue 9 2009
S. S. C. RAO
Summary. Background: Levator ani syndrome is characterized by anorectal discomfort/pain, treatment of which is unsatisfactory. We hypothesized that botulinum toxin relieves spasm and improves symptoms. Aim: To perform a randomized, placebo-controlled, crossover study to examine the efficacy and safety of botulinum toxin in patients with levator ani syndrome. Methods: Twelve patients with levator ani syndrome (≥1 year) received anal intrasphincteric injections of 100 units of botulinum toxin A and placebo at 90-day intervals using EMG guidance. Daily frequency, severity, duration and intensity of pain (VAS) were recorded. Anorectal manometry, balloon expulsion and pudendal nerve latency tests were performed to examine the physiological changes and adverse effects. Results: Seven patients (male/female = 4/3) completed the study and three had incomplete data, but all 10 were included in an ITT analysis; two others dropped out. After administration of botulinum toxin, the mean frequency, intensity and duration of pain were unchanged (P = 0.31) compared with baseline. The 90-day mean VAS pain score was 6.79 ± 0.27 vs. a baseline score of 7.08 ± 0.29 (P = 0.25). Anal sphincter pressures, rectal sensory thresholds, pudendal nerve latency and balloon expulsion times were unchanged after drug or placebo administration. Conclusions: Injection of botulinum toxin into the anal sphincter is safe, but it does not improve anorectal pain in levator ani syndrome. [source]

Short-term postliver transplant survival after the introduction of MELD scores for organ allocation in the United States
LIVER INTERNATIONAL, Issue 3 2005
Hwan Y. Yoo
Abstract Background: It has been suggested that the introduction of the model for end-stage liver disease (MELD) for organ allocation may reduce overall graft and patient survival, since elevated serum creatinine is an important predictor of poor outcome after liver transplantation. Objective: In this study, we determined the outcomes of liver transplantation before (preMELD group, 1998–February 2002) and after (MELD group, March–December 2002, n = 4642) the introduction of the MELD score, and examined the impact of MELD scores on outcomes in the United States (US). Patients & methods: After excluding patients for a variety of reasons (children, live donors, fulminant liver failure, patients with hepatoma and others who received extra MELD points, multiple organ transplantation, re-transplantation, incomplete data), there were 3227 patients in the MELD group. These patients were compared with 14 593 patients in the preMELD group after applying similar exclusion criteria. Survival was compared using Kaplan–Meier survival analysis and Cox regression survival analysis. Results: There was no difference in short-term (up to 10 months) graft and patient survival between the MELD and preMELD groups. However, graft and patient survival was lower in patients with a MELD score ≥30 when compared with those with a MELD score <30, after adjusting for the confounding variables. Conclusion: Introduction of the MELD score for organ prioritization has not reduced the short-term survival of patients, but patients with a MELD score of 30 or higher had a relatively poor outcome. [source]

Graft and patient survival after adult live donor liver transplantation compared to a matched cohort who received a deceased donor transplantation
LIVER TRANSPLANTATION, Issue 10 2004
Paul J. Thuluvath
Live donor liver transplantation (LDLT) has become increasingly common in the United States and around the world.
In this study, we compared the outcomes of 764 patients who received LDLT in the United States with those of a matched population that received deceased donor transplantation (DDLT), using the United Network for Organ Sharing (UNOS) database. For each LDLT recipient (n = 764), two DDLT recipients (n = 1,470), matched for age, gender, race, diagnosis, and year of transplantation, were selected from the UNOS data after excluding multiple organ transplantation or retransplantation, children, and those with incomplete data. Despite our matching, recipients of LDLT had more stable liver disease, as shown by fewer patients with UNOS status 1 or 2A, in an intensive care unit, or on life support. Creatinine and cold ischemia time were also lower in the LDLT group. Primary graft nonfunction, hyperacute rejection rates, and patient survival by Kaplan–Meier analysis were similar in both groups (2-year survival was 79.0% in LDLT vs. 80.7% in case-controls; P = .5), but graft survival was significantly lower in LDLT (2-year graft survival was 64.4% vs. 73.3%; P < .001). Cox regression analysis (after adjusting for confounding variables) showed that LDLT recipients were 60% more likely to lose their graft compared to DDLT recipients (hazard ratio [HR] 1.6; confidence interval 1.1-2.5). Among hepatitis C virus (HCV) patients, LDLT recipients showed lower graft survival when compared to those who received DDLT. In conclusion, short-term patient survival in LDLT is similar to that in the DDLT group, but graft survival is significantly lower in LDLT recipients. LDLT is a reasonable option for patients who are unlikely to receive DDLT in a timely fashion. (Liver Transpl 2004;10:1263–1268.) [source]

Radial single-shot STEAM MRI
MAGNETIC RESONANCE IN MEDICINE, Issue 4 2008
Kai Tobias Block
Abstract Rapid MR imaging using the stimulated echo acquisition mode (STEAM) technique yields single-shot images without any sensitivity to resonance offset effects.
However, the absence of susceptibility-induced signal voids or geometric distortions comes at the expense of a somewhat lower signal-to-noise ratio than EPI. As a consequence, the achievable spatial resolution is limited when using conventional Fourier encoding. To overcome this problem, this study combined single-shot STEAM MRI with radial encoding. This approach exploits the efficient undersampling properties of radial trajectories together with a previously developed iterative image reconstruction method that compensates for the incomplete data by incorporating a priori knowledge. Experimental results for a phantom and human brain in vivo demonstrate that radial single-shot STEAM MRI may exceed the resolution obtainable by a comparable Cartesian acquisition by a factor of four. Magn Reson Med 59:686–691, 2008. © 2008 Wiley-Liss, Inc. [source]

Optimum survey methods when interviewing employed women
AMERICAN JOURNAL OF INDUSTRIAL MEDICINE, Issue 2 2009
Kari Dunning PhD
Abstract Background: While survey studies have examined bias, much is unknown regarding specific subpopulations, especially women workers. Methods: A population-based phone, Internet, and mail survey of workplace falls during pregnancy was undertaken. Participation by industry and occupation was examined, along with bias, reliability, and incomplete data by survey approach. Results: Of the 3,997 women surveyed, 71% were employed during their pregnancy. Internet responders were most likely to be employed while pregnant and to report a workplace fall, at 8.8% compared to 5.8% and 6.1% for mail and phone respondents. Internet responders had the most missing employment data, with company name missing for 17.9% compared to 1.3% for phone responders. Mail surveys were best for recruiting those employed in eight of nine industries, and this was especially true for service occupations.
Conclusions: To decrease bias and increase participation, mixed approaches may be useful, with particular attention to collecting occupational data. Am. J. Ind. Med. 52:105–112, 2009. © 2008 Wiley-Liss, Inc. [source]

The significance of volcanic eruption strength and frequency for climate
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 602 2004
G. M. Miles
Abstract A simple physical model of the atmospheric effects of large explosive volcanic eruptions is developed. Using only one input parameter, the initial amount of sulphur dioxide injected into the stratosphere, the global-average stratospheric optical-depth perturbation and surface temperature response are modelled. The simplicity of this model avoids issues of incomplete data (applicable to more comprehensive models), making it a powerful and useful tool for atmospheric diagnostics of this climate forcing mechanism. It may also provide a computationally inexpensive and accurate way of introducing volcanic activity into larger climate models. The modelled surface temperature response to an initial sulphur-dioxide injection, coupled with emission-history statistics, is used to demonstrate that the most climatically significant volcanic eruptions are those of sufficient explosivity to just reach into the stratosphere (and achieve longevity). This study also highlights the fact that this measure of significance is highly sensitive to the representation of the climatic response and the frequency data used, and that we are far from producing a definitive history of explosive volcanism for at least the past 1000 years. Given this high degree of uncertainty, these results suggest that eruptions that release around and above 0.1 Mt SO2 into the stratosphere have the maximum climatic impact.
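The one-input structure described above can be caricatured in a few lines: an optical-depth perturbation proportional to the injected SO2 mass that decays exponentially, driving a lagged surface-temperature response. Every coefficient below is a placeholder assumption chosen only for illustration, not a value fitted in the paper:

```python
import math

def surface_cooling(m_so2, years=6.0, dt=0.01,
                    tau_decay=1.0,          # aerosol e-folding time (yr), assumed
                    k_opt=0.01,             # optical depth per Mt SO2, assumed
                    forcing_per_tau=-20.0,  # W m^-2 per unit optical depth, assumed
                    sensitivity=0.7,        # K per W m^-2, assumed
                    tau_resp=4.0):          # surface response time (yr), assumed
    """Toy one-input volcano model: SO2 mass -> exponentially decaying
    optical-depth perturbation -> lagged surface temperature response.
    Returns the peak (most negative) temperature anomaly in kelvin."""
    temp, peak = 0.0, 0.0
    for i in range(int(years / dt)):
        t = i * dt
        tau = k_opt * m_so2 * math.exp(-t / tau_decay)
        forcing = forcing_per_tau * tau
        # Forward-Euler relaxation toward the equilibrium response.
        temp += dt * (sensitivity * forcing - temp) / tau_resp
        peak = min(peak, temp)
    return peak

peak_large = surface_cooling(17.0)   # roughly Pinatubo-scale injection
peak_small = surface_cooling(0.1)    # near the stratospheric threshold
```

Because this toy model is linear in the injected mass, the peak cooling scales directly with SO2; the paper's point is that once the eruption-frequency statistics are folded in, the many small stratosphere-reaching eruptions matter as much as the rare giant ones.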
Copyright © 2004 Royal Meteorological Society [source]

Crystallographic rationalization of the reactivity and spectroscopic properties of (2R)-S-(2,5-dihydroxyphenyl)cysteine
ACTA CRYSTALLOGRAPHICA SECTION C, Issue 4 2010
Gabriele Kociok-Köhn
At 150 K, the title compound, C9H11NO4S, crystallizes in the orthorhombic form as a zwitterion and has a low gauche conformation [χ = −46.23 (16)°] for an acyclic cysteine derivative. A difference in bond length is observed between the alkyl C–S bond [1.8299 (15) Å] and the aryl C–S bond [1.7760 (15) Å]. The –NH3+ group is involved in four hydrogen bonds, two of which are intermolecular and two intramolecular. The compound forms an infinite three-dimensional network constructed from four intermolecular hydrogen bonds. Characterization data (13C NMR, IR and optical rotation) are reported to supplement the incomplete data disclosed previously in the literature. [source]

The Importance of Emergency Medicine in Organ Donation: Successful Donation Is More Likely When Potential Donors Are Referred From the Emergency Department
ACADEMIC EMERGENCY MEDICINE, Issue 9 2009
Glen E. Michael MD
Abstract Objectives: This study sought to identify factors that are associated with successful organ retrieval among patients referred to organ procurement services for potential organ donation. Particular attention was paid to the frequency, patient characteristics, and outcomes of patients referred for donation from the emergency department (ED). Methods: For this retrospective cohort study, data were collected on all solid-organ donor referrals made to a single organ procurement organization serving 78 hospitals over a 45-month period. Data retrieved included patient age, sex, race, referral site (ED vs. inpatient), and mechanism of injury. Outcome of referral (organs retrieved or not) was the primary outcome variable. Pearson chi-square and Student's t-tests were used for bivariate statistical analysis.
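For the 2×2 layout behind unadjusted figures like the OR of 2.92 reported in this study, a pure-Python helper is easy to sketch. The counts below are a hypothetical reconstruction: the abstract gives the retrieval percentages but not the ED/inpatient referral split, so 1,000 ED referrals is an assumption chosen only so the cell percentages roughly match:

```python
def odds_ratio_and_chi2(table):
    """`table` = [[a, b], [c, d]]: rows are exposure (ED vs. inpatient
    referral), columns are outcome (organs retrieved vs. not).
    Returns the odds ratio and the Pearson chi-square statistic."""
    (a, b), (c, d) = table
    n = a + b + c + d
    odds_ratio = (a * d) / (b * c)
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for row in table:
        row_total = sum(row)
        for j, observed in enumerate(row):
            expected = row_total * col_totals[j] / n  # under independence
            chi2 += (observed - expected) ** 2 / expected
    return odds_ratio, chi2

# Hypothetical counts: 1,000 ED referrals (155 retrieved) vs.
# 5,731 inpatient referrals (340 retrieved) -- NOT the study's table.
or_est, chi2_stat = odds_ratio_and_chi2([[155, 845], [340, 5391]])
```

With one degree of freedom, a chi-square statistic this large corresponds to p < 0.001, which is why the study then moves to logistic regression to see whether the ED effect survives adjustment.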
Multiple logistic regression analysis was used to determine which variables remained associated with organ retrieval after controlling for potential confounders. Results: A total of 6,886 donor referrals were made in the study population. Of these, 155 were excluded due to incomplete data, leaving 6,731 subjects for analysis. Using bivariate statistical analysis, we found that successful organ retrieval was associated with younger age (donor mean age 40.8 years, 95% confidence interval [CI] = 39.1 to 42.5, vs. nondonor mean age 59.4, 95% CI = 58.9 to 59.9), mechanism of injury (p < 0.001), and referral from the ED (ED 15.5% retrieved, inpatient 5.9%; odds ratio [OR] = 2.92, 95% CI = 2.32 to 3.67). After controlling for potential confounders with multiple logistic regression, referral from the ED remained significantly associated with successful organ retrieval (OR = 1.52, 95% CI = 1.18 to 1.97), as did age (OR = 0.96, 95% CI = 0.96 to 0.97) and mechanism of injury (p < 0.001). On regression analysis, race emerged as a significant predictor of organ retrieval (p < 0.001). Medically suitable patients referred from the ED were significantly more likely on bivariate analysis to have consent for donation granted compared to patients referred from inpatient settings (OR = 1.48, 95% CI = 1.03 to 2.12), but this association was not found to be significant on regression analysis (OR = 1.37, 95% CI = 0.93 to 2.02). Conclusions: Referral of potential organ donors from the ED is associated with an increased likelihood of successful organ retrieval. The authors conclude that further attention and resources should be directed toward the role of emergency medicine (EM) in the organ procurement process, owing to the relatively high likelihood of successful organ retrieval among patients referred from the ED. [source]

Delirium in Older Emergency Department Patients: Recognition, Risk Factors, and Psychomotor Subtypes
ACADEMIC EMERGENCY MEDICINE, Issue 3 2009
Jin H. Han MD
Abstract Objectives: Missing delirium in the emergency department (ED) has been described as a medical error, yet this diagnosis is frequently unrecognized by emergency physicians (EPs). Identifying a subset of patients at high risk for delirium may improve delirium screening compliance by EPs. The authors sought to determine how often delirium is missed in the ED and how often these missed cases are detected by admitting hospital physicians at the time of admission, to identify delirium risk factors in older ED patients, and to characterize delirium by psychomotor subtype in the ED setting. Methods: This cross-sectional study used a convenience sample of patients at a tertiary care, academic ED. English-speaking patients who were 65 years and older and present in the ED for less than 12 hours at the time of enrollment were included. Patients were excluded if they refused consent, were previously enrolled, had severe dementia, were unarousable to verbal stimuli for all delirium assessments, or had incomplete data. Delirium status was determined using the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU), administered by trained research assistants (RAs). Recognition of delirium by emergency and hospital physicians was determined from the medical record, blinded to CAM-ICU status. Multivariable logistic regression was used to identify independent delirium risk factors. The Richmond Agitation-Sedation Scale was used to classify delirium by psychomotor subtype. Results: Inclusion and exclusion criteria were met in 303 patients, and 25 (8.3%) presented to the ED with delirium. The vast majority (92.0%, 95% confidence interval [CI] = 74.0% to 99.0%) of delirious patients had the hypoactive psychomotor subtype. Of the 25 patients with delirium, 19 (76.0%, 95% CI = 54.9% to 90.6%) were not recognized to be delirious by the EP.
Of the 16 admitted delirious patients who were undiagnosed by the EPs, 15 (93.8%, 95% CI = 69.8% to 99.8%) remained unrecognized by the hospital physician at the time of admission. Dementia, a Katz Activities of Daily Living (ADL) score ≤ 4, and hearing impairment were independently associated with presenting with delirium in the ED. Based on the multivariable model, a delirium risk score was constructed, in which dementia, Katz ADL ≤ 4, and hearing impairment were weighted equally. Patients with higher risk scores were more likely to be CAM-ICU positive (area under the receiver operating characteristic [ROC] curve = 0.82). If older ED patients with one or more delirium risk factors were screened for delirium, 165 (54.5%, 95% CI = 48.7% to 60.2%) would have required a delirium assessment, at the expense of missing 1 patient with delirium while screening 141 patients without delirium. Conclusions: Delirium was a common occurrence in the ED, and the vast majority of delirium in the ED was of the hypoactive subtype. EPs missed delirium in 76% of the cases. Delirium that was missed in the ED was nearly always missed by hospital physicians at the time of admission. Using a delirium risk score has the potential to improve delirium screening efficiency in the ED setting. [source]

Predicting potential impacts of environmental flows on weedy riparian vegetation of the Hawkesbury–Nepean River, south-eastern Australia
AUSTRAL ECOLOGY, Issue 5 2000
Jocelyn Howell
Abstract Remnants of native riparian vegetation on the floodplain of the Hawkesbury–Nepean River near Sydney have significant conservation value but contain a large component of weeds (i.e. exotic species that have become naturalized). A proposal for the introduction of environmental flows required an assessment of potential impacts on 242 native and 128 exotic species recorded along 215 km of the river.
The likely effects of frequency, season, depth and duration of inundation were considered in relation to habitat, dispersal season and tolerance to waterlogging. Overseas studies provided only limited information applicable to the study area; however, comparisons with similarly highly modified riparian habitats in New Zealand were instructive. Depth and season of inundation appear to be the variables with the greatest potential for differential effects on weeds and native plants. Because of the likely spread of propagules and enhancement of growth under the present nutrient-enriched conditions, environmental flows that would cause more frequent flooding to higher levels of the riparian zone were judged to be of more benefit to weed species than native species, unless supported by bushland management including weeding. Predictions were limited by incomplete data on Hawkesbury–Nepean species, but two types of environmental flow were judged to be potentially beneficial for native water-edge plants, and worth testing and monitoring: first, flows that maintain a continuous low-level flow in the river, and second, higher-level environmental flows restricted to the river-edge habitat in autumn (the season in which a greater proportion of native species than weed species are known to disperse propagules). In summary, the presence of environmental weeds in riparian vegetation constrains the potential for environmental flows to improve river health. However, with ongoing monitoring, careful choice of water level and season of flow may lead to environmental flows that add to our knowledge and benefit riparian vegetation along with other river system components. [source] Emergency Thoracic Ultrasound in the Differentiation of the Etiology of Shortness of Breath (ETUDES): Sonographic B-lines and N-terminal Pro-brain-type Natriuretic Peptide in Diagnosing Congestive Heart Failure ACADEMIC EMERGENCY MEDICINE, Issue 3 2009 Andrew S.
Liteplo MD Abstract Objectives: Sonographic thoracic B-lines and N-terminal pro-brain-type natriuretic peptide (NT-ProBNP) have been shown to help differentiate between congestive heart failure (CHF) and chronic obstructive pulmonary disease (COPD). The authors hypothesized that ultrasound (US) could be used to predict CHF and that it would provide additional predictive information when combined with NT-ProBNP. They also sought to determine optimal two- and eight-zone scanning protocols when different thresholds for a positive scan were used. Methods: This was a prospective, observational study of a convenience sample of adult patients presenting to the emergency department (ED) with shortness of breath. Each patient had an eight-zone thoracic US performed by one of five sonographers, and serum NT-ProBNP levels were measured. Chart review by two physicians blinded to the US results served as the criterion standard. The operating characteristics of two- and eight-zone thoracic US alone, compared to, and combined with, NT-ProBNP test results for predicting CHF were calculated using both dichotomous and interval likelihood ratios (LRs). Results: One hundred patients were enrolled. Six were excluded because of incomplete data. Results of 94 patients were analyzed. A positive eight-zone US, defined as at least two positive zones on each side, had a positive likelihood ratio (LR+) of 3.88 (95% confidence interval [CI] = 1.55 to 9.73) and a negative likelihood ratio (LR−) of 0.5 (95% CI = 0.30 to 0.82), while the NT-ProBNP demonstrated an LR+ of 2.3 (95% CI = 1.41 to 3.76) and an LR− of 0.24 (95% CI = 0.09 to 0.66). Using interval LRs for the eight-zone US test alone, the LR for a totally positive test (all eight zones positive) was infinite and for a totally negative test (no zones positive) was 0.22 (95% CI = 0.06 to 0.80).
For two-zone US, interval LRs were 4.73 (95% CI = 2.10 to 10.63) when inferior lateral zones were positive bilaterally and 0.3 (95% CI = 0.13 to 0.71) when these were negative. These changed to 8.04 (95% CI = 1.76 to 37.33) and 0.11 (95% CI = 0.02 to 0.69), respectively, when congruent with NT-ProBNP. Conclusions: Bedside thoracic US for B-lines can be a useful test for diagnosing CHF. Predictive accuracy is greatly improved when studies are totally positive or totally negative. A two-zone protocol performs similarly to an eight-zone protocol. Thoracic US can be used alone or can provide additional predictive power to NT-ProBNP in the immediate evaluation of dyspneic patients presenting to the ED. [source] Maternal height and length of gestation: Does this impact on preterm labour in Asian women? AUSTRALIAN AND NEW ZEALAND JOURNAL OF OBSTETRICS AND GYNAECOLOGY, Issue 4 2009 Ben Chong-Pun CHAN Background: Both maternal height and ethnicity may influence gestation length, but their independent effects are unclear. Aim: This study was performed to examine the relationship between maternal height and gestational length in women with singleton pregnancies in a Chinese and southeast Asian population. Methods: A retrospective cohort study was performed on women carrying singleton pregnancies with spontaneous labour in a 48-month period managed under our department, to determine the relationship between maternal height, expressed in quartiles, and the mean gestational age and incidence of preterm labour. Results: Of the 16 384 women who delivered within this period, the 25th, 50th and 75th percentile values of maternal height were 153 cm, 156 cm and 160 cm respectively. Excluded from analysis were 6597 women because of multifetal pregnancy, teenage pregnancy (maternal age ≤ 19 years), induction of labour or elective caesarean section, or incomplete data due to no antenatal booking in our hospital.
Significant differences were found in maternal weight and body mass index, incidences of multiparity and smoking, gestational age and birthweight among the four quartiles. There was a significantly increased incidence of preterm birth between 32 and 37 weeks gestation in women of shorter stature. Conclusions: In our population, maternal height has an influence on gestational length, and the lower three quartiles were associated with increased odds of labour at > 32 to < 37 weeks. This effect should be taken into consideration in the adoption of international recommendations on obstetric management and intervention. [source] A Sensitivity Analysis for Shared-Parameter Models for Incomplete Longitudinal Outcomes BIOMETRICAL JOURNAL, Issue 1 2010 An Creemers Abstract All models for incomplete data either explicitly make assumptions about aspects of the distribution of the unobserved outcomes, given the observed ones, or at least imply such assumptions implicitly. One consequence is that there routinely exists a whole class of models, coinciding in their description of the observed portion of the data but differing with respect to their "predictions" of what is unobserved. Within such a class, there is always a single model corresponding to so-called random missingness, in the sense that the mechanism governing missingness depends on covariates and observed outcomes but, given these, not further on unobserved outcomes. We employ these results in the context of so-called shared-parameter models, where outcome and missingness models are connected by means of common latent variables or random effects, to devise a sensitivity analysis framework. Specifically, the impact of varying unverifiable assumptions about unobserved measurements on parameters of interest is studied. Apart from analytic considerations, the proposed methodology is applied to assess treatment effect in data from a clinical trial in toenail dermatophyte onychomycosis.
While our focus is on longitudinal outcomes with incomplete outcome data, the ideas developed in this paper are of use whenever a shared-parameter model could be considered. [source] Analyzing Incomplete Data Subject to a Threshold using Empirical Likelihood Methods: An Application to a Pneumonia Risk Study in an ICU Setting BIOMETRICS, Issue 1 2010 Jihnhee Yu Summary The initial detection of ventilator-associated pneumonia (VAP) for inpatients at an intensive care unit requires composite symptom evaluation using clinical criteria such as the clinical pulmonary infection score (CPIS). When CPIS is above a threshold value, bronchoalveolar lavage (BAL) is performed to confirm the diagnosis by counting actual bacterial pathogens. Thus, CPIS and BAL results are closely related and both are important indicators of pneumonia, whereas BAL data are incomplete. To compare the pneumonia risks among treatment groups for such incomplete data, we derive a method that combines nonparametric empirical likelihood ratio techniques with classical testing for parametric models. This technique augments the study power by enabling us to use any observed data. The asymptotic property of the proposed method is investigated theoretically. Monte Carlo simulations confirm both the asymptotic results and good power properties of the proposed method. The method is applied to actual data obtained in clinical practice settings and compares VAP risks among treatment groups. [source] Nonparametric Estimation in a Markov "Illness–Death" Process from Interval Censored Observations with Missing Intermediate Transition Status BIOMETRICS, Issue 1 2009 Halina Frydman Summary In many clinical trials patients are intermittently assessed for the transition to an intermediate state, such as occurrence of a disease-related nonfatal event, and death.
Estimation of the distribution of nonfatal-event-free survival time, that is, the time to the first occurrence of the nonfatal event or death, is the primary focus of the data analysis. The difficulty with this estimation is that the intermittent assessment of patients results in two forms of incompleteness: the times of occurrence of nonfatal events are interval censored and, when a nonfatal event does not occur by the time of the last assessment, a patient's nonfatal event status is not known from the time of the last assessment until the end of follow-up for death. We consider both forms of incompleteness within the framework of an "illness–death" model. We develop nonparametric maximum likelihood (ML) estimation in an "illness–death" model from interval-censored observations with missing status of intermediate transition. We show that the ML estimators are self-consistent and propose an algorithm for obtaining them. This work thus provides new methodology for the analysis of incomplete data that arise from clinical trials. We apply this methodology to the data from a recently reported cancer clinical trial (Bonner et al., 2006, New England Journal of Medicine 354, 567–578) and compare our estimation results with those obtained using a Food and Drug Administration-recommended convention. [source] Marginal Analysis of Incomplete Longitudinal Binary Data: A Cautionary Note on LOCF Imputation BIOMETRICS, Issue 3 2004 Richard J. Cook Summary In recent years there has been considerable research devoted to the development of methods for the analysis of incomplete data in longitudinal studies. Despite these advances, the methods used in practice have changed relatively little, particularly in the reporting of pharmaceutical trials.
In this setting, perhaps the most widely adopted strategy for dealing with incomplete longitudinal data is imputation by the "last observation carried forward" (LOCF) approach, in which values for missing responses are imputed using observations from the most recently completed assessment. We examine the asymptotic and empirical bias, the empirical type I error rate, and the empirical coverage probability associated with estimators and tests of treatment effect based on the LOCF imputation strategy. We consider a setting involving longitudinal binary data, with longitudinal analyses based on generalized estimating equations and an analysis based simply on the response at the end of the scheduled follow-up. We find that for both of these approaches, imputation by LOCF can lead to substantial biases in estimators of treatment effects, the type I error rates of associated tests can be greatly inflated, and the coverage probability can be far from the nominal level. Alternative analyses based on all available data lead to estimators with comparatively small bias, and inverse probability weighted analyses yield consistent estimators subject to correct specification of the missing-data process. We illustrate the differences between the various methods of dealing with drop-outs using data from a study of smoking behavior. [source] Are Statistical Contributions to Medicine Undervalued? BIOMETRICS, Issue 1 2003 Norman E. Breslow Summary. Econometricians Daniel McFadden and James Heckman won the 2000 Nobel Prize in economics for their work on discrete choice models and selection bias. Statisticians and epidemiologists have made similar contributions to medicine with their work on case-control studies, analysis of incomplete data, and causal inference. In spite of repeated nominations of such eminent figures as Bradford Hill and Richard Doll, however, the Nobel Prize in physiology and medicine has never been awarded for work in biostatistics or epidemiology.
(The "exception who proves the rule" is Ronald Ross, who, in 1902, won the second medical Nobel for his discovery that the mosquito was the vector for malaria. Ross then went on to develop the mathematics of epidemic theory,which he considered his most important scientific contribution,and applied his insights to malaria control programs.) The low esteem accorded epidemiology and biostatistics in some medical circles, and increasingly among the public, correlates highly with the contradictory results from observational studies that are displayed so prominently in the lay press. In spite of its demonstrated efficacy in saving lives, the "black box" approach of risk factor epidemiology is not well respected. To correct these unfortunate perceptions, statisticians would do well to follow more closely their own teachings: conduct larger, fewer studies designed to test specific hypotheses, follow strict protocols for study design and analysis, better integrate statistical findings with those from the laboratory, and exercise greater caution in promoting apparently positive results. [source] Reparameterizing the Pattern Mixture Model for Sensitivity Analyses Under Informative DropoutBIOMETRICS, Issue 4 2000Michael J. Daniels Summary. Pattern mixture models are frequently used to analyze longitudinal data where missingness is induced by dropout. For measured responses, it is typical to model the complete data as a mixture of multivariate normal distributions, where mixing is done over the dropout distribution. Fully parameterized pattern mixture models are not identified by incomplete data; Little (1993, Journal of the American Statistical Association88, 125,134) has characterized several identifying restrictions that can be used for model fitting. 
We propose a reparameterization of the pattern mixture model that allows investigation of sensitivity to assumptions about nonidentified parameters in both the mean and the variance, allows consideration of a wide range of nonignorable missing-data mechanisms, and has intuitive appeal for eliciting plausible missing-data mechanisms. The parameterization makes clear an advantage of pattern mixture models over parametric selection models, namely that the missing-data mechanism can be varied without affecting the marginal distribution of the observed data. To illustrate the utility of the new parameterization, we analyze data from a recent clinical trial of growth hormone for maintaining muscle strength in the elderly. Dropout occurs at a high rate and is potentially informative. We undertake a detailed sensitivity analysis to understand the impact of missing-data assumptions on the inference about the effects of growth hormone on muscle strength. [source] Complications of radiotherapy in laryngopharyngeal cancer: Effects of a prospective smoking cessation program CANCER, Issue 19 2009 Abstract BACKGROUND: Radiotherapy (XRT) is effective as the primary treatment modality for laryngopharyngeal cancer; however, complications of XRT can result in significant morbidity. Few previous studies have examined the effect of continued smoking on complications of XRT. The authors of this report hypothesized that patients with laryngopharyngeal cancer who successfully quit smoking would have fewer complications of primary XRT. METHODS: All patients with head and neck cancer who were smokers at the time of diagnosis were referred prospectively to the Tobacco Treatment Program (TTP). From this group, the patients with laryngopharyngeal cancer who received XRT as the primary treatment modality were retrospectively selected and studied.
RESULTS: Eighty-six patients were identified and were divided into 3 groups: Seventeen patients attended TTP and quit smoking before the start of XRT (Group 1), 33 patients attended TTP but continued to smoke during XRT (Group 2), and 37 patients refused TTP (Group 3). On the basis of a review of medical records for patients in Group 3, 20 patients quit smoking before starting XRT and were included in Group 1 (abstainers), 11 patients continued to smoke and were included in Group 2 (continued smokers), and 6 patients had incomplete data and were omitted from further analysis. Analyses both with and without Group 3 patients yielded similar results. Abstainers and continued smokers had similar demographic and clinical characteristics. With the exception of skin changes, all complications (mucositis, need for feeding tube, duration of feeding tube, need for hospitalization, pharyngeal stricture, and osteoradionecrosis) were more common in the patients who continued to smoke, although the only complications that were significantly more common were the need for hospitalization (P = .04) and osteoradionecrosis (P = .03). Patients who continued to smoke were more likely to develop osteoradionecrosis (relative risk [RR], 1.32; 95% confidence interval [CI], 1.09-1.6; P = .03) and to require hospitalization during treatment (RR, 1.46; 95% CI, 1.05-2.02; P = .04). CONCLUSIONS: Continued smoking during treatment appeared to increase the risk for complications of XRT in patients with laryngopharyngeal cancer and possibly increased hospitalizations. This hypothesis-generating study emphasized the importance of smoking cessation programs in the management of patients with head and neck cancer who receive XRT. Cancer 2009. © 2009 American Cancer Society. [source] Advanced Statistics: Missing Data in Clinical Research – Part 1: An Introduction and Conceptual Framework ACADEMIC EMERGENCY MEDICINE, Issue 7 2007 Jason S.
Haukoos MD Missing data are commonly encountered in clinical research. Unfortunately, they are often neglected or not properly handled during analytic procedures, and this may substantially bias the results of the study, reduce study power, and lead to invalid conclusions. In this two-part series, the authors will introduce key concepts regarding missing data in clinical research, provide a conceptual framework for how to approach missing data in this setting, describe typical mechanisms and patterns of censoring of data and their relationships to specific methods of handling incomplete data, and describe in detail several simple and more complex methods of handling such data. In part 1, the authors will describe relatively simple approaches to handling missing data, including complete-case analysis, available-case analysis, and several forms of single imputation, including mean imputation, regression imputation, hot and cold deck imputation, last observation carried forward, and worst case analysis. In part 2, the authors will describe in detail multiple imputation, a more sophisticated and valid method for handling missing data. [source] |
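Two of the single-imputation approaches named in this last abstract, mean imputation and last observation carried forward, can be illustrated concretely. The sketch below is not from any of the articles indexed here; the function names and the visit series are invented for illustration, with `None` marking a missed assessment:

```python
# Sketch of two simple single-imputation methods for one patient's
# longitudinal measurement series. Hypothetical data; None = missing visit.

def mean_impute(values):
    """Replace each missing value with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def locf_impute(values):
    """Last observation carried forward: fill each gap with the most
    recently observed value. Leading missing values stay missing."""
    result, last = [], None
    for v in values:
        if v is not None:
            last = v
        result.append(last)
    return result

visits = [4.0, None, 6.0, None, None, 8.0]
print(mean_impute(visits))  # [4.0, 6.0, 6.0, 6.0, 6.0, 8.0]
print(locf_impute(visits))  # [4.0, 4.0, 6.0, 6.0, 6.0, 8.0]
```

As the Cook et al. abstract above cautions, LOCF is simple but can substantially bias treatment-effect estimates and inflate type I error; analyses that use all available data, inverse probability weighting, or the multiple imputation covered in part 2 of this series are generally preferable.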