Risk Patients (risk + patient)

Distribution by Scientific Domains
Distribution within Medical Sciences

Kinds of Risk Patients

  • high risk patient
  • high surgical risk patient
  • low risk patient
  • surgical risk patient


  • Selected Abstracts


    Transcoronary Ablation of Septal Hypertrophy Does Not Alter ICD Intervention Rates in High Risk Patients with Hypertrophic Obstructive Cardiomyopathy

    PACING AND CLINICAL ELECTROPHYSIOLOGY, Issue 4 2005
    THORSTEN LAWRENZ
    Introduction: Transcoronary ablation of septal hypertrophy (TASH) is safe and effectively reduces the intraventricular gradient in patients with hypertrophic obstructive cardiomyopathy (HOCM). To analyze the potential anti- and proarrhythmic effects of TASH, we studied the discharge rates of implanted cardioverter defibrillators (ICD) in patients with HOCM who are at high risk for sudden cardiac death. Methods: ICD implantation and TASH were performed in 15 patients. Indications for ICD implantation were secondary prevention in nine patients after resuscitation from cardiac arrest with documented ventricular fibrillation (n = 7) or sustained ventricular tachycardia (n = 2), and primary prevention in six patients with a family history of sudden death, nonsustained ventricular tachycardia, and/or syncope. All the patients had severe symptoms due to HOCM (NYHA functional class = 2.9). Results: During a mean follow-up of 41 ± 22.7 months following the TASH procedure, 4 patients had episodes of appropriate discharges (8% per year). The discharge rate was 10% per year in the secondary prevention group and 5% per year in the group with primary prophylactic implants. Three patients died during follow-up (one each of pulmonary embolism, stroke, and sudden death). Conclusion: On the basis of ICD discharge rates in HOCM patients at high risk for sudden death, there is no evidence for an unfavorable arrhythmogenic effect of TASH. The efficacy of ICD treatment for the prevention of sudden cardiac death in HOCM could be confirmed; however, mortality is high in this cohort of hypertrophic cardiomyopathy patients. [source]


    Randomized Study of Early Intravenous Esmolol Versus Oral Beta-Blockers in Preventing Post-CABG Atrial Fibrillation in High Risk Patients Identified by Signal-Averaged ECG: Results of a Pilot Study

    ANNALS OF NONINVASIVE ELECTROCARDIOLOGY, Issue 2 2002
    Nomeda Balcetyte-Harris M.D.
    Background: Patients with a prolonged signal-averaged ECG have a four times higher risk of developing atrial fibrillation (AF) after coronary artery bypass surgery (CABG). The incidence of AF is reduced, but not eliminated, by prophylaxis with beta-blockers. The limitations of prophylaxis with oral beta-blockers may be related to the delayed effect of oral therapy. We performed a pilot study of the efficacy of an early intravenous esmolol plus oral beta-blocker regimen for prevention of postoperative AF. Methods: Fifty patients referred for CABG and considered to be at high risk for postoperative AF on the basis of a prolonged signal-averaged ECG P wave duration > 140 ms were randomized to receive either a 24-hour infusion of esmolol, started 6–18 hours after CABG at an average dose of 67 ± 7 μg/kg/min and followed by oral beta-blockers, or oral beta-blockers only, beginning on postoperative day 1. Results: Seven of 27 patients (26%) in the esmolol group and 6 of 23 patients (26%) in the oral beta-blocker group developed postoperative AF, P = NS. The mean time of onset of AF (2.7 ± 0.5 vs 2.7 ± 0.3 postoperative days, P = NS) and the median duration of AF (10 [2192] vs 7 [1.16] hours, P = NS) were similar between the two groups. Eleven (41%) patients treated with esmolol developed adverse events (hypotension: 8, bradycardia requiring temporary pacing: 2, left ventricular failure: 1 patient) as compared with only one patient (4%) in the beta-blocker group, who developed hypotension, P = 0.006. Conclusions: This randomized controlled pilot study suggests that intravenous esmolol is less well tolerated than, and offers no advantage over, standard beta-blocker therapy in preventing AF after CABG. A.N.E. 2002;7(2):86–91 [source]
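
    The adverse-event comparison above (11 of 27 vs. 1 of 23, reported P = 0.006) is the kind of 2 × 2 comparison usually handled with an exact or chi-square test. The abstract does not say which test was used, so the sketch below, which runs SciPy's Fisher exact test on the reported counts, is illustrative only and its p-value need not match the published one exactly.

      from scipy.stats import fisher_exact

      # Adverse events: 11 of 27 esmolol patients vs. 1 of 23 oral beta-blocker patients.
      table = [[11, 27 - 11],   # esmolol: events, no events
               [1, 23 - 1]]     # oral beta-blocker: events, no events
      odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
      print(odds_ratio, p_value)  # the abstract reports P = 0.006 (test not specified)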


    Continuous cervical plexus block for carotid body tumour excision in a patient with Eisenmenger's syndrome

    ANAESTHESIA, Issue 12 2006
    H. G. Jones
    Summary A patient with Eisenmenger's syndrome presented for removal of a carotid body tumour. Continuous cervical plexus blockade was successfully used to provide peri-operative and postoperative analgesia. The risks and benefits of regional and general anaesthesia in this high risk patient are discussed. [source]


    Trends in yield and effects of screening intervals during 17 years of a large UK community-based diabetic retinopathy screening programme

    DIABETIC MEDICINE, Issue 10 2009
    A. Misra
    Abstract Aims: To describe changes in risk profiles and yield in a screening programme and to investigate relationships between retinopathy prevalence, screening interval and risk factors. Methods: We analysed a population of predominantly Type 2 diabetic patients, managed in general practice, and screened between 1990 and 2006, with up to 17 years' follow-up and up to 14 screening episodes each. We investigated associations between referable or sight-threatening diabetic retinopathy (STDR), screening interval and frequency of repeated screening, whilst adjusting for age, duration and treatment of diabetes, hypertension treatment and period. Results: Of 63 622 screening episodes among 20 788 people, 16 094 (25%) identified any retinopathy, 3136 (4.9%) identified referable retinopathy and 384 (0.60%) identified STDR. The prevalence of screening-detected STDR decreased by 91%, from 1.7% in 1991–1993 to 0.16% in 2006. The prevalence of referable retinopathy increased from 2.0% in 1991–1993 to 6.7% in 1998–2001, then decreased to 4.7% in 2006. Compared with screening intervals of 12–18 months, screening intervals of 19–24 months were not associated with increased risk of referable retinopathy [adjusted odds ratio 0.93, 95% confidence interval (CI) 0.82–1.05], but screening intervals of more than 24 months were associated with increased risk (odds ratio 1.56, 95% CI 1.41–1.75). Screening intervals of < 12 months were associated with high risks of referable retinopathy and STDR. Conclusions: Over time the risk of late diagnosis of STDR decreased, possibly attributable to earlier diagnosis of less severe retinopathy, decreasing risk factors and systematic screening. Screening intervals of up to 24 months should be considered for lower risk patients. [source]


    Role of Dobutamine Stress Echocardiography for Preoperative Cardiac Risk Assessment Before Major Vascular Surgery: A Diagnostic Tool Comes of Age

    ECHOCARDIOGRAPHY, Issue 1 2000
    DON POLDERMANS M.D.
    Background: Cardiac complications are a major cause of perioperative mortality and morbidity. Also, the presence and severity of underlying coronary artery disease (CAD) determine long-term prognosis after successful surgery. Aim: This overview evaluates the additional value of dobutamine stress echocardiography (DSE) over common clinical cardiac risk factors and other noninvasive cardiac imaging modalities for perioperative and late cardiac prognosis. Results: DSE provides the attending physician with preoperative prognostic information on perioperative and long-term risk of cardiac events. It also enables the selection of high risk patients for evaluation of cardiac risk reduction therapies. Conclusions: DSE is a useful tool for preoperative cardiac risk evaluation in addition to common clinical cardiac risk factors. (ECHOCARDIOGRAPHY, Volume 17, January 2000) [source]


    Validation of a tool to safely triage selected patients with chest pain to unmonitored beds

    EMERGENCY MEDICINE AUSTRALASIA, Issue 4 2002
    Ronald V Sultana
    Abstract Objective: To externally validate a chest pain protocol that triages low risk patients with chest pain to an unmonitored bed. Methods: Retrospective study of all patients admitted from the emergency department of a tertiary referral public teaching hospital with an admission diagnosis of 'unstable angina' or suspected ischemic chest pain. Data were collected on adverse outcomes and analysed on an intention-to-treat basis according to the chest pain protocol. Results: There were no life-threatening arrhythmias, cardiac arrests or deaths within the first 72 h of admission in the group assigned to an unmonitored bed by the chest pain protocol ([0/244]; 0.0%: 95% confidence interval 0.0–1.5%). Four patients had an uncomplicated myocardial infarction, two patients had recurrent ischemic chest pain and one patient developed acute pulmonary oedema ([7/244]; 2.9%: 95% confidence interval 1.2–5.8%). Conclusion: This retrospective study externally validated the chest pain protocol. Care in a monitored bed would not have altered outcomes for patients triaged to an unmonitored bed by the chest pain protocol. Compared with current guidelines, application of the chest pain protocol could increase the availability of monitored beds. [source]
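
    The zero-event result quoted above ([0/244]; 0.0%, 95% CI 0.0–1.5%) and the 7/244 figure are exact binomial confidence intervals. The sketch below shows the Clopper–Pearson method, one common way to obtain intervals of this kind; whether the study used this exact method is an assumption.

      from scipy.stats import beta

      def clopper_pearson(x, n, alpha=0.05):
          """Exact (Clopper-Pearson) confidence interval for a binomial proportion x/n."""
          lower = 0.0 if x == 0 else beta.ppf(alpha / 2, x, n - x + 1)
          upper = 1.0 if x == n else beta.ppf(1 - alpha / 2, x + 1, n - x)
          return lower, upper

      # Adverse outcomes in the unmonitored-bed group of the protocol
      print(clopper_pearson(0, 244))  # ~ (0.000, 0.015) -> 0.0-1.5%
      print(clopper_pearson(7, 244))  # ~ (0.012, 0.058) -> 1.2-5.8%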


    Treatment of intermediate- and high-grade non-Hodgkin's lymphoma using CEOP versus CNOP

    EUROPEAN JOURNAL OF HAEMATOLOGY, Issue 3 2002
    A Hellenic Co-operative Oncology Group Study
    Abstract: Introduction: During the last few years epirubicin (E) and mitoxantrone (M) (Novantrone) have been used in the treatment of non-Hodgkin's lymphoma (NHL) because of their favorable toxicity profiles. In particular, M has less severe non-hematological toxicity. Patients and methods: A randomized multicenter phase III study was conducted in order to compare the efficacy and toxicity of CEOP and CNOP in intermediate- and high-grade NHL. CEOP (arm A) consisted of cyclophosphamide 1000 mg/m2, vincristine 2 mg, E 70 mg/m2 on day 1 and prednisone 60 mg on days 1–7. The CNOP regimen (arm B) was identical to CEOP except for replacement of E by M at a dose of 12 mg/m2. Randomization was stratified according to stages I–IV. From September 1993 to March 1999, 249 patients registered for the trial. Patient characteristics were equally distributed between the two arms, except for age and International Prognostic Index (IPI) groups. Results: There were no significant differences between the two groups in the rates of complete (CR) and partial response (PR). The overall response rate was 78% in arm A (57% CR, 21% PR) and 82% in arm B (60% CR, 22% PR). With a median follow-up time of 47.3 months, the median survival was not reached in arm A, while it was 39.5 months in arm B (P = 0.09). Three-year survival rates were 62.5% for CEOP and 51.5% for CNOP. There was no significant difference regarding the time to progression between the two groups (29.7 vs. 18.5 months); furthermore, the median duration of CRs was 71.6 and 49 months for CEOP and CNOP, respectively (P = 0.07). The therapeutic efficacies of both regimens were equivalent among the four IPI groups. More alopecia was observed in arm A. WHO grade >2 neutropenia was more frequent in arm B. Supportive treatment with G-CSF was given to 22 and 24 patients, respectively. Conclusion: There were no significant differences in terms of overall response rates, overall survival and time to progression between CEOP and CNOP in the treatment of intermediate- and high-grade NHL. Patients with low or low-intermediate IPI risk treated with either CEOP or CNOP showed significantly better survival, response rates and time to progression than those with high-intermediate or high IPI risk. Therefore, new improved therapeutic approaches should be developed for the treatment of high IPI risk patients. [source]


    Plasma antioxidative activity during atorvastatin and fluvastatin therapy used in coronary heart disease primary prevention

    FUNDAMENTAL & CLINICAL PHARMACOLOGY, Issue 1 2004
    Jan Kowalski
    Abstract We estimated the effect of atorvastatin and fluvastatin, used in coronary heart disease (CHD) primary prevention, on plasma antioxidative activity. Antioxidative activity of blood plasma was determined by the method of Bartosz et al. [Curr. Top. Biophys. (1998) 22:11–13], based on reduction of the preformed cation radical of 2,2′-azinobis(3-ethylbenzothiazoline-6-sulphonic acid) by blood plasma. The study comprised 35 patients with CHD risk who were randomly divided into two groups. The atorvastatin group comprised 17 patients who were administered the drug orally in a daily dose of 10 mg, and the fluvastatin group consisted of 18 patients on an oral dose of 40 mg once daily. The control group comprised 12 healthy subjects with no drug administration. Blood samples were collected from the cubital vein before and after 6-week therapy. Significantly (P < 0.05) increased antioxidative activity of blood plasma, in comparison with the initial values, was found in the atorvastatin and fluvastatin groups after 6-week therapy. Moreover, the increase in antioxidative plasma activity in the atorvastatin group was significantly higher than in the fluvastatin group. The results of our study demonstrate that atorvastatin and fluvastatin have an additional mechanism of action independent of their effect on cholesterol concentration. Thus, we presume that administration of these statins in CHD risk patients may have a beneficial effect. [source]


    Outcome prediction and risk assessment by quantitative pyrosequencing methylation analysis of the SFN gene in advanced stage, high-risk, neuroblastic tumor patients

    INTERNATIONAL JOURNAL OF CANCER, Issue 3 2010
    Barbara Banelli
    Abstract The aim of our study was to identify threshold levels of DNA methylation predictive of outcome, to better define the risk group of stage 4 neuroblastic tumor patients. Quantitative pyrosequencing analysis was applied to a training set of 50 stage 4, high risk patients and to a validation cohort of 72 consecutive patients. Stage 4 patients at lower risk and ganglioneuroma patients were included as control groups. Predictive thresholds of methylation were identified by ROC curve analysis. The prognostic end points of the study were overall and progression-free survival at 60 months. Data were analyzed with the Cox proportional hazard model. In a multivariate model, the methylation threshold identified for the SFN gene (14-3-3σ) distinguished the patients presenting favorable outcome from those with progressing disease, independently from all known predictors (Training set: Overall Survival HR 8.53, p = 0.001; Validation set: HR 4.07, p = 0.008). The level of methylation in the tumors of high-risk patients surviving more than 60 months was comparable to that of tumors derived from lower risk patients and to that of benign ganglioneuroma. Methylation above the threshold level was associated with reduced SFN expression in comparison with samples below the threshold. Quantitative methylation is a promising tool to predict survival in neuroblastic tumor patients. Our results lead to the hypothesis that a subset of patients considered at high risk, but displaying low levels of methylation, could be assigned to a lower risk group. [source]
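
    The abstract states only that predictive methylation thresholds were identified "by ROC curve analysis" without giving the selection rule. The sketch below uses the Youden index, one common choice, on synthetic data, purely to illustrate how such a cut-off can be derived from pyrosequencing values and outcomes; it is not the authors' procedure or data.

      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      def youden_threshold(methylation_pct, event):
          """Cut-off maximising sensitivity + specificity - 1 (Youden index)."""
          fpr, tpr, thresholds = roc_curve(event, methylation_pct)
          return thresholds[np.argmax(tpr - fpr)]

      # Synthetic illustration only -- not the study's tumour data.
      rng = np.random.default_rng(0)
      meth = np.concatenate([rng.normal(10, 4, 30), rng.normal(25, 8, 20)]).clip(0, 100)
      event = np.concatenate([np.zeros(30, int), np.ones(20, int)])
      print(youden_threshold(meth, event), roc_auc_score(event, meth))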


    The Glasgow Blatchford scoring system enables accurate risk stratification of patients with upper gastrointestinal haemorrhage

    INTERNATIONAL JOURNAL OF CLINICAL PRACTICE, Issue 7 2010
    R. Srirajaskanthan
    Summary Background: Upper gastrointestinal (UGI) haemorrhage is a frequent cause of hospital admission. Scoring systems have been devised to identify those at risk of adverse outcomes. We evaluated the Glasgow Blatchford score's (GBS) ability to identify the need for clinical and endoscopic intervention in patients with UGI haemorrhage. Methods: A retrospective observational study was performed in all patients who attended the A&E department with UGI haemorrhage during a 12-month period. Patients were separated into low and high risk categories. High risk encompassed patients who required blood transfusions, operative or endoscopic interventions, or management on high dependency or intensive care units, and those who re-bled, re-presented with further bleeding, or died. Results: A total of 174 patients were seen with UGI bleeding. Eight of them self-discharged and were excluded. Of the remaining 166, 94 had a 'low risk' bleed and 72 a 'high risk' bleed. The GBS was significantly higher in the high risk group (median = 10) than in the low risk group (median = 1, p < 0.001). To assess the validity of the GBS at separating low and high risk groups, receiver-operator characteristic (ROC) curves were plotted. The GBS had an area under the ROC curve of 0.96 (95% CI 0.95–1.00). When a cut-off value of ≥ 3 was used, the sensitivity and specificity of the GBS for identifying high risk bleeds were 100% and 68%. Thus, at a cut-off value of ≤ 2, the GBS is useful for identifying those patients with a low risk UGI bleed. Conclusions: The GBS accurately identifies low risk patients who could be managed safely as outpatients. [source]
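
    As a companion to the ROC figures above, the sketch below shows how sensitivity, specificity and negative predictive value follow from dichotomising the GBS at a cut-off (here a score of 3 or more flags a high risk bleed). The scores are synthetic and merely shaped to mimic the reported 94 low risk / 72 high risk split, so the numbers are not the study's.

      import numpy as np

      def cutoff_performance(gbs, high_risk, cutoff=3):
          """Sensitivity, specificity and NPV when GBS >= cutoff flags a high risk bleed."""
          gbs = np.asarray(gbs)
          high_risk = np.asarray(high_risk, dtype=bool)
          flagged = gbs >= cutoff
          sensitivity = (flagged & high_risk).sum() / high_risk.sum()
          specificity = (~flagged & ~high_risk).sum() / (~high_risk).sum()
          npv = (~flagged & ~high_risk).sum() / (~flagged).sum()
          return sensitivity, specificity, npv

      # Synthetic scores (perfectly separated by construction, unlike real data).
      scores = np.concatenate([np.random.default_rng(1).integers(0, 3, 94),
                               np.random.default_rng(2).integers(3, 16, 72)])
      labels = np.concatenate([np.zeros(94, bool), np.ones(72, bool)])
      print(cutoff_performance(scores, labels))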


    Differences in prevalence of pressure ulcers between the Netherlands and Germany , associations between risk, prevention and occurrence of pressure ulcers in hospitals and nursing homes

    JOURNAL OF CLINICAL NURSING, Issue 9 2008
    Antje Tannen MA
    Aim: This study compares pressure ulcer prevalence and prevention activities in nursing homes and hospitals within two European countries. Background: Over three years, stable differences have been found between the Netherlands (NL) and Germany (GER), with higher pressure ulcer rates in the NL. As previous analyses have shown, the differences cannot be entirely explained by differences in the population's vulnerability to pressure ulcers because they still remain after risk adjustment. Therefore, the differences in prevalence must be caused by other factors. The purpose of this study is to analyse whether any potential differences in preventive activities can account for the varying occurrence of pressure ulcers. Method: In both countries, nation-wide surveys were conducted annually using the same standardised questionnaires. Trained nurses examined all consenting patients of the voluntarily participating facilities. This examination included a skin assessment of the entire body. Data regarding risk factors, prevention and details about wounds were then collected. Results: In-patients of 29 German (n = 2531) and 71 Dutch (n = 10 098) nursing homes and 39 German (n = 8515) and 60 Dutch (n = 10 237) hospitals were investigated. The use of pressure-reducing devices was more common in the NL than in GER, but all other interventions were more frequently provided to German risk patients than to their Dutch counterparts. The pressure ulcer prevalence was significantly higher in the Dutch sample. After adjusting for gender, age, Braden score and prevention, the probability of having a pressure ulcer was 8.1 times higher for Dutch nursing home residents than for German residents. Conclusion: Some of the variance in pressure ulcer prevalence between the two countries can be explained by varying pressure ulcer prevention. However, some remarkable differences still remain unexplained. Relevance to clinical practice: The extent of pressure ulcer prevention, especially repositioning and nutrition intervention provided to patients at risk, is not in accordance with international guidelines. [source]


    The Rationale for and Comparisons of Different Antiplatelet Treatments in Acute Coronary Syndrome

    JOURNAL OF INTERVENTIONAL CARDIOLOGY, Issue 2008
    PAUL A. GURBEL M.D.
    Fundamentally, acute coronary syndromes are platelet-centric diseases, resulting from platelet-rich thrombi that develop at the site of vessel wall injury. In addition to aggregation, platelets modulate a plethora of other important pathophysiologic processes, including inflammation and coagulation. Therefore, a primary goal of therapy in the acute setting should be treatment with agents that provide predictable and superior platelet inhibition to prevent further ischemic events that develop from unchecked high platelet reactivity. Translational research studies of patients undergoing percutaneous revascularization have clearly demonstrated that adverse thrombotic outcomes are associated with high platelet reactivity and the latter is now emerging as a potent measurable cardiovascular risk factor. The intensity of antithrombotic therapy is influenced by patient risk. In the highest risk patients with elevated cardiac biomarkers indicative of myonecrosis, current guidelines support the use of early therapy with glycoprotein IIb/IIIa inhibition, aspirin, and clopidogrel. [source]


    Percutaneous Closure of Postmyocardial Infarction Ventricular Septal Defect

    JOURNAL OF INTERVENTIONAL CARDIOLOGY, Issue 2006
    FRANCISCO GARAY M.D.
    Postinfarction ventricular septal defect remains an important complication of myocardial infarction. It is associated with high mortality and morbidity. Despite early surgical closure attempts, mortality remains about 19–49%. A percutaneous approach, especially in high surgical risk patients, is a promising alternative to traditional surgical closure, avoiding the deleterious effects of cardiopulmonary bypass and the ventriculotomy. The Amplatzer P.I. Ventricular Septal Defect Occluder is a device specifically designed to percutaneously close these defects in adult patients. The results reported using this device are comparable with (if not better than) those for surgical closure. Here, we review the experience using this device and describe in detail the technical aspects of the procedure. [source]


    Three-dimensional MRI assessment of regional wall stress after acute myocardial infarction predicts postdischarge cardiac events

    JOURNAL OF MAGNETIC RESONANCE IMAGING, Issue 3 2008
    Fabrice Prunier MD
    Abstract Purpose: To determine the prognostic significance of systolic wall stress (SWS) after reperfused acute myocardial infarction (AMI) using MRI. Materials and Methods: A total of 105 patients underwent MRI 7.8 ± 4.2 days after AMI reperfusion. SWS was calculated by using a three-dimensional (3D) MRI approach to left ventricular (LV) wall thickness and to the radius of curvature. Between hospital discharge and the end of follow-up, an average of 4.1 ± 1.7 years after AMI, 19 patients experienced a major cardiac event, including cardiac death, nonfatal reinfarction or heart failure (18.3%). Results: The results were mainly driven by the heart failure outcome. In univariate analysis the following factors were predictive of postdischarge major adverse cardiac events: 1) at the time of AMI: higher heart rate, previous calcium antagonist treatment, in-hospital congestive heart failure, proximal left anterior descending artery (LAD) occlusion, a lower ejection fraction, higher maximal ST segment elevation before reperfusion, and ST segment reduction lower than 50% after reperfusion; 2) MRI parameters: higher LV end-systolic volume, lower ejection fraction, higher global SWS, higher SWS in the infarcted area (SWS MI) and higher SWS in the remote myocardium (SWS remote). In the final multivariate model, only SWS MI (odds ratio [OR]: 1.62; 95% confidence interval [CI]: 1.01–2.60; P = 0.046) and SWS remote (OR: 2.17; 95% CI: 1.02–4.65; P = 0.046) were independent predictors. Conclusion: Regional SWS assessed by means of MRI a few days after AMI appears to be a strong predictor of postdischarge cardiac events, identifying a subset of at risk patients who could qualify for more aggressive management. J. Magn. Reson. Imaging 2008. © 2008 Wiley-Liss, Inc. [source]


    Cardiovascular risk factors after liver transplantation

    LIVER TRANSPLANTATION, Issue S2 2005
    Santiago J. Muñoz
    Key Points: 1. Yearly screening of liver recipients with serum cholesterol, triglycerides, and lipoproteins, and assessment for risk factors for atherosclerotic cardiovascular disease, is an important component of comprehensive post-transplant care. 2. Guidelines and target LDL-cholesterol levels specific for moderate and high cardiovascular risk patients have recently been revised. 3. Transplant physicians should be aware of advances in the management of post-transplant arterial hypertension, diabetes mellitus, obesity, and nicotine dependence. (Liver Transpl 2005;11:S52–S56.) [source]


    Prevention of lymphatic injuries in surgery

    MICROSURGERY, Issue 4 2010
    Boccardo Francesco M.D.
    Background: The problem of prevention of lymphatic injuries in surgery is extremely important, given the frequency of both early complications such as lymphorrhea, lymphocele, wound dehiscence, and infections and late complications such as lymphangitis and lymphedema. Nowadays, it is possible to identify risk patients and prevent these lesions or treat them at an early stage. This article demonstrates the importance of integrating diagnostic and clinical findings to properly identify patients at risk of lymphatic injuries and, therefore, to establish when prevention is useful and appropriate. Methods: The authors report their experience in the prevention and treatment of lymphatic injuries after surgical operations and trauma. After an accurate diagnostic approach, prevention is based on different technical procedures, including microsurgical procedures. It is very important to follow up the patient not only clinically but also by lymphoscintigraphy. Results and Conclusions: A protocol for the prevention of secondary limb lymphedema was identified that, from the diagnostic point of view, included lymphoscintigraphy and, as concerns therapy, also recognized a role for early microsurgery. It is necessary to accurately follow up patients who have undergone an operation at risk for lymphatic complications and, even better, to assess patients clinically and by lymphoscintigraphy before the surgical operation. © 2010 Wiley-Liss, Inc. Microsurgery, 2010. [source]


    Optometric glaucoma referrals , measures of effectiveness and implications for screening strategy

    OPHTHALMIC AND PHYSIOLOGICAL OPTICS, Issue 6 2000
    Jim Gilchrist
    Summary The effectiveness of disease screening is conventionally evaluated using the epidemiological indices of sensitivity and specificity, which measure the association between screening test results and the final diagnoses of all the patients screened. The effectiveness of optometric glaucoma referrals cannot be measured using such indices because diagnoses are obtained only on patients who are referred, while the true disease status of those not referred remains unknown. Instead, glaucoma referral effectiveness has been evaluated using measures of 'detection rate', the proportion of those screened who are correctly referred, and 'referral accuracy', the proportion of those referred who are correctly referred. Examination of these operational measures shows that their obtainable values and, hence, their interpretation are influenced by the total proportions of diseased and referred patients, one or both of which will generally be unavailable in evaluating samples of referrals. On the other hand, if valid estimates of these proportions can be obtained from other sources, it is possible to rescale detection rate and referral accuracy to take account of them. This rescaling produces a pair of weighted kappa coefficients, chance-corrected measures of association between referral and diagnosis, which provide a better indication of true referral effectiveness than other measures. An important consequence of this approach is that it provides a clear quantitative illustration of the need for a dual strategy to improve the overall quality of optometric glaucoma screening: widespread adoption of more comprehensive modes of screening to improve accuracy, together with a significant increase in the total number of patients screened to improve detection. In order for detection rates to reach desirable levels, the total number of referrals in any sub-population of patients must match or exceed the number of patients with disease. This analysis confirms quantitatively what is intuitively obvious: not only that glaucoma awareness and uptake of screening opportunities must be encouraged in all patients over 40 years of age, but also that the older and/or more at risk patients are, the greater is their need to take advantage of glaucoma screening. [source]
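
    The sketch below makes the abstract's three quantities concrete for a 2 × 2 table of referral decision against final diagnosis: detection rate, referral accuracy, and an unweighted Cohen's kappa as the chance-corrected agreement. The paper's weighted kappa coefficients are a rescaled variant of this idea, and the counts used here are invented; real referral audits lack the non-referred cells, which is exactly the problem the authors address.

      def screening_measures(tp, fp, fn, tn):
          """Detection rate, referral accuracy and Cohen's kappa for referral vs. diagnosis.

          tp: referred & diseased      fp: referred & not diseased
          fn: not referred & diseased  tn: not referred & not diseased
          """
          n = tp + fp + fn + tn
          detection_rate = tp / n              # correctly referred / all screened
          referral_accuracy = tp / (tp + fp)   # correctly referred / all referred
          p_observed = (tp + tn) / n
          p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
          kappa = (p_observed - p_chance) / (1 - p_chance)
          return detection_rate, referral_accuracy, kappa

      print(screening_measures(tp=40, fp=60, fn=10, tn=890))  # made-up counts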


    The Development of the Negative Pain Thoughts Questionnaire

    PAIN PRACTICE, Issue 5 2008
    Ana-Maria Vranceanu PhD
    Abstract Background: Cognitive processes play a pivotal role in the perception of pain intensity, pain-related disability, and response to medical treatments including surgeries. While various measures of dysfunctional pain coping exist in the literature, there is no instrument available to examine such negative cognitions in relation to perceptions of medical treatment in pain patients presenting to a surgical orthopedics practice. Aims: The purpose of this article is to report on the development and preliminary testing of the Negative Pain Thoughts Questionnaire (NPTQ). Methods: The NPTQ is an 11-item questionnaire assessing cognitions about pain and its treatment in patients presenting to orthopedic surgical practices. It was administered to 2 samples of patients with hand and arm pain seeking medical treatment in a hospital surgical practice. Patients in the second sample also completed a measure of depression and one of disability of the hand, arm, and shoulder. Results: The NPTQ was found to be internally consistent and unidimensional. The NPTQ total score had a moderate to high positive correlation with perceived hand, arm, and shoulder disability, and a moderate positive correlation with depression. In multivariate analyses, high scores on the NPTQ significantly predicted high perceived hand, arm, and shoulder disability, even after controlling for depression. Conclusion: This short and easily administered measure of negative pain thoughts could potentially help surgeons identify at risk patients and facilitate referrals to cognitive behavioral therapy. This, in turn, may prevent unnecessary surgeries, decrease healthcare costs, and prevent the transition toward a costly chronic pain syndrome. [source]


    Transfusion-dependency at presentation and its acquisition in the first year of diagnosis are both equally detrimental for survival in primary myelofibrosis: prognostic relevance is independent of IPSS or karyotype

    AMERICAN JOURNAL OF HEMATOLOGY, Issue 1 2010
    Ayalew Tefferi
    The International Prognostic Scoring System (IPSS) and karyotype are useful tools for risk stratification in primary myelofibrosis (PMF). We examined the additional prognostic impact of red blood cell transfusion need among 254 consecutive patients (median age, 59 years). Sixty-two patients (~24%) required transfusions at diagnosis, whereas 22 (~9%) became transfusion-dependent and 170 remained transfusion-independent during the first year postdiagnosis; after a median follow-up of 55 months, the respective median survivals were 35, 25, and 117 months (P < 0.01). Multivariable analysis confirmed the IPSS- and karyotype-independent prognostic weight of transfusion status. Among IPSS intermediate-1 risk patients, the overall median survival of 82 months was modified to 60 or 118 months, based on the presence or absence of transfusion need, respectively (P < 0.01). The corresponding figures for intermediate-2/high risk patients were 30 and 64 months (P < 0.01). Documented causes of death did not include iron overload. We conclude that transfusion status in PMF downgrades or upgrades prognosis within specific IPSS categories; transfusion need is a marker of aggressive disease biology in PMF, as it is in myelodysplastic syndromes. Am. J. Hematol., 2010. © 2009 Wiley-Liss, Inc. [source]
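
    The abstract's multivariable analysis is a proportional-hazards model with transfusion status alongside IPSS (and karyotype). The sketch below, run on a synthetic cohort with hypothetical column names, shows the general shape of such an analysis using the lifelines package; it is not the authors' data or code.

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      # Synthetic cohort for illustration only.
      rng = np.random.default_rng(0)
      n = 254
      transfusion = rng.integers(0, 2, n)                 # 1 = transfusion need in first year
      ipss = rng.integers(0, 4, n)                        # IPSS risk group 0-3
      hazard = 0.02 * np.exp(0.8 * transfusion + 0.5 * ipss)
      death_time = rng.exponential(1 / hazard)            # months to death
      censor_time = rng.exponential(60, n)                # follow-up censoring

      df = pd.DataFrame({
          "months": np.minimum(death_time, censor_time),
          "died": (death_time <= censor_time).astype(int),
          "transfusion_dependent": transfusion,
          "ipss": ipss,
      })

      cph = CoxPHFitter()
      cph.fit(df, duration_col="months", event_col="died")
      print(cph.hazard_ratios_)  # transfusion effect should persist after IPSS adjustment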


    Adrenal function testing in pediatric cancer survivors

    PEDIATRIC BLOOD & CANCER, Issue 7 2009
    Briana C. Patterson MD
    Abstract Background: Central adrenal insufficiency is observed after cranial radiation therapy for cancer. Screening of at-risk patients is recommended, but the best screening strategy is unknown. Methods: A retrospective review of pediatric cancer survivors who underwent hypothalamic/pituitary/adrenal axis testing was conducted. Data included: cancer diagnosis, radiotherapy dose, other endocrinopathies, and adrenal function testing. Adrenal testing included a sequential low-dose corticotropin test (LDCT) and standard-dose corticotropin test (SDCT). 8 a.m. serum cortisol levels were compared to LDCT results. LDCT results were compared by radiotherapy dose and according to the presence of endocrine comorbidities. Results: Seventy-eight subjects (56% male, mean age at diagnosis 6.5 years) underwent testing. 67.9% had been treated with radiotherapy to the hypothalamus/pituitary. Mean time to diagnosis of adrenal insufficiency was 6.8 years after cancer diagnosis. Adequate adrenal function was found in 65% of patients by LDCT and 89% by SDCT. Only 21% of patients had basal serum cortisols collected at 8 a.m. Agreement between 8 a.m. baseline cortisol and LDCT was fair. Agreement between random baseline cortisol and LDCT was poor. The prevalence of central adrenal insufficiency diagnosed by LDCT increased with radiotherapy dose (8% for 10–19.9 Gy; 83% for ≥40 Gy) and with the number of endocrine comorbidities. Conclusions: In pediatric cancer survivors, central adrenal insufficiency was common, even in patients receiving <40 Gy to the hypothalamus/pituitary. We recommend use of the LDCT, not the 8 a.m. serum cortisol, to screen patients who received >30 Gy of radiotherapy and those with other central endocrinopathies. Pediatr Blood Cancer 2009; 53:1302–1307. © 2009 Wiley-Liss, Inc. [source]


    Application of a propensity score to adjust for channelling bias with NSAIDs,

    PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, Issue 6 2004
    S. V. Morant
    Abstract Purpose: To compare the relative risks of upper GI haemorrhage (UGIH) in users of Newer versus Older, non-specific NSAIDs when adjusted for channelling bias by regression on individual covariates, a propensity score and both. Methods: Cohort study of patients prescribed NSAIDs between June 1987 and January 2000. Exposure to Newer and Older non-specific NSAIDs was identified, and risk factors evaluated for each patient. Results of multiple covariate analyses and the propensity scoring technique to assess potential channelling bias in comparisons between Newer and Older non-specific NSAIDs were compared. Results: This study included 7.1 thousand patient years (tpy) of exposure to meloxicam, 1.6 tpy of exposure to coxibs, and 628 tpy of exposure to Older non-specific NSAIDs. Patients receiving Newer NSAIDs were older, more likely to have a history of GI symptoms, and at higher risk for GI complications. Adjusting for these risk factors reduced the relative risks of UGIH on meloxicam and coxibs versus Older non-specific NSAIDs to 0.84 (95% CI 0.60, 1.17) and 0.36 (0.14, 0.97), respectively. Conclusions: Channelling towards high GI risk patients occurred in the prescribing of Newer NSAIDs. Propensity scores highlighted the markedly different risk profiles of users of Newer and Older non-specific NSAIDs. Correcting for channelling bias, coxib exposure, but not meloxicam exposure, was associated with less UGIH than Older non-specific NSAID exposure. In the present study, corrections made by regression on a propensity score and on individual covariates were similar. Copyright © 2004 John Wiley & Sons, Ltd. [source]
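
    The abstract compares adjustment by regression on individual covariates with regression on a propensity score. The sketch below illustrates the propensity-score variant on synthetic data with channelling built in; all variable names are hypothetical and the simulation is not the study's dataset. Because the simulated drug has no true effect, the adjusted odds ratio should return towards 1 despite the confounded allocation.

      import numpy as np
      import statsmodels.api as sm

      def ps_adjusted_or(X, treated, y):
          """Odds ratio for treatment, adjusted by regression on a propensity score."""
          ps = sm.Logit(treated, sm.add_constant(X)).fit(disp=0).predict(sm.add_constant(X))
          outcome = sm.Logit(y, sm.add_constant(np.column_stack([treated, ps]))).fit(disp=0)
          return float(np.exp(outcome.params[1]))

      # Synthetic channelling: older patients with GI history get the newer drug more often.
      rng = np.random.default_rng(0)
      age = rng.normal(60, 12, 5000)
      gi_history = rng.integers(0, 2, 5000)
      X = np.column_stack([age, gi_history])
      treated = rng.binomial(1, 1 / (1 + np.exp(-(0.03 * (age - 60) + 0.8 * gi_history))))
      ugih = rng.binomial(1, 1 / (1 + np.exp(-(-4 + 0.04 * (age - 60) + 1.0 * gi_history))))
      print(ps_adjusted_or(X, treated, ugih))  # ~1: no true drug effect in this simulation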


    Diagnosis of pulmonary aspiration: A mouse model using a starch granule test in bronchoalveolar lavage

    RESPIROLOGY, Issue 4 2008
    Leonardo A. PINTO
    Background and objective: Pulmonary aspiration (PA) is a significant respiratory disease in children. However, the diagnosis of aspiration is often difficult owing to the poor efficacy of currently available diagnostic tests. The aim of this study was to assess, in a mouse model, the specificity of starch granule detection in BAL as a new method for detecting PA in children. Methods: Twenty BALB/c mice were divided into the following groups according to the solution instilled into the airways: corn flour milk 7.5%, a source of starch (CM); Pseudomonas aeruginosa; normal saline; and a control group. BAL was performed 2 days after instillation. The detection of starch granules was compared with that of lipid-laden macrophages in BAL. Results: Starch granules were detected in BAL fluids from all mice in the CM group (food aspiration model), whereas no starch granules were detected in the other three groups, demonstrating a sensitivity and specificity of 100%. On the other hand, lipid-laden macrophages were found in all mice from all the groups studied. Conclusions: The detection of starch granules in BAL is a simple and highly specific method for the diagnosis of PA in an experimental model. Clinical studies using the starch granule detection method in BAL should be conducted in at risk patients to evaluate the utility of this method for investigating PA. [source]


    Improvement in Long-Term Renal Graft Survival due to CMV Prophylaxis with Oral Ganciclovir: Results of a Randomized Clinical Trial

    AMERICAN JOURNAL OF TRANSPLANTATION, Issue 5 2008
    V. Kliem
    Oral ganciclovir prophylaxis and intravenous preemptive therapy are competing approaches to prevent cytomegalovirus (CMV) disease after renal transplantation. This trial compared efficacy, safety and long-term graft outcome in 148 renal graft recipients randomized to ganciclovir prophylaxis (N = 74) or preemptive therapy (N = 74). Hierarchical testing revealed that (i) patients with CMV infection had more severe periods of impaired graft function (creatinine clearance max–min 25.0 ± 14.2 mL/min vs. 18.1 ± 12.5 mL/min for patients without CMV infection; p = 0.02), (ii) prophylaxis reduced CMV infection by 65% (13 vs. 33 patients; p < 0.0001), but (iii) creatinine clearance at 12 months was comparable for both regimens (54.0 ± 24.9 vs. 53.1 ± 23.7 mL/min; p = 0.92). No major safety issues were observed, and patient survival at 12 months was similar in both groups (5 deaths [6.8%] vs. 4 [5.4%], p = 1.0000). Prophylaxis significantly increased long-term graft survival 4 years posttransplant (92.2% vs. 78.3%; p = 0.0425), with a number needed to treat of 7.19. Patients with donor+/recipient+ CMV serostatus had the lowest rate of graft loss following prophylaxis (0.0% vs. 26.8%; p = 0.0035). In conclusion, it appears that routine oral prophylaxis may improve long-term graft survival for most renal transplant patients. Preemptive therapy can be considered in low risk patients in combination with adequate CMV monitoring. [source]
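
    For readers unfamiliar with the number needed to treat quoted above, it is simply the reciprocal of the absolute difference in 4-year graft survival; the short check below reproduces the reported 7.19 from the two survival rates.

      # Number needed to treat from the 4-year graft survival rates quoted above
      arr = 0.922 - 0.783          # absolute risk reduction in graft loss
      nnt = 1 / arr
      print(round(nnt, 2))         # 7.19, matching the reported value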


    Posttransplant Prophylactic Intravenous Immunoglobulin in Kidney Transplant Patients at High Immunological Risk: A Pilot Study

    AMERICAN JOURNAL OF TRANSPLANTATION, Issue 5 2007
    D. Anglicheau
    The effects of posttransplant prophylactic intravenous immunoglobulin (IVIg) were investigated in renal transplant recipients at high immunological risk. Thirty-eight deceased-donor kidney transplant recipients with a previous positive complement-dependent cytotoxicity crossmatch (n = 30) and/or donor-specific anti-HLA antibodies (n = 14) were recruited. IVIg (2 g/kg) was administered on days 0, 21, 42 and 63 with quadruple immunosuppression. Biopsy-proven acute cellular and humoral rejection rates at month 12 were 18% and 10%, respectively. Glomerulitis was observed in 31% and 60% of patients at months 3 and 12, respectively, while allograft glomerulopathy rose from 3% at month 3 to 28% at 12 months. Interstitial fibrosis/tubular atrophy increased from 18% at day 0 to 51% and 72% at months 3 and 12 (p < 0.0001). GFR was 50 ± 17 mL/min/1.73 m2 and 48 ± 17 mL/min/1.73 m2 at 3 and 12 months. PRA decreased significantly after IVIg (class I: from 18 ± 27% to 5 ± 12%, p < 0.01; class II: from 25 ± 30% to 7 ± 16%, p < 0.001). Patient and graft survival were 97% and 95%, respectively, and no graft was lost due to rejection (mean follow-up 25 months). In conclusion, prophylactic IVIg in high-immunological-risk patients is associated with good one-year outcomes, with adequate GFR and a profound decrease in PRA level, but a significant increase in allograft nephropathy. [source]


    Regional anaesthesia and pain management

    ANAESTHESIA, Issue 2010
    I. Power
    Summary Despite recent advances in analgesia delivery techniques and the availability of new analgesic agents with favourable pharmacokinetic profiles, current evidence suggests that postoperative pain continues to be inadequately managed, with the proportion of patients reporting severe or extreme postoperative pain having changed little over the past decade. Regional techniques are superior to systemic opioid agents with regard to analgesia profile and adverse effects in the context of general, thoracic, gynaecological, orthopaedic and laparoscopic surgery. Outcome studies demonstrate that regional analgesic techniques also reduce multisystem co-morbidity and mortality following major surgery in high risk patients. This review discusses the efficacy of regional anaesthetic techniques for acute postoperative analgesia, the impact of regional block techniques on physiological outcomes, and the implications of acute peri-operative regional anaesthesia for chronic (persistent) postoperative pain. [source]


    Vagal Afferent Stimulation as a Cardioprotective Strategy?

    ANNALS OF NONINVASIVE ELECTROCARDIOLOGY, Issue 4 2005
    Introducing the Concept
    The effect of vagal afferent signaling on cardioinhibition has been well known for over 130 years. Both experimental and clinical studies have demonstrated not only the potential adverse effect of unrestrained sympathoexcitation in high risk patients with ischemic heart disease but also the potential for cardioprotection by programmed vagal activity. The vasodepressor and negative chronotropic effects of efferent vagal stimulation have been a cause for concern. However, it is becoming clear that favorable shifts towards increased cardiac vagal modulation can be achieved by vagal afferent nerve stimulation. This phasic effect appears to operate through central medullary pathways. Thus, by engaging vagal afferent fibers in humans, there is the possibility that one can exploit the benefits of central cardioinhibition without adversely affecting heart rate, respiration or hemodynamics. This commentary explores the background and rationale for considering vagal afferent stimulation as a plausible cardioprotective strategy. [source]


    Results of percutaneous transhepatic cholecystostomy for high surgical risk patients with acute cholecystitis

    ANZ JOURNAL OF SURGERY, Issue 4 2010
    Kenneth S. H. Chok
    Abstract Aim: To assess the efficacy and safety of percutaneous transhepatic cholecystostomy (PTC) in the treatment of acute cholecystitis in high surgical risk patients. Patients and methods: A retrospective review was carried out from January 1999 to June 2007 on 23 patients, 11 males and 12 females, who underwent PTC for the management of acute cholecystitis at the Department of Surgery, Queen Mary Hospital, Hong Kong, China. The mean age of the patients was 83 years. They all had either clinical or radiological evidence of acute cholecystitis and had significant pre-morbid diseases. The median follow-up period was 35 months. Results: All the PTCs performed were technically successful. One patient died from procedure-related haemoperitoneum, while 87% (n = 20) of all the patients had clinical resolution of sepsis by 20 h after PTC. Eight patients underwent elective cholecystectomy afterwards (62.5% with the laparoscopic approach). Eight patients had dislodgement of the PTC catheter, and one of them developed recurrent acute cholecystitis 3 months after PTC; that patient was treated conservatively. Four patients died from their pre-morbid conditions during the follow-up period. Conclusion: PTC was a safe and effective alternative for treating acute cholecystitis in this group of patients. Thirteen of the patients who did not undergo elective cholecystectomy did not have recurrent acute cholecystitis after a single session of PTC. It may be considered a definitive treatment for this group of patients. [source]


    TRAM flap delay: an extraperitoneal laparoscopic technique

    ANZ JOURNAL OF SURGERY, Issue 10 2005
    Ardalan Ebrahimi
    Although the transverse rectus abdominis musculocutaneous (TRAM) flap is the gold standard in autogenous breast reconstruction, it is less reliable in patients at high risk of ischaemic compromise. A preliminary delay procedure involving ligation of the deep inferior epigastric vessels has been shown to augment flap vascularity and improve outcome in those high risk patients undergoing unipedicled TRAM flap reconstruction. Despite a previous description of a transperitoneal laparoscopic technique, surgical delay generally continues to be performed as an open procedure. This may reflect apprehension over the transperitoneal approach, with its attendant risk of injury to intra-abdominal organs and vessels as well as adhesion formation. In this paper we describe an extraperitoneal laparoscopic technique for TRAM flap delay. Access to the deep inferior epigastric vessels is obtained using an extraperitoneal approach similar to that used for total extraperitoneal laparoscopic inguinal hernia repair, and the vessels are easily identified and ligated using a single working port. While further study is required to evaluate the safety and efficacy of this technique, we report it as an alternative to the established open procedure; it may be particularly useful for bilateral TRAM flap delay, with the potential to reduce operative time, postoperative pain and scarring by avoiding bilateral inguinal incisions. [source]


    Initial experience of abdominal aortic aneurysm repairs in Borneo

    ANZ JOURNAL OF SURGERY, Issue 10 2003
    Ming Kon Yii
    Background: Abdominal aortic aneurysm (AAA) repairs are routine operations with low mortality in the developed world. There are few studies on the operative management of AAA in the Asian population. This study reports the initial results from a unit with no previous experience in this surgery, by a single surgeon on completion of training. Methods: All patients with AAA repair from a prospective database between 1996 and 1999 in the south-east Asian state of Sarawak on Borneo Island were analyzed. Three groups were identified on presentation according to the clinical urgency of surgery. Elective surgery was offered to all good risk patients with AAA of ≥ 5 cm. All symptomatic patients were offered surgery unless contraindicated medically. Results: AAA repairs were performed in 69 patients: 32 (46%) had elective repairs of asymptomatic AAA; 20 (29%) had urgent surgery for symptomatic non-ruptured AAA; and 17 (25%) had surgery for ruptured AAA. The mortality rate for elective surgery was 6%; the two deaths occurred early in the series, and the subsequent 25 repairs recorded no further mortality. The mortality rates for urgent, symptomatic non-ruptured AAA repair and ruptured AAA repair were 20% and 35%, respectively. Cardiac and respiratory complications were the main morbidities. Sixty-three patients seen during this period had no surgery; three presented and died of ruptured AAA, 34 had AAA of < 5 cm in diameter, and 26 with AAA of ≥ 5 cm diameter had either no consent for surgery or serious medical contraindications. Conclusion: This study showed that AAA can be repaired safely by highly motivated and adequately trained surgeons in a hospital with little previous experience. [source]


    Clinical utility of magnetic resonance imaging and the preoperative identification of low risk endometrial cancer

    AUSTRALIAN AND NEW ZEALAND JOURNAL OF OBSTETRICS AND GYNAECOLOGY, Issue 5 2004
    Giovanni LOSCO
    Abstract Background: Magnetic resonance imaging (MRI) is reported to offer the best imaging of local disease in endometrial cancer. We audited MRI scans to identify their clinical utility, particularly in the preoperative identification of 'low risk' endometrial cancer (grade one or two endometrioid tumours confined to the inner half of the myometrium). Aim: To correlate histological and MRI findings and to establish our ability to preoperatively identify women with 'low risk' tumours. Study design: A retrospective audit of MRI scans in women with a new diagnosis of endometrial cancer from July 1998 to November 2002. Radiology and pathology reports and surgical staging data were extracted. Independently, a team of radiologists reviewed the MRI films, and the findings were compared with pathology. Results: Thirty-nine patients were included. Only 10% of the original reports contained all the clinically relevant information. On review, the sensitivity for the detection of myometrial invasion was 90%, specificity 71%, positive predictive value (PPV) 93% and negative predictive value (NPV) 63%. For the detection of deep invasion, sensitivity was 56%, specificity 77%, PPV 64% and NPV 71%. All women with grade one or two tumours having no invasion, or grade one tumours having superficial invasion, detected on MRI had pathological 'low risk' disease. Conclusions: MRI scans as reported offered limited clinical benefit. Attention needs to be given to MRI sequencing and reporting protocols. If the review results can be confirmed by prospective studies, MRI offers significant clinical utility in the identification of low risk patients and their surgical treatment planning. [source]