Many Cases (many + case)



Selected Abstracts


Detection of Childhood Visual Impairment in At-Risk Groups

JOURNAL OF POLICY AND PRACTICE IN INTELLECTUAL DISABILITIES, Issue 3 2007
Heleen Evenhuis
Abstract Children with intellectual disabilities have an increased risk of visual impairment, caused by both ocular and cerebral abnormalities, but this risk has not been quantified. The same applies to preterm children and children with cerebral palsy with a normal intelligence. Many cases probably go unidentified, because participation of these children in preschool vision screening programs is not guaranteed, or because no screening program is available. Although there may be a case for specific screening, there is insufficient scientific evidence supporting such a claim. A "safety net" construction for vision screening is proposed by a Dutch expert working party, based on scientific information and joint professional expertise, to improve identification of both ocular and cerebral visual impairment in at-risk groups. Costs and gains of the model should be scientifically evaluated in a test region. [source]


Inactivation of Salmonella enterica serovar enteritidis in shell eggs by sequential application of heat and ozone

LETTERS IN APPLIED MICROBIOLOGY, Issue 6 2008
J.J. Perry
Abstract Aims: To assess the contribution of ozone to lethality of Salmonella enterica serovar Enteritidis in experimentally inoculated whole shell eggs that are sequentially treated with heat and gaseous ozone in pilot-scale equipment. Methods and Results: Whole shell eggs were inoculated with small populations of Salmonella Enteritidis (8·5 × 10⁴–2·4 × 10⁵ CFU per egg) near the egg vitelline membrane. Eggs were subjected to immersion heating (57°C for 21 min), ozone treatment (vacuum at 67·5 kPa, followed by ozonation at a maximum concentration of approx. 140 g ozone m⁻³ and 184–198 kPa for 40 min) or a combination of both treatments. Survivors were detected after an enrichment process or enumerated using a modified most probable number technique. Ozone, heat and combination treatments inactivated 0·11, 3·1 and 4·2 log Salmonella Enteritidis per egg, respectively. Conclusions: Sequential application of heat and gaseous ozone was significantly more effective than either heat or ozone alone. The demonstrated synergy between these treatment steps should produce safer shell eggs than the heat treatment alone. Significance and Impact of the Study: Shell eggs are the most common vehicle for human infection by Salmonella Enteritidis. Many cases of egg-related salmonellosis are reported annually despite efforts to reduce contamination, including thermal pasteurization of shell eggs and egg products. The ozone-based combination treatment should produce shell eggs safer than those treated with heat alone. [source]
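A quick additivity check clarifies the synergy claim: if heat and ozone acted independently, their log reductions would be expected to combine roughly additively, and the observed combined reduction exceeds that sum (figures taken from the abstract above):

```latex
\underbrace{0.11}_{\text{ozone}} + \underbrace{3.1}_{\text{heat}} = 3.2 \;<\; \underbrace{4.2}_{\text{heat + ozone}}
\qquad (\log_{10}\ \text{reductions per egg})
```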


Suicide patterns and characteristics in Akita, Japan

PSYCHIATRY AND CLINICAL NEUROSCIENCES, Issue 3 2005
MASAHITO FUSHIMI md
Abstract, Akita Prefecture currently has the highest rate of suicide in Japan. Given this alarming statistic, investigation of the underlying causes of suicide and identification of strategies for suicide prevention are imperative. Members of the Akita Prefectural Medical Association (APMA) see most of the individuals who commit suicides in Akita Prefecture, so data from the APMA would prove advantageous in any investigation of suicides. In this study, members of the APMA who had attended to individuals who had committed suicide were asked to complete a questionnaire about the case to determine the factors underlying suicide in Akita Prefecture. From 1 July 2001 to 30 June 2002, a total of 138 cases (102 males, 36 females) of suicide were reported. Most suicide cases were of 50,69 year olds. Many cases involved relatively lethal methods (such as hanging). Most suicides were performed at home and at a time when the rest of the family was asleep or absent. The most ,common ,complaint ,appeared ,to ,be ,economic-related ,problems. ,Depressive ,disorder ,was the most common psychiatric disorder, and many cases displayed high depressive trait scores. The present results do not exclude the possibility that economic-related problems are playing a major role in recent increases in suicide numbers. However, strategies for dealing with depression as well as economic-related problems are considered important. [source]


Case ascertainment and estimated incidence of drug-induced long-QT syndrome: study in Southwest France

BRITISH JOURNAL OF CLINICAL PHARMACOLOGY, Issue 3 2008
Mariam Molokhia
WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT: Drug-induced long-QT syndrome (LQTS) is a potentially fatal condition that has led to a number of postmarketing withdrawals in recent years. However, many cases may not survive long enough to reach hospital, and only a small proportion are reported to pharmacovigilance agencies. The extent to which genetic determinants of susceptibility to LQTS are specific to particular drugs, or common to several classes of drug, remains to be determined. WHAT THIS STUDY ADDS: We estimated the population prevalence of drug-induced LQTS in the Midi-Pyrenees region, southwest France, using five different institutions, and assessed the feasibility of tracing potential cases (in addition to pharmacovigilance data) using hospital data and a rigorous case definition. These methods can be adapted to a wider region, used to augment pharmacovigilance reporting, and offer researchers the opportunity to study genetic susceptibility to drug-induced LQTS. AIMS The aim of this study was to investigate the incidence and reporting rate of drug-induced long-QT syndrome (LQTS) in France [defined by evidence of torsades de pointes (TdP), QT prolongation and exposure to a relevant drug] and to assess the feasibility of case collection for drug-induced LQTS. METHODS A retrospective population-based study was carried out in Southwest France in five institutions: three main hospitals, one private clinic and one cardiac emergency unit, searched from 1 January 1999 to 1 January 2005 (population coverage of 614 000). The study population consisted of 861 cases with International Classification of Diseases-10 diagnostic codes for ventricular tachycardia (I47.2), ventricular fibrillation (I49.0) and sudden cardiac death (I46.1) from hospital discharge summaries, supplemented by cases reported to national or regional pharmacovigilance systems, and voluntary reporting by physicians, validated according to internationally defined criteria for drug-induced LQTS. RESULTS Of 861 patients coded with arrhythmias or sudden cardiac death, there were 40 confirmed surviving acquired cases of drug-induced LQTS. We estimated that the incidence of drug-induced LQTS among those who survive to reach hospital is approximately 10.9 per million annually in France (95% confidence interval 7.8, 14.8). CONCLUSIONS Many cases of drug-induced LQTS may not survive long enough to reach hospital, and the reporting rate for drug-induced LQTS identified through the cardiology records and also reported to pharmacovigilance systems for the Midi-Pyrenees area was only 3/40 (7.5%). Using the methods outlined it is possible to assemble cases to study genetic susceptibility to drug-induced LQTS and adapt these methods more widely. [source]
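The incidence figure is consistent with the reported counts, assuming the full six-year search window (1 January 1999 to 1 January 2005) and the stated population coverage of 614 000:

```latex
\frac{40\ \text{confirmed cases}}{614\,000\ \text{persons}\times 6\ \text{years}} \approx 10.9\ \text{per million person-years}
```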


FlattGen: Teaching tool for surface flattening

COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 2 2006
Simon Kolmani
Abstract In many cases in industry we face the problem of manufacturing an object out of thin planar material. This is especially the case in the car, airplane, shipbuilding, textile, and shoe-making industries. In order to manufacture such an object, a pattern has to be generated first. It has to be cut out of the planar material and then bent into the final shape. The same problem is also found in computer graphics, where flat patterns are used to decrease distortions in texture mapping. Therefore, it is important for designers and computer engineers to master flat pattern generation. In the literature, a great number of methods for pattern generation can be found, and it is important to know their advantages and weaknesses. In this article, the application FlattGen is presented, in which the most important flattening methods can be examined and compared with each other. In this way, students can experiment and prepare themselves better for their future work. © 2006 Wiley Periodicals, Inc. Comput Appl Eng Educ 14: 106–119, 2006; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20060 [source]


An Optimizing Compiler for Automatic Shader Bounding

COMPUTER GRAPHICS FORUM, Issue 4 2010
Petrik Clarberg
Abstract Programmable shading provides artistic control over materials and geometry, but the black box nature of shaders makes some rendering optimizations difficult to apply. In many cases, it is desirable to compute bounds of shaders in order to speed up rendering. A bounding shader can be automatically derived from the original shader by a compiler using interval analysis, but creating optimized interval arithmetic code is non-trivial. A key insight in this paper is that shaders contain metadata that can be automatically extracted by the compiler using data flow analysis. We present a number of domain-specific optimizations that make the generated code faster, while computing the same bounds as before. This enables wider use and opens up possibilities for more efficient rendering. Our results show that on average 42–44% of the shader instructions can be eliminated for a common use case: single-sided bounding shaders used in lightcuts and importance sampling. [source]
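The idea of a bounding shader can be illustrated with a minimal interval-arithmetic sketch (illustrative only, not the authors' compiler or its optimizations; the shader fragment and input ranges below are invented): each shader input is replaced by an interval, and arithmetic is redefined so the result is guaranteed to enclose every value the original shader could produce over that region.

```python
# Minimal interval-arithmetic sketch of a "bounding shader".
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

def clamp_nonneg(x: Interval) -> Interval:
    """Interval version of max(x, 0), as used in a diffuse term."""
    return Interval(max(x.lo, 0.0), max(x.hi, 0.0))

# Hypothetical shader fragment: radiance = max(n.l, 0) * albedo.
# Over some region of the scene, n.l is only known to lie in [-0.2, 0.9]
# and the textured albedo in [0.1, 0.8].
n_dot_l = Interval(-0.2, 0.9)
albedo = Interval(0.1, 0.8)

bound = clamp_nonneg(n_dot_l) * albedo
print(f"radiance is bounded by [{bound.lo}, {bound.hi}]")   # [0.0, 0.72]
```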


Wrinkling Coarse Meshes on the GPU

COMPUTER GRAPHICS FORUM, Issue 3 2006
J. Loviscach
The simulation of complex layers of folds of cloth can be handled through algorithms which take the physical dynamics into account. In many cases, however, it is sufficient to generate wrinkles on a piece of garment which mostly appears spread out. This paper presents a corresponding fully GPU-based, easy-to-control, and robust method to generate and render plausible and detailed folds. This simulation is generated from an animated mesh. A relaxation step ensures that the behavior remains globally consistent. The resulting wrinkle field controls the lighting and distorts the texture in a way which closely simulates an actually deformed surface. No highly tessellated mesh is required to compute the position of the folds or to render them. Furthermore, the solution provides a 3D paint interface through which the user may bias the computation in such a way that folds already appear in the rest pose. Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Animation, I.3.7 [Computer Graphics]: Color, shading, shadowing, and texture [source]


Priority-Driven Acoustic Modeling for Virtual Environments

COMPUTER GRAPHICS FORUM, Issue 3 2000
Patrick Min
Geometric acoustic modeling systems spatialize sounds according to reverberation paths from a sound source to a receiver to give an auditory impression of a virtual 3D environment. These systems are useful for concert hall design, teleconferencing, training and simulation, and interactive virtual environments. In many cases, such as in an interactive walkthrough program, the reverberation paths must be updated within strict timing constraints, e.g., as the sound receiver moves under interactive control by a user. In this paper, we describe a geometric acoustic modeling algorithm that uses a priority queue to trace polyhedral beams representing reverberation paths in best-first order until some termination criterion is met (e.g., an expired time-slice). The advantage of this algorithm is that it is more likely to find the highest-priority reverberation paths within a fixed time-slice, avoiding many geometric computations for lower-priority beams. Yet, there is overhead in computing priorities and managing the priority queue. The focus of this paper is to study the trade-offs of the priority-driven beam tracing algorithm with different priority functions. During experiments computing reverberation paths between a source and a receiver in a 3D building environment, we find that priority functions incorporating more accurate estimates of source-to-receiver path length are more likely to find early reverberation paths useful for spatialization, especially in situations where the source and receiver cannot reach each other through trivial reverberation paths. However, when receivers are added to the environment such that it becomes more densely and evenly populated, this advantage diminishes. [source]
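A schematic of the priority-driven loop is sketched below (an illustrative toy, not the paper's implementation: real beams are polyhedra reflected through scene polygons, whereas here the beam geometry is replaced by synthetic data so only the best-first control flow and the time-slice termination are shown).

```python
# Schematic priority-driven beam tracing loop (toy sketch only).
# Each "beam" carries (accumulated path length, estimated distance left to
# the receiver, reflection depth); expansion is synthetic.
import heapq
import random
import time

random.seed(0)

def expand(beam):
    """Hypothetical stand-in for reflecting a beam off candidate surfaces."""
    accumulated, to_receiver, depth = beam
    children = []
    for _ in range(3):                        # pretend three surfaces are hit
        bounce = random.uniform(1.0, 4.0)     # extra path length of this bounce
        progress = random.uniform(0.5, 1.0) * bounce
        children.append((accumulated + bounce,
                         max(0.0, to_receiver - progress),
                         depth + 1))
    return children

def trace_paths(time_slice=0.01, max_depth=8):
    paths = []
    source_to_receiver = 6.0                  # straight-line estimate (metres)
    queue = [(source_to_receiver, (0.0, source_to_receiver, 0))]
    deadline = time.monotonic() + time_slice
    while queue and time.monotonic() < deadline:   # time-slice termination
        _, beam = heapq.heappop(queue)             # most promising beam first
        accumulated, to_receiver, depth = beam
        if to_receiver < 1.0:                      # beam reaches the receiver
            paths.append(accumulated + to_receiver)
            continue
        if depth >= max_depth:
            continue
        for child in expand(beam):
            # Priority = length so far + estimate of remaining length.
            heapq.heappush(queue, (child[0] + child[1], child))
    return paths

print(sorted(trace_paths())[:5])   # shortest reverberation paths found in the slice
```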


The embedded ion method: A new approach to the electrostatic description of crystal lattice effects in chemical shielding calculations

CONCEPTS IN MAGNETIC RESONANCE, Issue 5 2006
Dirk Stueber
Abstract The nuclear magnetic shielding anisotropy of NMR active nuclei is highly sensitive to the nuclear electronic environment. Hence, measurements of the nuclear magnetic shielding anisotropy represent a powerful tool in the elucidation of molecular structure for a wide variety of materials. Quantum mechanical ab initio nuclear magnetic shielding calculations effectively complement the experimental NMR data by revealing additional structural information. The accuracy and capacity of these calculations have been improved considerably in recent years. However, the inherent problem of the limitation in the size of the systems that may be studied due to the relatively demanding computational requirements largely remains. Accordingly, ab initio shielding calculations have been performed predominantly on isolated molecules, neglecting the molecular environment. This approach is sufficient for neutral nonpolar systems, but leads to serious errors in the shielding calculations on polar and ionic systems. Conducting ab initio shielding calculations on clusters of molecules (i.e., including the nearest neighbor interactions) has improved the accuracy of the calculations in many cases. Other methods of simulating crystal lattice effects in shielding calculations that have been developed include the electrostatic representation of the crystal lattice using point charge arrays, full ab initio methods, ab initio methods under periodic boundary conditions, and hybrid ab initio/molecular dynamics methods. The embedded ion method (EIM) discussed here follows the electrostatic approach. The method mimics the intermolecular and interionic interactions experienced by a subject molecule or cluster in a given crystal in quantum mechanical shielding calculations with a large finite, periodic, and self-consistent array of point charges. The point charge arrays in the EIM are generated using the Ewald summation method and embed the molecule or ion of interest for which the ab initio shielding calculations are performed. The accuracy with which the EIM reproduces experimental nuclear magnetic shift tensor principal values, the sensitivity of the EIM to the parameters defining the point charge arrays, as well as the strengths and limitations of the EIM in comparison with other methods that include crystal lattice effects in chemical shielding calculations, are presented. © 2006 Wiley Periodicals, Inc. Concepts Magn Reson Part A 28A: 347–368, 2006 [source]


Motional smearing of electrically recovered couplings measured from multipulse transients

CONCEPTS IN MAGNETIC RESONANCE, Issue 3 2001
Scott A. Riley
Abstract The measurement of residual dipolar and quadrupolar coupling constants in the liquid phase by using an electric field to destroy the isotropic nature of molecular tumbling is complicated by charge-induced turbulent motion. In many cases this motion is due to charge injection at electrode surfaces, an effect that leads to an apparent removal of electrically recovered anisotropic spectral splittings when measured from a spin-echo envelope modulation produced by a train of radio frequency (rf) pulses. To understand this averaging, the effect of quadrupolar couplings and enhanced molecular diffusion on free-induction, spin-echo, and Carr–Purcell signals is analytically determined in the special case of homogeneous rf pulses. Additional signal damping due to rf inhomogeneity and coupling constant heterogeneity is determined by numerically extending the kernel formalism introduced by Herzog and Hahn to understand spin diffusion in solids. Finally, the merit of the numerical approach is tested by comparison with analytical results for homogeneous rf pulses and experimental results for perdeuterated nitrobenzene involving inhomogeneous rf pulses and coupling heterogeneity. © 2001 John Wiley & Sons, Inc. Concepts Magn Reson 13: 171–189, 2001 [source]


An evidence-based iterative content trust algorithm for the credibility of online news

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2009
Guosun Zeng
Abstract People encounter more information than they can possibly use every day. But not all information is of equal value. In many cases, certain information appears to be better, or more trustworthy, than other information. The challenge most people then face is to judge which information is more credible. In this paper we propose a new problem called Corroboration Trust, which studies how to find credible news events by seeking more than one source to verify information on a given topic. We design an evidence-based corroboration trust algorithm called TrustNewsFinder, which utilizes the relationships between news articles and related evidence information (person, location, time and keywords about the news). A news article is trustworthy if it provides many pieces of trustworthy evidence, and a piece of evidence is likely to be true if it is provided by many trustworthy news articles. Our experiments show that TrustNewsFinder successfully finds true events among conflicting information and identifies trustworthy news better than the popular search engines. Copyright © 2009 John Wiley & Sons, Ltd. [source]
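The mutual-reinforcement rule quoted above can be sketched as a simple fixed-point iteration over a bipartite article-evidence graph (a toy illustration of the stated definition, not the published TrustNewsFinder algorithm; the graph below is invented).

```python
# Toy fixed-point iteration: article trust <-> evidence trust.
# TrustNewsFinder itself uses persons, locations, times and keywords
# extracted from real news articles; this graph is made up.
import math

articles_to_evidence = {
    "article_A": {"ev1", "ev2", "ev3"},
    "article_B": {"ev2", "ev3"},
    "article_C": {"ev4"},          # article supported only by an isolated claim
}
evidence = {e for evs in articles_to_evidence.values() for e in evs}

article_trust = {a: 1.0 for a in articles_to_evidence}
evidence_trust = {e: 1.0 for e in evidence}

for _ in range(20):
    # Evidence is likely true if it is provided by many trustworthy articles.
    for e in evidence:
        evidence_trust[e] = sum(article_trust[a]
                                for a, evs in articles_to_evidence.items() if e in evs)
    # An article is trustworthy if it provides many trustworthy pieces of evidence.
    for a, evs in articles_to_evidence.items():
        article_trust[a] = sum(evidence_trust[e] for e in evs)
    # Normalize so the scores converge instead of growing without bound.
    norm_a = math.sqrt(sum(v * v for v in article_trust.values()))
    norm_e = math.sqrt(sum(v * v for v in evidence_trust.values()))
    article_trust = {a: v / norm_a for a, v in article_trust.items()}
    evidence_trust = {e: v / norm_e for e, v in evidence_trust.items()}

print({a: round(v, 3) for a, v in article_trust.items()})
```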


The Neutralizer: a self-configurable failure detector for minimizing distributed storage maintenance cost

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2 2009
Zhi Yang
Abstract To achieve high data availability or reliability in an efficient manner, distributed storage systems must detect whether an observed node failure is permanent or transient, and if necessary, generate replicas to restore the desired level of replication. Given the unpredictability of network dynamics, however, distinguishing permanent and transient failures is extremely difficult. Though timeout-based detectors can be used to avoid mistaking transient failures for permanent failures, it is unknown how the timeout values should be selected to achieve a good tradeoff between detection latency and accuracy. In this paper, we address this fundamental tradeoff from several perspectives. First, we explore the impact of different timeout values on maintenance cost by examining the probability of their false positives and false negatives. Second, we propose a self-configurable failure detector called the Neutralizer based on the idea of counteracting false positives with false negatives. The Neutralizer could enable the system to maintain a desired replication level on average with the least amount of bandwidth. We conduct extensive simulations using real trace data from a widely deployed peer-to-peer system and synthetic traces based on PlanetLab and Microsoft PCs, showing a significant reduction in aggregate bandwidth usage after applying the Neutralizer (especially in an environment with a low average node availability). Overall, we demonstrate that the Neutralizer closely approximates the performance of a perfect 'oracle' detector in many cases. Copyright © 2008 John Wiley & Sons, Ltd. [source]
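The balancing idea can be sketched with a small calculation (a toy under invented failure statistics, not the Neutralizer's actual estimator): for each candidate timeout, count transient outages that would be wrongly declared permanent (false positives) and permanent failures that would be declared too late to matter (false negatives), then pick the timeout where the two roughly cancel.

```python
# Sketch: self-configuring a failure-detection timeout so that false positives
# (transient outages declared permanent, triggering needless replicas) roughly
# cancel false negatives (permanent failures declared after their repair
# deadline). All downtime statistics below are synthetic.
import random

random.seed(1)

# Transient outages: how long the node stays away before returning (hours).
transient_downtimes = [random.expovariate(1 / 2.0) for _ in range(800)]
# Permanent failures: the deadline (hours) by which a replacement copy is
# needed to keep the desired replication level.
repair_deadlines = [random.uniform(1.0, 24.0) for _ in range(200)]

def false_positives(timeout):
    # Transient outage outlasts the timeout -> wrongly declared permanent.
    return sum(1 for d in transient_downtimes if d > timeout)

def false_negatives(timeout):
    # Permanent failure declared only after its repair deadline has passed.
    return sum(1 for deadline in repair_deadlines if timeout > deadline)

# Self-configuration: pick the timeout where the two error types counteract.
candidates = [t / 2 for t in range(1, 49)]   # 0.5 h .. 24 h
best = min(candidates,
           key=lambda t: abs(false_positives(t) - false_negatives(t)))
print(f"selected timeout: {best:.1f} h  "
      f"(false positives={false_positives(best)}, "
      f"false negatives={false_negatives(best)})")
```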


Formaldehyde-releasers: relationship to formaldehyde contact allergy.

CONTACT DERMATITIS, Issue 1 2010
Part 2.
This is the second part of a review article on formaldehyde-releasers used as durable press chemical finishes (DPCF) in textiles. The early finishes contained large amounts of free formaldehyde, which led to many cases of allergic contact dermatitis to clothes in the 1950s and 1960s. Currently, most finishes are based on modified dimethylol dihydroxyethyleneurea, which releases less formaldehyde. Nevertheless, recent studies in the United States and Israel have identified patients reacting to DPCF, considered to have allergic contact reactions to clothes, either from formaldehyde released by the DPCF therein or from the DPCF per se (in patients negative to formaldehyde). However, all studies had some weaknesses in design or interpretation and in not a single case has the clinical relevance been proven. The amount of free formaldehyde in most garments will likely be below the threshold for the elicitation of dermatitis for all but the most sensitive patients. The amount of free cyclized urea DPCF in clothes is unlikely to be high enough to cause sensitization. Patch test reactions to formaldehyde-releasing DPCF will in most cases represent a reaction to formaldehyde released from the test material. [source]


Twenty-five years quaternium-15 in the European baseline series: does it deserve its place there?

CONTACT DERMATITIS, Issue 4 2010
Anton C. De Groot
For allergens to be included in the European baseline series, they should have allergy rates of at least 1%. In several studies quaternium-15 had lower scores. Also, many cases of sensitization are already detected by formaldehyde contact allergy. The aim of this study was to evaluate whether quaternium-15 deserves continued inclusion in the baseline series on the basis of current criteria: 1% positive reactions, common occurrence in the environment, many relevant reactions. We used the literature survey method in this study. Only the United Kingdom has rates consistently over 1%. The mean for all other countries together and for many individual nations is lower than 1%. At least half of the reactions are already detected by formaldehyde sensitivity, which lowers rates for allergy to quaternium-15 per se (i.e. not caused or at least detected by formaldehyde sensitivity) to less than 0.6% for all countries except the United Kingdom. Neither common occurrence in the environment nor a high percentage of relevant reactions has been ascertained. It may well be argued that quaternium-15 does not deserve its place in the European baseline series and could be incorporated in a cosmetic screening series or preservative series instead. In the United Kingdom, routine testing should be continued. [source]


Skin sensitizing properties of the ethanolamines mono-, di-, and triethanolamine.

CONTACT DERMATITIS, Issue 5 2009
Data analysis of a multicentre surveillance network (IVDK) and review of the literature
Numerous publications address the skin sensitizing potential of the short-chain alkanolamines triethanolamine (TEA), diethanolamine (DEA) and monoethanolamine (MEA), which are not skin sensitizing according to animal studies. Regarding TEA, we analysed patch test data of 85 098 patients who had been tested with TEA 2.5% petrolatum by the Information Network of Departments of Dermatology (IVDK) to identify particular exposures possibly associated with an elevated risk of sensitization. Altogether, 323 patients (0.4%) tested positive. The profile of patch test reactions indicates a slight irritant potential rather than a true allergic response in many cases. Although TEA is widely used, no exposure associated with an increased risk of TEA sensitization was identified. Therefore, the risk of sensitization to TEA seems to be very low. MEA and DEA were patch tested in a much more aimed fashion in 9602 and 8791 patients, respectively, with contact allergy prevalences of 3.8% and 1.8%. MEA is the prominent allergen in metalworkers with exposure to water-based metalworking fluids (wbMWFs); DEA is probably used in cutting fluids less frequently nowadays. Chronic damage to the skin barrier resulting from wbMWFs, the alkalinity of the ethanolamines (increasing from TEA to MEA), and other cofactors may contribute to a notable sensitization risk. [source]


P30 Transparent plastic foils allow a short patch-test application time

CONTACT DERMATITIS, Issue 3 2004
Bolli Bjarnason
Objective: To investigate whether application of allergic patch tests with transparent semi-occlusive adhesive plastic foils yields higher test sensitivity than when tapes are used, and to study whether such foils, compared to tapes, allow a shorter application time of tests. Methods: We applied different doses of budesonide printed on polyester squares, and vehicle control squares, to budesonide-allergic subjects for 4 days. Each subject was tested with a set of tests both with a tape and a foil. We assessed all tests when they had been detached, and additionally assessed those applied with foils at earlier time points. All assessments were performed both visually and with a laser Doppler perfusion imaging technique. Results: Test sensitivity is higher with foil applications than when tapes are used, and the perfusion is higher with the foils in many cases. The foils allow detachment of visually positive tests before 48 hours in some subjects, regardless of dose. Conclusions: Test application with transparent semi-occlusive adhesive plastic foils is sensitive and should be considered for patch tests when a short application time is important, as when tests are carried out with occupationally hazardous allergens or when test substances containing allergens are expected to be irritating. [source]


The βII isotype of tubulin is present in the cell nuclei of a variety of cancers

CYTOSKELETON, Issue 2 2004
I-Tien Yeh
Abstract Tubulin, the subunit protein of microtubules, has generally been thought to be exclusively a cytoplasmic protein in higher eukaryotes. We have previously shown that cultured rat kidney mesangial cells contain the βII isotype of tubulin in their nuclei in the form of an αβII dimer [Walss et al., Cell Motil. Cytoskeleton 42:274–284, 1999]. More recently, we examined a variety of cancerous and non-cancerous cell lines and found βII in the nuclei of all of the former and only a few of the latter [Walss-Bass et al., Cell Tissue Res. 308:215–223, 2002]. In order to determine if βII-tubulin occurs in the nuclei of actual cancers as well as in cancer cell lines, we used the immunoperoxidase method to look for nuclear βII in a variety of tumors excised from 201 patients. We found that 75% of these tumors contain βII in their nuclei. Distribution of nuclear βII was highly dependent on the type of cancer, with 100% of the colon and prostate cancers, but only 19% of the skin tumors, having nuclear βII. Nuclear βII was particularly marked in tumors of epithelial origin, of which 83% showed nuclear βII, in contrast to 54% in tumors of non-epithelial origin. In many cases, βII staining occurred very strongly in the nuclei and not in the cytoplasm; in other cases, βII was present in both. In many cases, particularly metastases, otherwise normal cells adjacent to the tumor also showed nuclear βII, suggesting that cancer cells may influence nearby cells to synthesize βII and localize it to their nuclei. Our results have implications for the diagnosis, biology, and chemotherapy of cancer. Cell Motil. Cytoskeleton 57:96–106, 2004. © 2004 Wiley-Liss, Inc. [source]


Surgical extrusion of a crown-root fractured immature permanent incisor: 36 month follow-up

DENTAL TRAUMATOLOGY, Issue 6 2007
Zuhal K
Abstract Crown-root fracture is defined as a fracture involving enamel, dentin and pulp, and can be classified as either complicated or uncomplicated. A tooth with a crown-root fracture presents many problems for coronal restoration, and extraction was formerly used in many cases. However, loss of a permanent incisor in a young patient may create severe emotional problems, so alternative treatment approaches must be considered. This report presents the successful results of surgical extrusion of a complicated crown-root fractured, immature permanent incisor in a 9-year-old boy. Examination 36 months after the trauma indicated that the treatment had provided functional and esthetic results. [source]


Complementary and integrative medical therapies, the FDA, and the NIH: definitions and regulation

DERMATOLOGIC THERAPY, Issue 2 2003
Michael H. Cohen
ABSTRACT: The National Center for Complementary and Alternative Medicine (NCCAM) presently defines complementary and alternative medicine (CAM) as covering "a broad range of healing philosophies (schools of thought), approaches, and therapies that mainstream Western (conventional) medicine does not commonly use, accept, study, understand, or make available." The research landscape, including NCCAM-funded research, is continually changing and subject to vigorous methodologic and interpretive debates. Part of the impetus for greater research dollars in this arena has been dramatically expanding consumer reliance on CAM. State (not federal) law controls much of CAM practice. However, a significant federal role exists in the regulation of dietary supplements. The U.S. Food and Drug Administration (FDA) regulates foods, drugs, and cosmetics in interstate commerce. No new "drug" may be introduced into interstate commerce unless proven "safe" and "effective" for its intended use, as determined by FDA regulations. "Foods", however, are subject to different regulatory requirements, and need not go through trials proving safety and efficacy. The growing phenomenon of consumer use of vitamins, minerals, herbs, and other "dietary supplements" challenged the historical divide between drugs and foods. The federal Dietary Supplement Health and Education Act (DSHEA) allows manufacturers to distribute dietary supplements without having to prove safety and efficacy, so long as the manufacturers make no claims linking the supplements to a specific disease. State law regulates the use of CAM therapies through a variety of legal rules. Of these, several major areas of concern for clinicians are professional licensure, scope of practice, and malpractice. Regarding licensure, each state has enacted medical licensing laws that prohibit the unlicensed practice of medicine and thereby criminalize activity by unlicensed CAM providers who offer health care services to patients. Malpractice is defined as unskillful practice which fails to conform to a standard of care in the profession and results in injury. The definition is no different in CAM than in general medicine; its application to CAM, however, raises novel questions. Courts rely on medical consensus regarding the appropriateness of a given therapy. A framework for assessing potential liability risk involves assessing the medical evidence concerning safety and efficacy, and then aligning clinical decisions with liability concerns. Ultimately research will or will not establish a specific CAM therapy as an important part of the standard of care for the condition in question. Legal rules governing CAM providers and practices are, in many cases, new and evolving. Further, laws vary by state and their application depends on the specific clinical scenario in question. New research is constantly emerging, as are federal and state legislative developments and judicial opinions resulting from litigation. [source]


Binding characteristics of chondroitin sulfate proteoglycans and laminin-1, and correlative neurite outgrowth behaviors in a standard tissue culture choice assay

DEVELOPMENTAL NEUROBIOLOGY, Issue 4 2002
Diane M. Snow
Abstract Neuronal growth cones are capable of sophisticated discrimination of environmental cues, on cell surfaces and in the extracellular matrix, to accomplish navigation during development (generation) and following nervous system injury (regeneration). Choices made by growth cones are commonly examined using tissue culture paradigms in which molecules of interest are purified and substratum-bound. From observations of growth cone behaviors using these paradigms, assertions are made about choices neuronal growth cones may make in vivo. However, in many cases, the binding, interactions, and conformations of these molecules have not been determined. In the present study, we investigated the binding characteristics of two commonly studied outgrowth regulatory molecules: chondroitin sulfate proteoglycans (CSPGs), which are typically inhibitory to neurite outgrowth during development and following nervous system injury, and laminin, which is typically outgrowth promoting for many neuronal types. Using a novel combination of radiolabeling and quantitative fluorescence, we determined the precise concentrations of CSPGs and laminin-1 that were bound separately and together in a variety of choice assays. For identically prepared cultures, we correlated neurite outgrowth behaviors with binding characteristics. The data support our working hypothesis that neuronal growth cones are guided by the ratio of outgrowth-promoting to outgrowth-inhibiting influences in their environment, i.e., they summate local molecular cues. The response of growth cones to these molecular combinations is most likely mediated by integrins and subsequent activation of signal transduction cascades in growth cones. © 2002 Wiley Periodicals, Inc. J Neurobiol 51: 285–301, 2002 [source]


How many cases of Type 2 diabetes mellitus are due to being overweight in middle age?

DIABETIC MEDICINE, Issue 1 2007
Evidence from the Midspan prospective cohort studies using mention of diabetes mellitus on hospital discharge or death records
Abstract Aims To relate body mass index (BMI) in middle age to development of diabetes mellitus. Methods Participants were 6927 men and 8227 women from the Renfrew/Paisley general population study and 3993 men from the Collaborative occupational study. They were aged 45–64 years and did not have reported diabetes mellitus. Cases who developed diabetes mellitus, identified from acute hospital discharge data and from death certificates in the period from screening in 1970–1976 to 31 March 2004, were related to BMI at screening. Results Of the Renfrew/Paisley study men, 5.4% developed diabetes mellitus, as did 4.8% of the women and 5% of the Collaborative study men. Odds ratios for diabetes mellitus were higher in the overweight group (BMI 25 to < 30 kg/m²) than in the normal weight group (BMI 18.5 to < 25 kg/m²) and highest in the obese group (BMI ≥ 30 kg/m²). Compared with the normal weight group, age-adjusted odds ratios for overweight and obese Renfrew/Paisley men were 2.73 [95% confidence interval (CI) 2.05, 3.64] and 7.26 (95% CI 5.26, 10.04), respectively. Further subdividing the normal, overweight and obese groups showed increasing odds ratios with increasing BMI, even at the higher end of the normal range. Assuming a causal relation, around 60% of cases of diabetes could have been prevented if everyone had been of normal weight. Conclusions Overweight and obesity account for a major proportion of diabetes mellitus, as identified from hospital discharge and death records. With recent increases in the prevalence of overweight, the burden of disease related to diabetes mellitus is likely to increase markedly. Primordial prevention of obesity would be a major strategy for reducing the incidence of diabetes mellitus in populations. [source]
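The "around 60%" figure corresponds to a population attributable fraction. The abstract does not give the formula, but the standard calculation from the prevalence p_i of each exposure category and its odds ratio OR_i (used as an approximation to the relative risk) is:

```latex
\mathrm{PAF} \;=\; \frac{\sum_i p_i\,(\mathrm{OR}_i - 1)}{1 + \sum_i p_i\,(\mathrm{OR}_i - 1)}
```

with the sum taken over the overweight and obese categories.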


Gastric emptying in diabetes: clinical significance and treatment

DIABETIC MEDICINE, Issue 3 2002
M. Horowitz
Abstract The outcome of recent studies has led to redefinition of concepts relating to the prevalence, pathogenesis and clinical significance of disordered gastric emptying in patients with diabetes mellitus. The use of scintigraphic techniques has established that gastric emptying is abnormally slow in approx. 30–50% of outpatients with long-standing Type 1 or Type 2 diabetes, although the magnitude of this delay is modest in many cases. Upper gastrointestinal symptoms occur frequently and affect quality of life adversely in patients with diabetes, although the relationship between symptoms and the rate of gastric emptying is weak. Acute changes in blood glucose concentration affect both gastric motor function and upper gastrointestinal symptoms. Gastric emptying is slower during hyperglycaemia when compared with euglycaemia and accelerated during hypoglycaemia. The blood glucose concentration may influence the response to prokinetic drugs. Conversely, the rate of gastric emptying is a major determinant of post-prandial glycaemic excursions in healthy subjects, as well as in Type 1 and Type 2 patients. A number of therapies currently in development are designed to improve post-prandial glycaemic control by modulating the rate of delivery of nutrients to the small intestine. [source]


FNA diagnosis of teratoma lung: A case report

DIAGNOSTIC CYTOPATHOLOGY, Issue 10 2010
Farhan Asif Siddiqui M.D.
Abstract A case of teratoma occurring in the lung of a 27-year-old female, diagnosed by fine-needle aspiration cytology and confirmed by histopathology, is presented here. Occurrence of teratoma at this site is a rare entity. The authors take this opportunity to report such a rare case, as, to the best of our knowledge, not many cases have been reported in the literature to date. Diagn. Cytopathol. 2010;38:758–760. © 2010 Wiley-Liss, Inc. [source]


For a Bayesian Account of Indirect Confirmation

DIALECTICA, Issue 2 2002
Luca Moretti
Laudan and Leplin have argued that empirically equivalent theories can elude underdetermination by resorting to indirect confirmation. Moreover, they have provided a qualitative account of indirect confirmation that Okasha has shown to be incoherent. In this paper, I develop Kukla's recent contention that indirect confirmation is grounded in the probability calculus. I provide a Bayesian rule to calculate the probability of a hypothesis given indirect evidence. I also suggest that the application of the rule presupposes the methodological relevance of non-empirical virtues of theories. If this is true, Laudan and Leplin's strategy will not work in many cases. Moreover, without an independent way of justifying the role of non-empirical virtues in methodology, the scientific realists cannot use indirect evidence to defeat underdetermination. [source]


INTRADUCTAL ULTRASONOGRAPHY OF THE GALLBLADDER IN APPLICATION OF THE ENDOSCOPIC NASO-GALLBLADDER DRAINAGE

DIGESTIVE ENDOSCOPY, Issue 1 2007
Daisuke Masuda
Background: Although endoscopic naso-gallbladder drainage (ENGBD) for gallbladder disease is useful, the procedure is difficult and investigations involving many cases are lacking. Furthermore, reports on transpapillary intraductal ultrasonography (IDUS) of the gallbladder using a miniature probe are rare. Methods: A total of 150 patients (119 suspected of having gallbladder carcinoma, 24 with acute cholecystitis (AC), and seven with Mirizzi's syndrome (MS)) were the subjects. (i) ENGBD: We attempted to place an ENGBD tube into the gallbladder. (ii) IDUS of the gallbladder: Using the previously placed ENGBD tube, we attempted to insert the miniature probe into the gallbladder and perform transpapillary IDUS of the gallbladder. In five patients, we attempted three-dimensional intraductal ultrasonography (3D-IDUS). Results: (i) ENGBD: The overall success rate was 74.7% (112/150); the rate for the patients suspected of having gallbladder carcinoma was 75.6% (90/119), and was 71.0% (22/31) for the AC and MS patients. Inflammation and jaundice improved in 20/22 successfully drained patients with AC and MS. The success rate was higher when cystic duct branching was from the lower and middle parts of the common bile duct than from the upper part, and was higher when branching was upwards rather than downwards. (ii) IDUS of the gallbladder: The success rate for miniature probe insertion into the gallbladder was 96.4% (54/56). Lesions could be visualized in 50/54 patients (92.6%). Of these, detailed evaluation of the locus could be performed in 41. In the five patients in whom 3D-IDUS was attempted, the relationship between the lesion and its location was readily grasped. Conclusion: IDUS of the gallbladder is superior for diagnosing minute lesions. Improvements to the device will further increase its usefulness. [source]


Pre-operative screening for excessive alcohol consumption among patients scheduled for elective surgery

DRUG AND ALCOHOL REVIEW, Issue 2 2007
SWATI SHOURIE
Abstract Pre-operative intervention for excessive alcohol consumption among patients scheduled for elective surgery has been shown to reduce complications of surgery. However, successful intervention depends upon an effective and practical screening procedure. This study examines current screening practices for excessive alcohol consumption amongst patients scheduled for elective surgery in general hospitals. It also examines the appropriateness of potential sites and staff for pre-operative screening. Forms used routinely to assess alcohol consumption in the pre-admission clinics (PAC) of eight Sydney hospitals were examined. In addition, the appropriateness of six staff categories (surgeons, surgeons' secretaries, junior medical officers, anaesthetists, nurses and a research assistant) and of two sites (surgeons' office and PAC) in conducting additional screening was assessed at two hospitals. Outcomes included observed advantages and disadvantages of sites and personnel, and the number of cases with excessive drinking identified. There was duplication in the information collected routinely on alcohol use in the PACs of the eight Sydney hospitals. Questions on alcohol consumption in patient self-completion forms were not validated. The PAC provided for efficient screening, but time to surgery was typically too short for successful intervention in many cases. A validated tool and efficient screening procedure is required to detect excessive drinking before elective surgery. Patients often present to the PAC too close to the time of surgery for any change in drinking to reverse alcohol's effects. The role of the referring general practitioner and of printed advice from the surgeon in preparing patients for surgery needs further investigation. [source]


On the evaluation of seismic response of structures by nonlinear static methods

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 13 2009
Melina Bosco
Abstract In the most recent seismic codes, the assessment of the seismic response of structures may be carried out by comparing the displacement capacity, provided by nonlinear static analysis, with the displacement demand. In many cases the code approach is based on the N2 method proposed by Fajfar, which evaluates the displacement demand by defining, as an intermediate step, a single degree-of-freedom (SDOF) system equivalent to the examined structure. Other codes suggest simpler approaches, which do not require equivalent SDOF systems, but they give slightly different estimates of the seismic displacement demand. The paper points out the differences between the methods and suggests an operative approach that provides the same accuracy as the N2 method without requiring the evaluation of an equivalent SDOF system. A wide parametric investigation allows an accurate comparison of the different methods and demonstrates the effectiveness of the proposed operative approach. Copyright © 2009 John Wiley & Sons, Ltd. [source]
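For reference, the displacement demand in the N2 method is commonly expressed for the equivalent SDOF system in terms of the elastic spectral displacement S_de(T), the strength reduction factor R_mu and the corner period T_C; this is the standard formulation attributed to Fajfar, summarized here for context rather than taken from the abstract:

```latex
S_d =
\begin{cases}
S_{de}(T), & T \ge T_C \quad (\text{equal displacement rule})\\[4pt]
\dfrac{S_{de}(T)}{R_\mu}\left[1 + (R_\mu - 1)\,\dfrac{T_C}{T}\right] \ \ge\ S_{de}(T), & T < T_C
\end{cases}
```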


Controlled overturning of unanchored rigid bodies

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 6 2006
Rubén Boroschek
Abstract Typical small hospital and laboratory equipment and general supplies cannot be anchored to resist earthquake motions. In order to protect these non-structural components, a common procedure is to provide barriers to restrain overturning of objects on shelves and other furniture. In many cases this option is not available, especially for hospital equipment, because of other functional requirements. This work presents an alternative approach. The method proposed here does not avoid overturning, but controls the direction of overturning by providing an inclination to the support base so that the overturning occurs in a preferential direction towards a safe area. For example, objects on shelves could overturn towards the inside or a wall, and equipment on tables could overturn away from the edge. In both cases this would not only reduce the damage to the particular items, but also reduce the amount of debris on the floor. In order to determine the proper inclination of the base, specific rigid bodies are analytically evaluated for bi-directional excitation obtained from 314 earthquake records, in approximately 7500 cases. For each case, several inclination angles are evaluated. Finally, a parametric curve is fitted to the data, giving a relation between the angle of inclination and the percentage of controlled-overturning cases. In all cases a 7° angle gives more than 98% confidence of controlled overturning. The design expressions were later compared with experimental results obtained on a six-degree-of-freedom shake table, confirming the analytical expressions. Copyright © 2006 John Wiley & Sons, Ltd. [source]
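The abstract does not state the governing relation, but the intuition can be illustrated with the classical quasi-static rocking criterion for a rigid rectangular block of half-width b and half-height h: tilting the support base by an angle phi toward the safe side lowers the horizontal acceleration needed to overturn in the preferred direction and raises it in the opposite direction (a simplified illustration, not the authors' design expression):

```latex
a_{\text{preferred}} \approx g\,\tan(\alpha - \varphi), \qquad
a_{\text{opposite}} \approx g\,\tan(\alpha + \varphi), \qquad
\alpha = \arctan\frac{b}{h}
```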


Indirect effects of algae on coral: algae-mediated, microbe-induced coral mortality

ECOLOGY LETTERS, Issue 7 2006
Jennifer E. Smith
Abstract Declines in coral cover are generally associated with increases in the abundance of fleshy algae. In many cases, it remains unclear whether algae are responsible, directly or indirectly, for coral death or whether they simply settle on dead coral surfaces. Here, we show that algae can indirectly cause coral mortality by enhancing microbial activity via the release of dissolved compounds. When coral and algae were placed in chambers together but separated by a 0.02 µm filter, corals suffered 100% mortality. With the addition of the broad-spectrum antibiotic ampicillin, mortality was completely prevented. Physiological measurements showed complementary patterns of increasing coral stress with proximity to algae. Our results suggest that as human impacts increase and algae become more abundant on reefs, a positive feedback loop may be created whereby compounds released by algae enhance microbial activity on live coral surfaces, causing mortality of corals and further algal growth. [source]


Effects of species diversity on disease risk

ECOLOGY LETTERS, Issue 4 2006
F. Keesing
Abstract The transmission of infectious diseases is an inherently ecological process involving interactions among at least two, and often many, species. Not surprisingly, then, the species diversity of ecological communities can potentially affect the prevalence of infectious diseases. Although a number of studies have now identified effects of diversity on disease prevalence, the mechanisms underlying these effects remain unclear in many cases. Starting with simple epidemiological models, we describe a suite of mechanisms through which diversity could increase or decrease disease risk, and illustrate the potential applicability of these mechanisms for both vector-borne and non-vector-borne diseases, and for both specialist and generalist pathogens. We review examples of how these mechanisms may operate in specific disease systems. Because the effects of diversity on multi-host disease systems have been the subject of much recent research and controversy, we describe several recent efforts to delineate under what general conditions host diversity should increase or decrease disease prevalence, and illustrate these with examples. Both models and literature reviews suggest that high host diversity is more likely to decrease than increase disease risk. Reduced disease risk with increasing host diversity is especially likely when pathogen transmission is frequency-dependent, and when pathogen transmission is greater within species than between species, particularly when the most competent hosts are also relatively abundant and widespread. We conclude by identifying focal areas for future research, including (1) describing patterns of change in disease risk with changing diversity; (2) identifying the mechanisms responsible for observed changes in risk; (3) clarifying additional mechanisms in a wider range of epidemiological models; and (4) experimentally manipulating disease systems to assess the impact of proposed mechanisms. [source]