Time Consuming (time + consuming)


Kinds of Time Consuming

  • very time consuming


  • Selected Abstracts


    Adding Depth to Cartoons Using Sparse Depth (In)equalities

    COMPUTER GRAPHICS FORUM, Issue 2 2010
    D. Sýkora
    Abstract This paper presents a novel interactive approach for adding depth information to hand-drawn cartoon images and animations. In comparison to previous depth assignment techniques, our solution requires minimal user effort and enables the creation of consistent pop-ups in a matter of seconds. Inspired by perceptual studies, we formulate a custom-tailored optimization framework that tries to mimic the way a human reconstructs depth information from a single image. Its key advantage is that it completely avoids inputs requiring knowledge of absolute depth and instead uses a set of sparse depth (in)equalities that are much easier to specify. Since these constraints lead to a solution based on quadratic programming that is time consuming to evaluate, we propose a simple approximate algorithm yielding similar results with much lower computational overhead. We demonstrate its usefulness in the context of a cartoon animation production pipeline, including applications such as enhancement, registration, composition, 3D modelling and stereoscopic display. [source]
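
    The sparse (in)equality formulation lends itself to a compact sketch. Below is a minimal quadratic program in this spirit: a smoothness objective over neighbouring segments with pairwise depth-order constraints. The segment graph, margin and solver choice are illustrative assumptions, not the paper's exact formulation.

```python
# Depth assignment from sparse (in)equalities posed as a small QP (sketch).
import numpy as np
from scipy.optimize import minimize

n_segments = 4
smooth_pairs = [(0, 1), (1, 2), (2, 3)]   # neighbouring segments: keep depths close
ineq_pairs = [(0, 2), (1, 3)]             # user strokes: d[a] + margin <= d[b]
margin = 1.0

def objective(d):
    # Quadratic smoothness term over neighbouring segments.
    return sum((d[a] - d[b]) ** 2 for a, b in smooth_pairs)

constraints = [
    {"type": "ineq", "fun": lambda d, a=a, b=b: d[b] - d[a] - margin}
    for a, b in ineq_pairs
]
# Pin one segment to remove the global depth offset.
constraints.append({"type": "eq", "fun": lambda d: d[0]})

res = minimize(objective, np.zeros(n_segments), constraints=constraints)
print(res.x)  # relative depths consistent with the sparse inequalities
```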


    Semi-Automatic 3D Reconstruction of Urban Areas Using Epipolar Geometry and Template Matching

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 7 2006
    José Miguel Sales Dias
    The main challenge is to compute the relevant information (building height and volume, roof description, and texture) algorithmically, because producing it manually for large urban areas is very time consuming and thus expensive. The algorithm requires some initial calibration input and is able to compute the above-mentioned building characteristics from the stereo pair, given the availability of the 2D CAD and the digital elevation model of the same area, with no knowledge of the camera pose or its intrinsic parameters. To achieve this, we have used epipolar geometry, homography computation and automatic feature extraction, and we have solved the feature correspondence problem in the stereo pair by using template matching. [source]
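
    For readers unfamiliar with the two building blocks named here, the sketch below shows them in isolation with OpenCV: estimating the fundamental matrix (epipolar geometry) from correspondences, and localizing a patch by normalized cross-correlation (template matching). The point and image data are synthetic placeholders; the authors' full calibration and height-computation pipeline is not reproduced.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
pts1 = (rng.random((20, 2)) * 500).astype(np.float32)            # features, image 1
pts2 = pts1 + np.float32([12.0, 0.5]) \
     + (0.3 * rng.standard_normal((20, 2))).astype(np.float32)   # features, image 2

# The fundamental matrix encodes the epipolar geometry of the stereo pair.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

img = rng.integers(0, 255, (400, 600), dtype=np.uint8)           # stand-in image
templ = img[100:140, 200:240].copy()                             # patch around a feature

# Normalized cross-correlation localizes the patch in the other image;
# searching only along the epipolar line would narrow this further.
scores = cv2.matchTemplate(img, templ, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)
print(F, max_loc, max_val)
```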


    Dynamic Wavelet Neural Network for Nonlinear Identification of Highrise Buildings

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2005
    Xiaomo Jiang
    Compared with conventional neural networks, training of a dynamic neural network for system identification of large-scale structures is substantially more complicated and time consuming because both input and output of the network are not single valued but involve thousands of time steps. In this article, an adaptive Levenberg–Marquardt least-squares algorithm with a backtracking inexact linear search scheme is presented for training of the dynamic fuzzy WNN model. The approach avoids the second-order differentiation required in the Gauss–Newton algorithm and overcomes the numerical instabilities encountered in the steepest descent algorithm, with improved learning convergence rate and high computational efficiency. The model is applied to two highrise moment-resisting building structures, taking into account their geometric nonlinearities. Validation results demonstrate that the new methodology provides an efficient and accurate tool for nonlinear system identification of highrise buildings. [source]
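
    The core update the abstract describes, a Levenberg–Marquardt step safeguarded by a backtracking line search, can be sketched on a toy least-squares problem; the dynamic fuzzy WNN itself is replaced here by a simple exponential model, and the damping factor is held fixed for brevity.

```python
import numpy as np

def model(p, x):
    return p[0] * np.exp(-p[1] * x)

def jac(p, x):
    # d(model)/dp columns for the toy model above.
    return np.column_stack([np.exp(-p[1] * x), -p[0] * x * np.exp(-p[1] * x)])

rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 50)
y = 2.0 * np.exp(-1.3 * x) + 0.01 * rng.standard_normal(x.size)

p, lam = np.array([1.0, 1.0]), 1e-2        # initial guess, fixed damping
for _ in range(50):
    r = y - model(p, x)                    # residuals
    J = jac(p, x)
    # Damped normal equations: (J^T J + lam*I) dp = J^T r.
    # Only first derivatives are needed -- no second-order differentiation.
    dp = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    step = 1.0                             # backtracking: halve until SSE decreases
    while np.sum((y - model(p + step * dp, x)) ** 2) > np.sum(r ** 2) and step > 1e-6:
        step *= 0.5
    p = p + step * dp
print(p)                                   # approaches (2.0, 1.3)
```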


    Parallel bandwidth characteristics calculations for thin avalanche photodiodes on a SGI Origin 2000 supercomputer

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 12 2004
    Yi Pan
    Abstract An important factor for high-speed optical communication is the availability of ultrafast and low-noise photodetectors. Among the semiconductor photodetectors that are commonly used in today's long-haul and metro-area fiber-optic systems, avalanche photodiodes (APDs) are often preferred over p-i-n photodiodes due to their internal gain, which significantly improves the receiver sensitivity and alleviates the need for optical pre-amplification. Unfortunately, the random nature of the very process of carrier impact ionization, which generates the gain, is inherently noisy and results in fluctuations not only in the gain but also in the time response. Recently, we developed a theory characterizing the autocorrelation function of APDs that incorporates the dead-space effect, an effect that is very significant in thin, high-performance APDs. The research extends the time-domain analysis of the dead-space multiplication model to compute the autocorrelation function of the APD impulse response. However, the computation requires a large amount of memory space and is very time consuming. In this research, we describe our experiences in parallelizing the code in MPI and OpenMP using CAPTools. Several array partitioning schemes and scheduling policies are implemented and tested. Our results show that the code is scalable up to 64 processors on an SGI Origin 2000 machine and has small average errors. Copyright © 2004 John Wiley & Sons, Ltd. [source]
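
    A minimal picture of the parallelization strategy (independent model evaluations distributed across ranks and gathered at the root) in mpi4py, with the dead-space multiplication model replaced by a placeholder function; the authors' actual CAPTools-generated MPI/OpenMP code is of course far more involved.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_samples = 1024
def impulse_response_stat(i):
    return np.sin(0.01 * i)        # stand-in for the expensive model evaluation

# Cyclic partitioning: rank r takes indices r, r+size, r+2*size, ...
local = [(i, impulse_response_stat(i)) for i in range(rank, n_samples, size)]
gathered = comm.gather(local, root=0)
if rank == 0:
    result = np.empty(n_samples)
    for block in gathered:
        for i, v in block:
            result[i] = v
    print(result[:5])
```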


    Performance comparison of checkpoint and recovery protocols

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2003
    Himadri Sekhar Paul
    Abstract Checkpoint and rollback recovery is a well-known technique for providing fault tolerance to long-running distributed applications. Performance of a checkpoint and recovery protocol depends on the characteristics of the application and the system on which it runs. However, given an application and system environment, there is no easy way to identify which checkpoint and recovery protocol will be most suitable for it. Conventional approaches require implementing the application with all the protocols under consideration, running them on the desired system, and comparing their performances. This process can be very tedious and time consuming. This paper first presents the design and implementation of a simulation environment, distributed process simulation or dPSIM, which enables easy implementation and evaluation of checkpoint and recovery protocols. The tool enables the protocols to be simulated under a wide variety of application, system, and network characteristics. The paper then presents performance evaluation of five checkpoint and recovery protocols. These protocols are implemented and executed in dPSIM under different simulated application, system, and network characteristics. Copyright © 2003 John Wiley & Sons, Ltd. [source]
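
    As a flavour of the trade-off such a simulator explores, the toy model below estimates expected completion time as a function of checkpoint interval under exponential failures. It uses Young's classic first-order approximation, not dPSIM's protocol-level simulation; all costs are invented.

```python
import math

def expected_runtime(work, interval, ckpt_cost, restart_cost, mtbf):
    segments = work / interval            # number of compute/checkpoint cycles
    seg_time = interval + ckpt_cost       # time per cycle without failures
    failures_per_seg = seg_time / mtbf    # expected failures during a cycle
    # Each failure costs a restart plus, on average, half a cycle of lost work.
    return segments * (seg_time + failures_per_seg * (restart_cost + seg_time / 2))

work, ckpt, restart, mtbf = 10_000.0, 5.0, 20.0, 1_000.0
best = min((expected_runtime(work, t, ckpt, restart, mtbf), t)
           for t in (10, 30, 60, 100, 300, 600))
print("best interval of those tried:", best[1])                       # 100
print("Young's optimum sqrt(2*C*MTBF):", math.sqrt(2 * ckpt * mtbf))  # 100.0
```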


    Novel application of flow cytometry: Determination of muscle fiber types and protein levels in whole murine skeletal muscles and heart

    CYTOSKELETON, Issue 12 2007
    Connie Jackaman
    Abstract Conventional methods for measuring proteins within muscle samples such as immunohistochemistry and western blot analysis can be time consuming, labor intensive and subject to sampling errors. We have developed flow cytometry techniques to detect proteins in whole murine heart and skeletal muscle. Flow cytometry and immunohistochemistry were performed on quadriceps and soleus muscles from male C57BL/6J, BALB/c, CBA and mdx mice. Proteins including actins, myosins, tropomyosin and α-actinin were detected via single-staining flow cytometric analysis. This correlated with immunohistochemistry using the same antibodies. Muscle fiber types could be determined by dual-labeled flow cytometry for skeletal muscle actin and different myosins. This showed similar results to immunohistochemistry for I, IIA and IIB myosins. Flow cytometry of heart samples from C57BL/6J and BALB/c mice dual-labeled with cardiac and skeletal muscle actin antibodies demonstrated the known increase in skeletal actin protein in BALB/c hearts. The membrane-associated proteins α-sarcoglycan and dystrophin could be detected in C57BL/6J mice, but were decreased or absent in mdx mice. With the ability to label whole muscle samples simultaneously with multiple antibodies, flow cytometry may have advantages over conventional methods for certain applications, including assessing the efficacy of potential therapies for muscle diseases. Cell Motil. Cytoskeleton 2007. © 2007 Wiley-Liss, Inc. [source]


    Distribution of Aggregate Utility Using Stochastic Elements of Additive Multiattribute Utility Models

    DECISION SCIENCES, Issue 2 2000
    Herbert Moskowitz
    ABSTRACT Conventionally, elements of a multiattribute utility model characterizing a decision maker's preferences, such as attribute weights and attribute utilities, are treated as deterministic, which may be unrealistic because assessment of such elements can be imprecise and erroneous, or differ among a group of individuals. Moreover, attempting to make precise assessments can be time consuming and cognitively demanding. We propose to treat such elements as stochastic variables to account for inconsistency and imprecision in such assessments. Under these assumptions, we develop procedures for computing the probability distribution of aggregate utility for an additive multiattribute utility function (MAUF), based on the Edgeworth expansion. When the distributions of aggregate utility for all alternatives in a decision problem are known, stochastic dominance can then be invoked to filter inferior alternatives. We show that, under certain mild conditions, the aggregate utility distribution approaches normality as the number of attributes increases. Thus, only a few terms from the Edgeworth expansion with a standard normal density as the base function will be sufficient for approximating an aggregate utility distribution in practice. Moreover, the more symmetric the attribute utility distributions, the fewer attributes are needed to achieve normality. The Edgeworth expansion thus can provide a basis for a computationally viable approach for representing an aggregate utility distribution with imprecisely specified attribute weight and utility assessments (or differing weights and utilities across individuals). Practical guidelines for using the Edgeworth approximation are given. The proposed methodology is illustrated using a vendor selection problem. [source]
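
    For reference, the first correction terms of the Edgeworth expansion with a standard normal base, of the kind this procedure builds on, are shown below; κ3 and κ4 denote the skewness and excess kurtosis of the standardized aggregate utility and He_n the Hermite polynomials (notation assumed for illustration, not taken from the paper).

```latex
% First correction terms of the Edgeworth expansion for the CDF of the
% standardized aggregate utility S, with standard normal base Phi/phi,
% cumulants kappa_3 (skewness) and kappa_4 (excess kurtosis), and Hermite
% polynomials He_n.
F_S(x) \approx \Phi(x) - \phi(x)\left[
    \frac{\kappa_3}{6}\,\mathrm{He}_2(x)
  + \frac{\kappa_4}{24}\,\mathrm{He}_3(x)
  + \frac{\kappa_3^2}{72}\,\mathrm{He}_5(x)\right]
```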


    Diagnostic utility of the Quick Inventory of Depressive Symptomatology (QIDS-C16 and QIDS-SR16) in the elderly

    ACTA PSYCHIATRICA SCANDINAVICA, Issue 3 2010
    P. M. Doraiswamy
    Doraiswamy PM, Bernstein IH, Rush AJ, Kyutoku Y, Carmody TJ, Macleod L, Venkatraman S, Burks M, Stegman D, Witte B, Trivedi MH. Diagnostic utility of the Quick Inventory of Depressive Symptomatology (QIDS-C16 and QIDS-SR16) in the elderly. Objective: To evaluate the psychometric properties and comparability of the Montgomery–Åsberg Depression Rating Scale (MADRS) vs. the Quick Inventory of Depressive Symptomatology, Clinician-rated (QIDS-C16) and Self-report (QIDS-SR16) scales, and their ability to detect a current major depressive episode in the elderly. Method: Community and clinic subjects (age ≥60 years) were administered the Mini-International Neuropsychiatric Interview (MINI) for DSM-IV and the three depression scales in random order. Statistics included classical test and Samejima item response theories, factor analyses, and receiver operating characteristic methods. Results: In 229 elderly patients (mean age = 73 years, 39% male, 54% current depression), all three scales were unidimensional and had nearly equal Cronbach α reliability (0.85–0.89). Each scale discriminated persons with major depression from the non-depressed, but the QIDS-C16 was slightly more accurate. Conclusion: All three tests are valid for detecting geriatric major depression, with the QIDS-C16 being slightly better. The self-rated QIDS-SR16 is recommended as a screening tool as it is the least expensive and least time consuming. [source]


    Follicular Unit Transplantation: The Option of Beard Construction in Eunuchoid Men

    DERMATOLOGIC SURGERY, Issue 9 2002
    Kayihan Şahinoglu MD
    Background. Psychosocial problems are very common in eunuchoids and may be related to the impact of the underlying disorders on physical appearance, which makes these patients unable to overcome the childhood sense of inferiority. A beardless patient treated with follicular unit transplantation (FUT) is reported here. Objective. Such patients desire to get rid of a boyish appearance and want to achieve a masculine one. One of the easiest methods to achieve this goal is FUT. Methods. Using an 18-gauge needle, the recipient bed was prepared under local anesthesia after premedication, and 1200 one- or two-hair micrografts were transplanted to the perioral (goatee) area and its extensions to the sideburns. Results. After completion of the procedure in the planned area, we achieved restoration of a masculine appearance, with which the patient seemed quite satisfied. Conclusion. The process of beard reconstruction is time consuming and tedious, but highly effective. [source]


    Right Ventricular Function Assessment: Comparison of Geometric and Visual Method to Short-Axis Slice Summation Method

    ECHOCARDIOGRAPHY, Issue 10 2007
    Daniel Drake M.D.
    Background: Short-axis summation (SAS) method applied for right ventricular (RV) volumes and right ventricular ejection fraction (RVEF) measurement with cardiac MRI is time consuming and cumbersome to use. A simplified RVEF measurement is desirable. We compare two such methods, a simplified ellipsoid geometric method (GM) and visual estimate, to the SAS method to determine their accuracy and reproducibility. Methods: Forty patients undergoing cine cardiac MRI scan were enrolled. The images acquired were analyzed by the SAS method, the GM (area and length measurement from two orthogonal planes) and visual estimate. RVEF was calculated using all three methods and RV volumes using the SAS and GM. Bland–Altman analysis was applied to test the agreement between the various measurements. Results: Mean RVEF was 49 ± 12% measured by SAS method, 54 ± 12% by the GM, and 49 ± 11% by visual estimate. There were similar bias and limits of agreement between the visual estimate and the GM compared to SAS. The interobserver variability showed a bias close to zero with limits of agreement within ±10% absolute increments of RVEF in 35 of the patients. The RV end-diastolic volume by GM showed wider limits of agreement. The RV end-systolic volume by GM was underestimated by around 10 ml compared to SAS. Conclusion: Both the visual estimate and the GM had similar bias and limits of agreement when compared to SAS. Though the end-systolic measurement is somewhat underestimated, the geometric method may be useful for serial volume measurements. [source]
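
    The Bland–Altman quantities quoted above (bias and limits of agreement) reduce to a few lines; the EF values in this sketch are placeholders, not study data.

```python
import numpy as np

ef_sas = np.array([45, 52, 60, 38, 49, 55], float)   # reference method
ef_gm  = np.array([50, 55, 63, 44, 52, 61], float)   # simplified method

diff = ef_gm - ef_sas
bias = diff.mean()                                   # systematic offset
loa = (bias - 1.96 * diff.std(ddof=1),               # 95% limits of agreement
       bias + 1.96 * diff.std(ddof=1))
print(f"bias = {bias:.1f}%, limits of agreement = {loa[0]:.1f}% to {loa[1]:.1f}%")
```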


    Visual Quantitative Estimation: Semiquantitative Wall Motion Scoring and Determination of Ejection Fraction

    ECHOCARDIOGRAPHY, Issue 5 2003
    M.D., Steven J. Lavine
    Ejection fraction (EF) is the most commonly used parameter of left ventricular (LV) systolic function and can be assessed by echocardiography. Quantitative echocardiography is time consuming and is as accurate as visual estimation, which has significant variability. We hypothesized that each echocardiographer has developed a mental set of guidelines that relate to how much individual segment shortening constitutes normal function or hypokinesis of varying extents. We determined the accuracy of applying these guidelines to an accepted technique of EF determination using a retrospective analysis of consecutive two-dimensional echocardiographic studies performed on patients who had radioventriculography (RVG) within 48 hours. Using a 12-segment model, we scored each segment at the base and mid-ventricular level based on segmental excursion and thickening. The apex was scored similarly but with 1/3 of the value based on a cylinder-cone model. EF was determined from the sum of segment scores and was estimated visually. We termed this approach visual quantitative estimation (VQE). We correlated the EF derived from VQE and visual estimation with RVG EF. In the training set, VQE demonstrated a strong correlation with RVG (r = 0.969), which was significantly greater than visual estimation (r = 0.896, P < 0.01). The limits of agreement for VQE (+12% to −7%) were similar to the limits of RVG agreement with contrast ventriculography (+10% to −11%) with similar intraobserver and interobserver variabilities. Similar correlation was noted in the prediction set between VQE and RVG EF (r = 0.967, P < 0.001). We conclude that VQE provides highly correlated estimates of EF with RVG. (ECHOCARDIOGRAPHY, Volume 20, July 2003) [source]


    The Impact of the Demand for Clinical Productivity on Student Teaching in Academic Emergency Departments

    ACADEMIC EMERGENCY MEDICINE, Issue 12 2004
    Todd J. Berger MD
    Objective: Because many emergency medicine (EM) attending physicians believe the time demands of clinical productivity limit their ability to effectively teach medical students in the emergency department (ED), the purpose of this study was to determine if there is an inverse relationship between clinical productivity and teaching evaluations. Methods: The authors conducted a prospective, observational, double-blind study. They asked senior medical students enrolled in their EM clerkship to evaluate each EM attending physician who precepted them at three academic EDs. After each shift, students anonymously evaluated 10 characteristics of clinical teaching by their supervising attending physician. Each attending physician's clinical productivity was measured by calculating their total relative value units per hour (RVUs/hr) during the nine-month study interval. The authors compared the total RVUs/hr for each attending physician to the medians of their teaching evaluation scores at each ED using a Spearman rank correlation test. Results: Seventy of 92 students returned surveys, evaluating 580 shifts taught by 53 EM attending physicians. Each attending physician received an average of 11 evaluations (median score, 5 of 6) and generated a mean of 5.68 RVUs/hr during the study period. The correlation between evaluation median scores and RVUs/hr was −0.08 (p = 0.44). Conclusions: The authors found no statistically significant relationship between clinical productivity and teaching evaluations. While many EM attending physicians perceive patient care responsibilities to be too time consuming to allow them to be good teachers, the authors found that a subset of our more productive attending physicians are also highly rated teachers. Determining what characteristics distinguish faculty who are both clinically productive and highly rated teachers should help drive objectives for faculty development programs. [source]
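
    The reported statistic is a plain Spearman rank correlation between each attending's RVUs/hr and median evaluation score, as in this sketch with placeholder numbers.

```python
from scipy.stats import spearmanr

rvus_per_hr = [4.9, 5.2, 5.7, 6.1, 6.4, 5.5, 5.9]   # clinical productivity
median_eval = [5.0, 4.5, 5.5, 5.0, 4.0, 6.0, 5.0]   # median teaching scores

rho, p = spearmanr(rvus_per_hr, median_eval)
print(f"rho = {rho:.2f}, p = {p:.2f}")   # the study found rho = -0.08, p = 0.44
```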


    Correlation between routine radiographic findings and early racing career in French Trotters

    EQUINE VETERINARY JOURNAL, Issue S36 2006
    C. ROBERT
    Summary Reasons for performing study: The relationship between the presence of radiological abnormalities and subsequent racing performance is controversial. However, as training is expensive and time consuming, it would save time and money to identify subjects with osteo-articular lesions not compatible with a normal racing career on the basis of routine radiographic screenings at yearling age. Objectives: To evaluate the impact of osteo-articular lesions on racing ability in French Trotters and identify radiographic changes associated with failure in 'qualification', in order to provide objective criteria for selection of horses based on their osteo-articular status. Hypothesis: The influence of radiographic findings (RF) on racing ability depends on their nature, location, clinical relevance and number. Methods: The limbs of 202 French Trotters were radiographed just before they started training. All the RF were graded according to a standardised protocol depending on their severity. Success in 'qualification' (the first race in a French Trotter's career) was the criterion used to assess racing ability. Breeders and trainers were questioned about the causes for horses not racing. Results: Overall, 113 (55.9%) horses qualified. Osteo-articular lesions were directly responsible for non-qualification in 31% of the horses. Subjects with more than one abnormal RF, or with abnormal RF on the fore-fetlock, hind-fetlock or proximal tarsus, were less likely to qualify. Dorsal modelling in the front fetlock and osteochondrosis of the lateral trochlear ridge of the femur also significantly reduced the qualification rate. Conclusions: Most RF are compatible with beginning a racing career, but severe RF or multiple abnormal RF significantly compromise a future racing career. Potential relevance: This study supports the use of routine radiographic programmes for detection of osteo-articular lesions in yearlings. A synthetic radiographic score, based on both the severity and the number of lesions, could be useful for breeders and trainers as complementary information to select their horses. [source]


    Evaluation of PG-M3 antibody in the diagnosis of acute promyelocytic leukaemia

    EUROPEAN JOURNAL OF CLINICAL INVESTIGATION, Issue 10 2010
    Sanjeev Kumar Gupta
    Eur J Clin Invest 2010; 40 (10): 960–962 Abstract Background & objectives: Acute promyelocytic leukaemia (APL) is a distinct subtype of acute myeloid leukaemia (AML) characterized by a reciprocal translocation, t(15;17), and a high incidence of life-threatening coagulopathy. APL diagnosis is considered a medical emergency. As reverse transcription-polymerase chain reaction (RT-PCR) for the PML-RARα fusion oncoprotein is time consuming, there is a need for a rapid and accurate diagnostic test for APL. This study evaluates the role of the PG-M3 monoclonal antibody using immunofluorescence (IF) in the early diagnosis of APL. Materials and Methods: Thirty-six new untreated APL cases, diagnosed with RT-PCR for PML-RARα as the gold standard, and 38 non-APL controls (28 non-APL AMLs and 10 non-leukaemic samples) were evaluated by routine morphology and cytochemistry, RT-PCR and IF using the PG-M3 monoclonal antibody. Results: Using IF, 34 of 36 (94·4%) APL cases showed a microgranular pattern suggestive of APL and two cases (5·6%) showed a speckled pattern typical of wild-type PML protein (false negatives). By comparison, two of 28 (7·1%) non-APL AMLs showed a microgranular pattern (false positives). Hence, IF as a diagnostic test for APL had a sensitivity of 94·4%, a specificity of 92·9%, and positive and negative predictive values of 94·4% and 92·9%, respectively. All 10 non-leukaemic samples showed a speckled pattern. Conclusions: IF using PG-M3 antibodies can be used as a rapid (takes 2 h), cheap, sensitive and specific method to identify APL. It can be a useful adjunct for the diagnosis of APL, especially if facilities for RT-PCR are not available, particularly in resource-limited settings. [source]
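
    The reported diagnostic indices follow directly from the confusion counts given in the abstract, as this check shows.

```python
# 34 true positives, 2 false negatives (36 APL cases);
# 2 false positives, 26 true negatives (28 non-APL AMLs).
tp, fn, fp, tn = 34, 2, 2, 26

sensitivity = tp / (tp + fn)   # 34/36 = 94.4%
specificity = tn / (tn + fp)   # 26/28 = 92.9%
ppv = tp / (tp + fp)           # 34/36 = 94.4%
npv = tn / (tn + fn)           # 26/28 = 92.9%
print(f"{sensitivity:.1%} {specificity:.1%} {ppv:.1%} {npv:.1%}")
```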


    Fluorescence-controlled Er:YAG laser for caries removal in permanent teeth: a randomized clinical trial

    EUROPEAN JOURNAL OF ORAL SCIENCES, Issue 2 2008
    Henrik Dommisch
    The aim of this randomized clinical study was to compare the efficacy of a fluorescence-controlled erbium-doped yttrium aluminium garnet (Er:YAG) laser with conventional bur treatment for caries therapy in adults. Twenty-six patients with 102 carious lesions were treated using either the Er:YAG laser, at threshold levels of 7, 8, 9, and 10 [U], or rotary burs. Both techniques were applied to each lesion at separate locations. After treatment, dentine samples were obtained using a carbide bur. The viable counts of Streptococcus mutans (SM) and lactobacilli (LB) [expressed as colony-forming units (log10 CFUs)], treatment time, pain, vibration, and sound intensity were determined. The median numbers of CFUs for SM and LB were not statistically different between laser and bur treatment at threshold levels 7 and 8 [U]. At threshold levels 9 and 10 [U], the median number of CFUs for LB [1.11 (range: 0.00–2.04)] was significantly higher following laser treatment than following bur treatment [0.30 (range: 0.00–0.60)]. The results indicate that treatment with a fluorescence-controlled Er:YAG laser at threshold levels of 7 and 8 removed caries to a level similar to that achieved using conventional bur treatment, with clinically irrelevant amounts of remaining bacteria. Although more time consuming, laser treatment provided higher patient comfort than bur treatment. [source]


    DFT/MM Study on Copper-Catalyzed Cyclopropanation – Enantioselectivity with No Enthalpy Barrier

    EUROPEAN JOURNAL OF ORGANIC CHEMISTRY, Issue 33 2008
    Galí Drudis-Solé
    Abstract The enantioselectivity in the reaction of [Cu(adam-box)(CHCO2Me)] {adam-box = 2,2′-isopropylidenebis[(4R)-(1-adamantyl)-2-oxazoline]} with Ph2C=CH2 was analyzed computationally by ONIOM(B3LYP:UFF) calculations. The lack of transition states in the potential-energy surface precludes the use of conventional approaches and requires the definition of reaction paths in an approximate Gibbs free-energy surface. The procedure is time consuming and intrinsically less accurate than the usual approaches based on enthalpic energy surfaces, but it produces results in reasonable agreement with experiment, which furthermore allow identification of the key interactions responsible for chiral discrimination. (© Wiley-VCH Verlag GmbH & Co. KGaA, 69451 Weinheim, Germany, 2008) [source]


    Feedforward neural network-based transient stability analysis of electric power systems

    EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 6 2006
    H. Hadj Abdallah
    Abstract This paper presents a neural approach for the transient stability analysis of electric power systems (EPS). The transient stability of an EPS expresses the ability of the system to preserve synchronism after sudden severe disturbances. Its analysis requires the computation of the critical clearing time (CCT), which determines the security degree of the system. The classical methods for determining the CCT are computationally time consuming and may not be tractable in real time. A feedforward neural network trained offline using a historical database can approximate the simulation studies to give an accurate estimate of the CCT in real time. The identified neural network can be updated using new significant data to learn more disturbance cases. Numerical simulations are presented to illustrate the proposed method. Copyright © 2006 John Wiley & Sons, Ltd. [source]
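
    A minimal version of the proposed scheme: train a feedforward network offline on simulated operating points and their CCTs, then query it as a fast online estimator. The features, data generator and network size here are invented stand-ins, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0.5, 1.5, size=(500, 3))          # e.g. load/generation levels
cct = 0.3 - 0.1 * X[:, 0] + 0.05 * X[:, 1] ** 2   # stand-in for simulated CCTs

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
net.fit(X, cct)                                   # offline training on the database
print(net.predict(X[:3]))                         # near-instant CCT estimates
```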


    REACH-driven developments in analysis and physicochemistry

    FLAVOUR AND FRAGRANCE JOURNAL, Issue 3 2010
    A. Chaintreau
    Abstract The enforcement of the REACH regulation in the fragrance domain has created new challenges for the analytical and physical chemist. Many chemicals used as perfumery ingredients are hydrophobic, because low-polarity compounds exhibit a higher substantivity (i.e. persistence after application) than polar compounds. As a result, the usual protocols are often unsuitable and new methods must be developed. Biodegradation studies sometimes call for the quantification of traces of such hydrophobic analytes in complex media (e.g. waste water, aqueous surfactant solutions). Existing sample preparation techniques are either inefficient or time consuming. A new approach is proposed, based on single-use absorbents, which allows accurate quantification down to the 100 ppb range. This extremely simple technique allows good-throughput analyses. Determining the environmental profile of a compound requires the determination of some physical constants. Among these, solubility in water can be obtained from theoretical models or experimentally, but the resulting values may differ greatly as a function of the model or the protocol. Several experimental approaches are critically discussed and compared with a reference technique. The air-to-water partition coefficients are determined by using an improved version of the previously developed static-and-trapped headspace technique. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    The use of indicator taxa as representatives of communities in bioassessment

    FRESHWATER BIOLOGY, Issue 8 2005
    R. C. NIJBOER
    Summary 1. Sampling and processing of benthic macroinvertebrate samples is time consuming and expensive. Although a number of cost-cutting options exist, a frequently asked question is how representative a subset of data is of the whole community, in particular in areas where habitat diversity is high (like Dutch surface water habitats). 2. Weighted averaging was used to reassign 650 samples to a typology of 40 community types, testing the representativeness of different subsets of data: (i) four different types of data (presence/absence, raw, 2log- and ln-transformed abundance), (ii) three subsets of 'indicator' taxa (taxa with indicator weights 4–12, 7–12, and 10–12) and (iii) single taxonomic groups (n = 14), by determining the classification error. 3. 2log- and ln-transformed abundances resulted in the lowest classification error, whilst the use of qualitative data resulted in a reduction of 10% of the samples assigned to their original community type compared to the use of ln-transformed abundance data. 4. Samples from community types with a high number of unique indicator taxa had the lowest classification error, and classification error increased as similarity among community types increased. Using a subset of indicator taxa resulted in a maximum increase of the classification error of 15% when only taxa with an indicator weight of 10–12 were included (error = 49.1%). 5. Use of single taxonomic groups resulted in high classification error; the lowest classification error was found using Trichoptera (68%), and was related to the frequency of the taxonomic group among samples and the indicator weights of the taxa. 6. Our findings that the use of qualitative data, subsets of indicator taxa or single taxonomic groups resulted in high classification error imply low taxonomic redundancy, and support the use of all taxa in characterising a macroinvertebrate community, in particular in areas where habitat diversity is high. [source]


    Identifying connections in a fractured rock aquifer using ADFTs

    GROUND WATER, Issue 3 2005
    Todd Halihan
    Fractured rock aquifers are difficult to characterize because of their extremely heterogeneous nature. Developing an understanding of fracture network hydraulic properties in these aquifers is difficult and time consuming, and field testing techniques for determining the location and connectivity of fractures in these aquifers are limited. In the Clare Valley, South Australia, well interference is an important issue for a major viticultural area that uses a fractured aquifer. Five fracture sets exist in the aquifer, all dipping >25°. In this setting, we evaluate the ability of steady-state asymmetric dipole-flow tests (ADFTs) to determine the connections between a test well and a set of piezometers. The procedure involves dividing a test well into two chambers using a single packer and pumping fluid from the upper chamber to the lower chamber. By conducting a series of tests at different packer elevations, an "input" signal is generated in fracture zones connected to the test well. By monitoring the "output" response of the hydraulic dipole field at piezometers, the connectivity of the fractures between the test well and piezometers can be determined. Results indicate the test well used in this study is connected in a complex three-dimensional geometry, with drawdown occurring above and below areas of potentiometric buildup. The ADFT method demonstrates that the aquifer evaluated in this study cannot be modeled effectively on the well scale using continuum flow models. [source]


    Multi-variable and multi-site calibration and validation of SWAT in a large mountainous catchment with high spatial variability

    HYDROLOGICAL PROCESSES, Issue 5 2006
    Wenzhi Cao
    Abstract Many methods developed for calibration and validation of physically based distributed hydrological models are time consuming and computationally intensive. Only a small set of input parameters can be optimized, and the optimization often results in unrealistic values. In this study we adopted a multi-variable and multi-site approach to calibration and validation of the Soil Water Assessment Tool (SWAT) model for the Motueka catchment, making use of extensive field measurements. Not only were a number of hydrological processes (model components) in a catchment evaluated, but also a number of subcatchments were used in the calibration. The internal variables used were PET, annual water yield, daily streamflow, baseflow, and soil moisture. The study was conducted using an 11-year historical flow record (1990–2000); 1990–94 was used for calibration and 1995–2000 for validation. SWAT generally predicted well the PET, water yield and daily streamflow. The predicted daily streamflow matched the observed values, with a Nash–Sutcliffe coefficient of 0·78 during calibration and 0·72 during validation. However, values for subcatchments ranged from 0·31 to 0·67 during calibration, and 0·36 to 0·52 during validation. The predicted soil moisture remained wet compared with the measurement. About 50% of the extra soil water storage predicted by the model can be ascribed to overprediction of precipitation; the remaining 50% discrepancy was likely to be a result of poor representation of soil properties. Hydrological compensations in the modelling results are derived from water balances in the various pathways and storage (evaporation, streamflow, surface runoff, soil moisture and groundwater) and the contributions to streamflow from different geographic areas (hill slopes, variable source areas, sub-basins, and subcatchments). The use of an integrated multi-variable and multi-site method improved the model calibration and validation and highlighted the areas and hydrological processes requiring greater calibration effort. Copyright © 2005 John Wiley & Sons, Ltd. [source]
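
    The Nash–Sutcliffe coefficient quoted for the streamflow fits is computed as below (1.0 is a perfect match; 0 means no better than predicting the observed mean); the flows shown are placeholders.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) \
               / np.sum((observed - observed.mean()) ** 2)

obs = [3.1, 4.0, 9.5, 6.2, 5.1]   # placeholder daily flows
sim = [2.9, 4.4, 8.7, 6.0, 5.6]
print(round(nash_sutcliffe(obs, sim), 2))
```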


    Accelerating strategies to the numerical simulation of large-scale models for sequential excavation

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 9 2007
    M. Noronha
    Abstract In this paper, a novel combination of well-established numerical procedures is explored in order to accelerate the simulation of sequential excavation. Usually, large-scale models are used to represent these problems. Due to the high number of equations involved, the solver algorithm represents the critical aspect which makes the simulation very time consuming. The mutable nature of the excavation models makes this problem even more pronounced. To accomplish the representation of geometrical and mechanical aspects in an efficient and simple manner, the proposed solution employs the boundary element method with a multiple-region strategy. Together with this representational system, a segmented storage scheme and a time-ordered tracking of the changes form an adequate basis for the usage of fast updating methods instead of frontal solvers. The present development employs the Sherman–Morrison–Woodbury method to speed up the calculation due to sequential changes. The efficiency of the proposed framework is illustrated through the simulation of test examples of 2D and 3D models. Copyright © 2006 John Wiley & Sons, Ltd. [source]
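
    The Sherman–Morrison–Woodbury identity that drives the fast updating can be stated in a few lines of NumPy: if A⁻¹ is already known and an excavation step perturbs A by a low-rank term UCV, the new inverse is obtained without re-factorizing A. Sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 4                        # rank-k change from modifying a few elements
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
U, V = rng.standard_normal((n, k)), rng.standard_normal((k, n))
C = np.eye(k)

A_inv = np.linalg.inv(A)             # computed once, reused across steps
core = np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U)
updated_inv = A_inv - A_inv @ U @ core @ V @ A_inv

# Agrees with a full re-inversion, at O(n^2 k) instead of O(n^3) cost:
print(np.allclose(updated_inv, np.linalg.inv(A + U @ C @ V)))
```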


    Fifth-order Hermitian schemes for computational linear aeroacoustics

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 9 2007
    G. Capdeville
    Abstract We develop a class of fifth-order methods to solve linear acoustics and/or aeroacoustics. Based on local Hermite polynomials, we investigate three competing strategies for solving hyperbolic linear problems with fifth-order accuracy. A one-dimensional (1D) analysis in the Fourier series makes it possible to classify these possibilities. Then, numerical computations based on the 1D scalar advection equation support two possibilities for updating the discrete variable and its first and second derivatives: the first one uses a procedure similar to that of Cauchy–Kovalevskaya (the ',-P5' scheme); the second one relies on a semi-discrete form and evolves the discrete unknowns in time by using a five-stage Runge–Kutta method (the 'RGK-P5' scheme). Although the RGK-P5 scheme shares the same local spatial interpolator with the ,-P5 scheme, it is algebraically simpler. However, it is shown numerically that its loss of compactness reduces its domain of stability. Both schemes are then extended to bi-dimensional acoustics and aeroacoustics. Following the methodology validated in (J. Comput. Phys. 2005; 210:133–170; J. Comput. Phys. 2006; 217:530–562), we build an algorithm in three stages in order to optimize the procedure of discretization. In the 'reconstruction stage', we define a fifth-order local spatial interpolator based on an upwind stencil. In the 'decomposition stage', we decompose the time derivatives into simple wave contributions. In the 'evolution stage', we use these fluctuations to update the discrete variable and its derivatives, either by a Cauchy–Kovalevskaya procedure or by a five-stage Runge–Kutta algorithm. In this way, depending on the configuration of the 'evolution stage', two fifth-order upwind Hermitian schemes are constructed. The effectiveness and accuracy of both schemes are checked by their application to several 2D problems in acoustics and aeroacoustics. To this end, we compare the computational cost and the memory requirement of each solution. The RGK-P5 scheme appears as the best compromise between simplicity and accuracy, while the ,-P5 scheme is more accurate and less CPU-time consuming, despite a greater algebraic complexity. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Evaluation of glargine group-start sessions in patients with type 2 diabetes as a strategy to deliver the service

    INTERNATIONAL JOURNAL OF CLINICAL PRACTICE, Issue 2 2007
    A. A. Tahrani
    Summary Improving glycaemic control in patients with type 2 diabetes reduces microvascular complications. The national service framework for diabetes and the new general medical services contract have been aiming to direct more focus on improving HbA1c. These measures have resulted in an increasing number of patients being initiated on insulin therapy, which increases the workload of diabetes specialist nurses (DSNs). Initiating insulin on a one-to-one basis is time consuming; as a result, DSN-led insulin group-start sessions were introduced. The aim was to evaluate DSN-led glargine group-start sessions and self-titration as a strategy for providing the service, assessing the impact of this method on the use of DSNs' time, on HbA1c and on patients' satisfaction. A prospective audit in a district general hospital. Groups of 5–7 patients received two 2-h sessions at weeks 0 and 2. During these sessions, patients were initiated on insulin glargine and received an educational package and a self-titration protocol. DSNs did not see patients after week 2, but patients were able to phone the DSNs for advice until the end of the titration period. Patients completed the Diabetes Treatment Satisfaction Questionnaire (DTSQ) at baseline, week 2 and 12 months. Weight and HbA1c were assessed at baseline and 12 months later. Twenty-nine consecutive patients were included. Baseline HbA1c improved at 6 months and remained stable at 12 months (medians 10.0, 8.7 and 8.9 respectively, p < 0.001). The DTSQ score improved between weeks 0 and 2 and this was maintained at 12 months (medians 26, 35 and 34 respectively, p < 0.001). After week 2, the DSNs spent a median of 21 min advising patients by phone during the titration period. Weight did not increase significantly. In our centre, DSN-led insulin group-start sessions and self-titration improved glycaemic control. Patients were satisfied with this method of starting insulin. This was achieved with minimal DSN time and input and proved to be effective, yet less time consuming. [source]


    Report: Dermoscopy as a diagnostic tool in demodicidosis

    INTERNATIONAL JOURNAL OF DERMATOLOGY, Issue 9 2010
    Rina Segal MD
    Background: The in vivo demonstration of Demodex infestation is traditionally based on the microscopic identification of Demodex mites, which is time consuming and requires specific equipment and a trained observer. Objective: The aim of this study was to describe for the first time the use of polarized-light dermoscopy for the diagnosis of demodicidosis in patients with variable clinical presentations. Methods: A total of 72 patients with variable facial eruptions were examined clinically, microscopically, and dermoscopically for the presence of Demodex mites. Results: Of the 72 patients, 55 were found to have demodicidosis. In 54 patients, the dermoscopy examination yielded a specific picture consisting of Demodex "tails" and Demodex follicular openings. In patients with an inflammatory variant of demodicidosis, reticular horizontal dilated blood vessels were also visualized. Microscopically, skin scrapings demonstrated Demodex in 52 patients. Overall, the dermoscopy findings showed excellent agreement with the microscopy findings (kappa value 0.86, 95% CI 0.72–0.99, P < 0.001). In the remaining 17 patients, there was no evidence of Demodex infestation either microscopically or dermoscopically. Limitations: The study was not blinded. As there are no standards for the diagnosis of demodicidosis, our results were based on criteria developed by our research group. Conclusions: This is the first description of the specific dermoscopic findings associated with variable clinical presentations of demodicidosis. Dermoscopy may serve as a valuable tool for the real-time validation of Demodex infestation and the evaluation and follow-up of affected patients. [source]
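
    The agreement statistic quoted is Cohen's kappa on the 2×2 dermoscopy-versus-microscopy table; the counts below approximately reconstruct the abstract's figures (54 dermoscopy positives, 52 microscopy positives, 17 double negatives, n = 72) and recover kappa ≈ 0.86.

```python
def cohens_kappa(a, b, c, d):
    # Table: a = both positive, b = dermoscopy+/microscopy-,
    #        c = dermoscopy-/microscopy+, d = both negative.
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

print(round(cohens_kappa(51, 3, 1, 17), 2))   # ~0.86, as reported
```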


    Mining interesting sequential patterns for intelligent systems

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 1 2005
    Show-Jane Yen
    Mining sequential patterns means discovering the sequential purchasing behaviors of most customers from a large number of customer transactions. Past transaction data can be analyzed to discover customer purchasing behaviors such that the quality of business decisions can be improved. However, the size of the transaction database can be very large. It is very time consuming to find all the sequential patterns in a large database, and users may be interested in only some of them. Moreover, the criteria of the discovered sequential patterns for user requirements may not be the same. Many sequential patterns that are uninteresting for user requirements can be generated when traditional mining methods are applied. Hence, a data mining language needs to be provided such that users can query only knowledge of interest to them from a large database of customer transactions. In this article, a data mining language is presented. With the data mining language, users can specify the items of interest and the criteria of the sequential patterns to be discovered. Also, an efficient data mining technique is proposed to extract the sequential patterns according to the users' requests. © 2005 Wiley Periodicals, Inc. Int J Int Syst 20: 73–87, 2005. [source]
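
    The primitive underneath any such query language is support counting: a pattern's support is the fraction of customer sequences that contain it as an order-preserving subsequence (gaps allowed). A minimal sketch with invented transactions:

```python
def contains(sequence, pattern):
    it = iter(sequence)
    return all(item in it for item in pattern)   # order-preserving scan

customers = [
    ["milk", "bread", "beer"],        # milk ... beer: match
    ["beer", "milk"],                 # beer before milk: no match
    ["milk", "diapers", "beer"],      # match
]
pattern = ["milk", "beer"]
support = sum(contains(seq, pattern) for seq in customers) / len(customers)
print(round(support, 2))              # 0.67
```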


    Quantification of red blood cell fragmentation by the automated hematology analyzer XE-2100 in patients with living donor liver transplantation

    INTERNATIONAL JOURNAL OF LABORATORY HEMATOLOGY, Issue 5 2005
    S. BANNO
    Summary The fragmented red cell (FRC) is a useful index for diagnosing and determining the severity of thrombotic thrombocytopenic purpura (TTP), thrombotic microangiopathy (TMA) and other similar conditions, as it is found in peripheral blood in patients with these diseases. The FRC expression rate has conventionally been determined by manual methods using smear samples. However, it is difficult to attain accurate quantification by such methods as they are time consuming and prone to a great margin of error. With cases of living donor liver transplantation, the current study examined the possibility of using a multi-parameter automated hematology analyzer, the XE-2100 (Sysmex Corporation) for FRC quantification. While there was a notable correlation between the manual and automated measurements, the manual measurement resulted in higher values. This suggested remarkable variations in judgment by individuals. The FRC values had a significant correlation with the reticulocyte count, red blood cell distribution width (RDW), fibrin/fibrinogen degradation products (P-FDP) and lactate dehydrogenase (LDH) among the test parameters, and this finding was consistent with the clinical progression in patients. The automated method can offer precise measurements in a short time without inter-observer differences, meeting the requirement for standardization. The determination of FRC count (%) by the XE-2100 that enables early diagnoses and monitoring of TTP or TMA will be useful in the clinical field. [source]


    Integral evaluation in semiconductor device modelling using simulated annealing with Bose–Einstein statistics

    INTERNATIONAL JOURNAL OF NUMERICAL MODELLING: ELECTRONIC NETWORKS, DEVICES AND FIELDS, Issue 4 2007
    E.A.B. Cole
    Abstract Fermi integrals arise in the mathematical and numerical modelling of microwave semiconductor devices. In particular, associated Fermi integrals involving two arguments arise in the modelling of HEMTs, in which quantum wells form at the material interfaces. The numerical evaluation of these associated integrals is time consuming. In this paper, these associated integrals are replaced by simpler functions which depend on a small number of optimal parameters. These parameters are found by optimizing a suitable cost function using a genetic algorithm with simulated annealing. A new method is introduced whereby the transition probabilities of the simulated annealing process are based on the Bose–Einstein distribution function, rather than on the more usual Maxwell–Boltzmann statistics or Tsallis statistics. Results are presented for the simulation of a four-layer HEMT, and show the effect of the approximation for the associated Fermi integrals. A comparison is made of the convergence properties of the three different statistics used in the simulated annealing process. Copyright © 2007 John Wiley & Sons, Ltd. [source]
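
    A sketch of the idea: a standard simulated-annealing loop whose uphill-acceptance probability is taken from a Bose–Einstein occupancy factor instead of the Metropolis exponential. The clipped form used here is an assumption for illustration; the paper's exact transition rule may differ.

```python
import math
import random

def be_accept(dE, T):
    # Assumed Bose-Einstein-style acceptance: occupancy 1/(exp(dE/T)-1),
    # clipped to a valid probability. NOT necessarily the paper's exact rule.
    if dE <= 0:
        return 1.0                       # always accept improvements
    return min(1.0, 1.0 / (math.exp(dE / T) - 1.0 + 1e-12))

def cost(x):
    return (x - 3.0) ** 2 + math.sin(5 * x)   # toy objective with local minima

random.seed(0)
x, T = 0.0, 5.0
for _ in range(5000):
    cand = x + random.gauss(0, 0.5)
    if random.random() < be_accept(cost(cand) - cost(x), T):
        x = cand
    T *= 0.999                           # geometric cooling schedule
print(round(x, 2), round(cost(x), 2))
```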


    Design of waveguide microwave filters by means of artificial neural networks

    INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 6 2006
    Antonio Luchetta
    Abstract Cylindrical post-based waveguide filters are a relevant component of antenna feeding networks. Their synthesis performed via automatic optimization based on full-wave analyses can be very time consuming. In this article a novel fast-design approach based on Levy's and Moore's algorithms and an artificial neural network (ANN) architecture is presented. © 2006 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2006. [source]


    Protein profile study of breast-tissue homogenates by HPLC-LIF

    JOURNAL OF BIOPHOTONICS, Issue 5 2009
    K. Kalyan Kumar
    Abstract Proteomics is a promising approach for the molecular understanding of neoplastic processes, including response to treatment. The widely used 2D-gel electrophoresis and liquid chromatography coupled with mass spectrometry (LC-MS) are time consuming and not cost effective. We have developed a high-sensitivity (femto/subfemtomoles of protein per 20 µl) High-Performance Liquid Chromatography–Laser-Induced Fluorescence (HPLC-LIF) instrument for studying protein profiles of biological samples. In this study, we have explored the feasibility of classifying breast tissues by multivariate analysis of chromatographic data. We have analyzed 13 normal, 17 malignant, 5 benign and 4 post-treatment breast-tissue homogenates. Data were analyzed by Principal Component Analysis (PCA) in both unsupervised and supervised modes on derivative and baseline-corrected chromatograms. Our findings suggest that PCA of derivative chromatograms gives better classification. Thus, the HPLC-LIF instrument is not only suitable for the generation of chromatographic data using femto/subfemtomoles of proteins, but the data can also be used for objective diagnosis via multivariate analysis. Prospectively, identified fractions can be collected and analyzed by biochemical and/or MS methods. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
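
    The classification step reduces to PCA on first-derivative chromatograms, as in this sketch where synthetic arrays stand in for the HPLC-LIF protein profiles (39 samples matching the abstract's 13 + 17 + 5 + 4 split).

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
chromatograms = rng.random((39, 1200))            # 39 samples x 1200 time points
labels = (["normal"] * 13 + ["malignant"] * 17
          + ["benign"] * 5 + ["post-treatment"] * 4)

derivs = np.gradient(chromatograms, axis=1)       # first-derivative profiles
scores = PCA(n_components=3).fit_transform(derivs)
print(scores.shape)                               # (39, 3): inputs to diagnosis
```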