Analysis Methods (analysis + methods)


Kinds of Analysis Methods

  • data analysis methods
  • statistical analysis methods
  • survival analysis methods


  • Selected Abstracts


    MATCHING RESULTS OF TWO INDEPENDENT HIGHLY TRAINED SENSORY PANELS USING DIFFERENT DESCRIPTIVE ANALYSIS METHODS

    JOURNAL OF SENSORY STUDIES, Issue 5 2002
    VARAPHA LOTONG
    ABSTRACT Two independent, highly trained panels separately conducted descriptive analysis of orange juices using different descriptive analysis methods and sets of samples. Lexicons were developed independently. One panel evaluated 23 orange juice products and identified and referenced 24 attributes. The other panel evaluated 17 products and identified 17 attributes for testing. Though not identical, the lexicons developed by both panels were similar overall. To compare the sensory space of the product category, Principal Component Analysis (PCA) and sensory maps were developed separately for each panel. The comparison showed that the underlying sample spaces obtained from both panels were comparable in many ways. Key flavor characteristics for the same types of orange juice products were described similarly by both panels. These data indicate that the process of using highly trained panels that define attributes and use reference standards for descriptive sensory analysis can give objective and comparable information for a product category across different panels. [source]
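The Principal Component Analysis step used to build the sensory maps can be sketched in a few lines. The juice samples, attribute names and ratings below are invented (they are not data from either panel), and only two attributes are used so the 2x2 covariance eigenproblem stays closed-form:

```python
import math

# Hypothetical mean ratings (samples x 2 attributes) from one trained panel.
ratings = [
    (6.1, 2.0),  # juice A: (sweetness, bitterness)
    (5.4, 2.9),
    (4.8, 3.5),
    (6.8, 1.6),
    (5.0, 3.1),
]

n = len(ratings)
mean = [sum(r[j] for r in ratings) / n for j in range(2)]
centered = [(r[0] - mean[0], r[1] - mean[1]) for r in ratings]

# 2x2 sample covariance matrix.
def cov(i, j):
    return sum(c[i] * c[j] for c in centered) / (n - 1)

sxx, syy, sxy = cov(0, 0), cov(1, 1), cov(0, 1)

# Eigenvalues of [[sxx, sxy], [sxy, syy]] via the quadratic formula.
tr, det = sxx + syy, sxx * syy - sxy * sxy
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2  # lam1 >= lam2

explained = lam1 / (lam1 + lam2)  # share of variance carried by PC1
```

With more attributes the same idea applies, with the eigendecomposition done numerically; the leading components then define the axes of the sensory map.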


    A COMPARISON OF ANALYSIS METHODS FOR LATE-STAGE VARIETY EVALUATION TRIALS

    AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 2 2010
    Sue J. Welham
    Summary The statistical analysis of late-stage variety evaluation trials using a mixed model is described, with one- or two-stage approaches to the analysis. Two sets of trials, from Australia and the UK, were used to provide realistic scenarios for a simulation study to evaluate the different methods of analysis. This study showed that a one-stage approach gave the most accurate predictions of variety performance overall or within each environment, across a range of models, as measured by mean squared error of prediction or realized genetic gain. A weighted two-stage approach performed adequately for variety predictions both overall and within environments, but a two-stage unweighted approach performed poorly in both cases. A generalized heritability measure was developed to compare methods. [source]
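The mean squared error of prediction used to compare the approaches is simple to state: average the squared differences between predicted and true variety effects. The effect values below are invented, arranged so the one-stage predictions come out more accurate, as the simulation study found:

```python
# Illustrative MSEP computation; all numbers are made up, not trial data.
true_effects    = [0.8, -0.3, 0.1, 1.2, -0.9]
one_stage_preds = [0.7, -0.2, 0.0, 1.1, -0.8]
two_stage_unwtd = [0.3, -0.6, 0.5, 0.6, -0.4]

def msep(pred, true):
    """Mean squared error of prediction over all varieties."""
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)

msep_one = msep(one_stage_preds, true_effects)
msep_two = msep(two_stage_unwtd, true_effects)
```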


    The End of an Era: What Became of the "Managed Care Revolution" in 2001?

    HEALTH SERVICES RESEARCH, Issue 1p2 2003
    Cara S. Lesser
    Objective. To describe how the organization and dynamics of health systems changed between 1999 and 2001, in the context of expectations from the mid-1990s when managed care was in ascendance, and assess the implications for consumers and policymakers. Data Sources/Study Setting. Data are from the Community Tracking Study site visits to 12 communities that were randomly selected to be nationally representative of metropolitan areas with 200,000 people or more. The Community Tracking Study is an ongoing effort that began in 1996 and is fielded every two years. Study Design. Semistructured interviews were conducted with 50–90 stakeholders and observers of the local health care market in each of the 12 communities every two years. Respondents include leaders of local hospitals, health plans, and physician organizations and representatives of major employers, state and local governments, and consumer groups. First round interviews were conducted in 1996–1997 and subsequent rounds of interviews were conducted in 1998–1999 and 2000–2001. A total of 1,690 interviews were conducted between 1996 and 2001. Data Analysis Methods. Interview information was stored and coded in qualitative data analysis software. Data were analyzed to identify patterns and themes within and across study sites and conclusions were verified by triangulating responses from different respondent types, examining outliers, searching for disconfirming evidence, and testing rival explanations. Principal Findings. Since the mid-1990s, managed care has developed differently than expected in local health care markets nationally. 
Three key developments shaped health care markets between 1999 and 2001: (1) unprecedented, sustained economic growth that resulted in extremely tight labor markets and made employers highly responsive to employee demands for even fewer restrictions on access to care; (2) health plans increasingly moved away from core strategies in the "managed care toolbox"; and (3) providers gained leverage relative to managed care plans and reverted to more traditional strategies of competing for patients based on services and amenities. Conclusions. Changes in local health care markets have contributed to rising costs and created new access problems for consumers. Moreover, the trajectory of change promises to make the goals of cost-control and quality improvement more difficult to achieve in the future. [source]


    Rationale and Applicability of Analysis Methods for Identification of Chromosomal Fragile Sites: A Reply to Drs Dahm et al.

    BIOMETRICS, Issue 4 2002
    Chia-Ding Hou
    No abstract is available for this article. [source]


    Qualitative Data Collection and Analysis Methods: The INSTINCT Trial

    ACADEMIC EMERGENCY MEDICINE, Issue 11 2007
    William J. Meurer MD
    Patient care practices often lag behind current scientific evidence and professional guidelines. The failure of such knowledge translation (KT) efforts may reflect inadequate assessment and management of specific barriers confronting both physicians and patients at the point of treatment. Effective KT in this setting may benefit from the use of qualitative methods to identify and overcome these barriers. Qualitative methodology allows in-depth exploration of the barriers involved in adopting practice change and has been used infrequently in emergency medicine research. The authors describe the methodology for qualitative analysis within the INcreasing Stroke Treatment through INteractive behavioral Change Tactics (INSTINCT) trial. This includes processes for valid data collection and reliable analysis of the textual data from focus group and interview transcripts. INSTINCT is a 24-hospital, randomized, controlled study that is designed to evaluate a system-based barrier assessment and interactive educational intervention to increase appropriate tissue plasminogen activator (tPA) use in ischemic stroke. Intervention hospitals undergo baseline barrier assessment using both qualitative and quantitative (survey) techniques. Investigators obtain data on local barriers to tPA use, as well as information on local attitudes, knowledge, and beliefs regarding acute stroke treatment. Targeted groups at each site include emergency physicians, emergency nurses, neurologists, radiologists, and hospital administrators. Transcript analysis using NVivo7 with a predefined barrier taxonomy is described. This provides both qualitative insight on thrombolytic use and the importance of specific barrier types for each site. The qualitative findings subsequently direct the form of professional education efforts and system interventions at treatment sites. [source]


    Understanding nursing on an acute stroke unit: perceptions of space, time and interprofessional practice

    JOURNAL OF ADVANCED NURSING, Issue 9 2009
    Cydnee C. Seneviratne
    Abstract Aim. This paper is a report of a study conducted to uncover nurses' perceptions of the contexts of caring for acute stroke survivors. Background. Nurses coordinate and organize care and continue the rehabilitative role of physiotherapists, occupational therapists and social workers during evenings and at weekends. Healthcare professionals view the nursing role as essential, but are uncertain about its nature. Method. Ethnographic fieldwork was carried out in 2006 on a stroke unit in Canada. Interviews with nine healthcare professionals, including nurses, complemented observations of 20 healthcare professionals during patient care, team meetings and daily interactions. Analysis methods included ethnographic coding of field notes and interview transcripts. Findings. Three local domains frame how nurses understand challenges in organizing stroke care: (1) space, (2) time and (3) interprofessional practice. Structural factors force nurses to work in exceptionally close quarters. Time constraints compel them to find novel ways of providing care. Moreover, sharing of information with other members of the team enhances relationships and improves 'interprofessional collaboration'. The nurses believed that an interprofessional atmosphere is fundamental for collaborative stroke practice, despite working in a multiprofessional environment. Conclusion. Understanding how care providers conceive of and respond to space, time and interprofessionalism has the potential to improve acute stroke care. Future research focusing on nurses and other professionals as members of interprofessional teams could help inform stroke care to enhance poststroke outcomes. [source]


    Suicide and Alcohol: Do Outlets Play a Role?

    ALCOHOLISM, Issue 12 2009
    Fred W. Johnson
    Background: The purpose of this study was to determine whether the number of alcohol outlets in local and adjacent areas, in particular bars, was related over time to completed suicide and suicide attempts. There is evidence, both from studies of individuals and from aggregate time-series studies, mostly at the national level, of substantial alcohol involvement in suicide, but no small-area, longitudinal studies have been carried out. The present study is the first that is both longitudinal and based on a large number of small spatial units, California zip codes, a level of resolution permitting analysis of the relationship between local alcohol access and suicide rates over time. Method: Longitudinal data were obtained from 581 consistently defined zip code areas over 6 years (1995–2000) using data from the California Index Locations Database, a geographic information system that contains both population and place information with spatial attributes for the entire state. Measures obtained from each zip code included population characteristics (e.g., median age) and place characteristics (e.g., numbers of retail and alcohol outlets), which were related in separate analyses to (i) suicide mortality and (ii) the number of hospitalizations for injuries caused by suicide attempts. The effect of place characteristics in zip code areas adjacent to each of the 581 local zip codes (spatial lags) was also assessed. Analysis methods were random effects models corrected for spatial autocorrelation. Results: Completed suicide rates were higher in zip code areas with greater local and lagged bar densities, and higher in areas with greater local but not lagged off-premise outlet densities. Whereas completed suicide rates were lower among blacks and Hispanics, completed suicide rates were higher among low income, older whites living in less densely populated areas, that is, rural areas. 
Rates of suicide attempts were higher in zip code areas with greater local but not lagged bar densities, and higher among low income younger whites living in smaller households and in rural areas. Rates of attempted suicide were also higher among blacks. Completed suicide and suicide attempt rates were lower in zip code areas with greater local restaurant densities; there were no lagged effects for restaurants. Conclusions: Bar densities in particular appear related to suicide: because this is an aggregate-level spatial analysis, this means that suicides, both attempted and completed, occur at greater rates in rural community areas with greater bar densities. Because the suicide rate is highest in rural areas, this study suggests that although the number of completed and attempted suicides is no doubt greater in absolute terms in urban areas, the suicide rate, both completed and attempted, is greater in rural areas, which draws attention, perhaps much needed, to the problems of rural America. [source]
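The 'spatial lag' covariate in the design above (outlet density in the areas adjacent to each zip code) can be sketched as a simple adjacency average. The zip codes, adjacency lists and densities below are invented, not the study's GIS data:

```python
# Invented bar densities (outlets per unit area) for four zip code areas.
bar_density = {"z1": 4.0, "z2": 2.0, "z3": 6.0, "z4": 0.0}
# Invented adjacency structure (which areas border which).
adjacent = {
    "z1": ["z2", "z3"],
    "z2": ["z1", "z3", "z4"],
    "z3": ["z1", "z2"],
    "z4": ["z2"],
}

def spatial_lag(densities, adjacency):
    """For each area, the mean density over its adjacent areas."""
    return {
        z: sum(densities[a] for a in adj) / len(adj)
        for z, adj in adjacency.items()
    }

lagged = spatial_lag(bar_density, adjacent)
```

The local and lagged densities then enter the random effects model as separate covariates, which is how the study distinguishes "local" from "lagged" effects.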


    Approaches towards the Efficient Use of Resources in the Industry

    CHEMICAL ENGINEERING & TECHNOLOGY (CET), Issue 4 2010
    M. Schmidt
    Abstract Resource efficiency in companies targets the economic and efficient use of materials and energy in production. On the one hand, this aims to contribute towards sustainable development; on the other hand, efficient use of resources can save costs and improve the competitiveness of a company. This aspect is becoming all the more important in the light of current developments in world market prices for natural resources. In Germany, the use of materials and energy currently accounts for about 46% of the gross value of goods manufactured by companies. It is known from various sources that the average potential for savings here is 10–15%. The material costs alone can be reduced by 2–3% through efficient management. The potentials for saving lie less in the individual technologies applied and more in the interplay within and between the complex production systems. That is why one key challenge facing the industry is to ascertain the hidden costs that are in fact linked with inefficiencies in a company. Analysis methods and approaches, such as material and energy flow analysis, are necessary for this. [source]
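As a back-of-envelope check on the figures quoted above: if materials and energy account for 46% of gross production value and 10-15% of that spend can be saved, the saving amounts to roughly 4.6-6.9% of gross value:

```python
# Arithmetic on the shares quoted in the abstract; no new data involved.
material_share = 0.46          # materials + energy as share of gross value
savings_low, savings_high = 0.10, 0.15  # quoted savings potential range

saving_of_gross_low = material_share * savings_low    # ~4.6% of gross value
saving_of_gross_high = material_share * savings_high  # ~6.9% of gross value
```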


    Nutrients apparent digestibility coefficients of selected protein sources for juvenile Siberian sturgeon (Acipenser baerii Brandt), compared by two chromic oxide analysis methods

    AQUACULTURE NUTRITION, Issue 6 2009
    H. LIU
    Abstract Apparent digestibility coefficients (ADCs) of dry matter (ADCd), crude protein (ADCp), energy (ADCe) and amino acids in selected feedstuffs were determined for juvenile Siberian sturgeon (8.38 ± 0.20 g). The tested feedstuffs were fishmeal (FM), meat and bone meal (MBM), poultry by-product meal, hydrolysed feather meal, fermented feather meal, solvent-extracted cottonseed meal (SECM) and soybean meal. ADCs were determined using a reference diet and test diets at a 7:3 ratio, with 5 g kg−1 chromic oxide (Cr2O3) as an inert marker. Fish were reared in a recirculating system and fed to apparent satiation five times daily. Cr2O3 in diet and faeces samples was determined by inductively coupled plasma atomic emission spectrometry (ICP-AES) and acid-digestion colorimetry (AC) methods, respectively. The results showed that the ICP-AES method was more accurate for Cr2O3 determination than the AC method, and the results determined by ICP-AES were used in this study. ADCd and ADCp of the seven tested ingredients were lowest for MBM (59.1 and 84.5%) and highest for FM (79.9 and 94.5%); ADCe of the tested ingredients ranged from 71.8% for SECM to 93.2% for FM. ADCs of amino acids in the test ingredients followed a similar trend to the ADCp. The ADCs of individual amino acids varied from 61.6% (histidine in MBM) to 98.8% (valine in FM). [source]
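The abstract does not spell out the equations, but the standard marker-based formulas for this kind of design are sketched below. The concentrations in the example call are invented, and the simple 7:3 ingredient formula shown ignores the nutrient-contribution weighting that a full treatment may apply:

```python
def adc(marker_diet, marker_feces, nutrient_diet, nutrient_feces):
    """Apparent digestibility coefficient (%) using an inert marker:
    ADC = 100 * (1 - (marker in diet / marker in faeces)
                     * (nutrient in faeces / nutrient in diet))."""
    return 100.0 * (1.0 - (marker_diet / marker_feces)
                         * (nutrient_feces / nutrient_diet))

def adc_ingredient(adc_test, adc_ref, ref_fraction=0.7):
    """ADC of the test ingredient in a reference:test = 7:3 diet
    (simple unweighted form)."""
    return (adc_test - ref_fraction * adc_ref) / (1.0 - ref_fraction)

# Invented numbers: 0.5% Cr2O3 in the diet concentrating to 2.0% in faeces,
# 40% crude protein in the diet, 12% in faeces.
adc_protein_test = adc(0.5, 2.0, 40.0, 12.0)  # ADCp of the whole test diet
```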


    Advanced Analysis of Steel Frames Using Parallel Processing and Vectorization

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2001
    C. M. Foley
    Advanced methods of analysis have shown promise in providing economical building structures through accurate evaluation of inelastic structural response. One method of advanced analysis is the plastic zone (distributed plasticity) method. Plastic zone analysis often has been deemed impractical due to computational expense. The purpose of this article is to illustrate applications of plastic zone analysis on large steel frames using advanced computational methods. To this end, a plastic zone analysis algorithm capable of using parallel processing and vector computation is discussed. Applicable measures for evaluating program speedup and efficiency on a Cray Y-MP C90 multiprocessor supercomputer are described. Program performance (speedup and efficiency) for parallel and vector processing is evaluated. Nonlinear response including postcritical branches of three large-scale fully restrained and partially restrained steel frameworks is computed using the proposed method. The results of the study indicate that advanced analysis of practical steel frames can be accomplished using plastic zone analysis methods and alternate computational strategies. [source]
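The speedup and efficiency measures used to evaluate program performance are the standard ones: speedup S = T1/Tp and efficiency E = S/p for p processors. The timings below are invented, not the paper's Cray Y-MP C90 results:

```python
def speedup(t_serial, t_parallel):
    """Speedup S = T1 / Tp: serial wall time over parallel wall time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, processors):
    """Parallel efficiency E = S / p; 1.0 means ideal scaling."""
    return speedup(t_serial, t_parallel) / processors

s = speedup(120.0, 20.0)        # a 6.0x speedup
e = efficiency(120.0, 20.0, 8)  # 6.0 / 8 processors = 0.75
```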


    HOMELESS SHELTER USE AND REINCARCERATION FOLLOWING PRISON RELEASE

    CRIMINOLOGY AND PUBLIC POLICY, Issue 2 2004
    STEPHEN METRAUX
    Research Summary: This paper examines the incidence of and interrelationships between shelter use and reincarceration among a cohort of 48,424 persons who were released from New York State prisons to New York City in 1995–1998. Results show that, within two years of release, 11.4% of the study group entered a New York City homeless shelter and 32.8% of this group was again imprisoned. Using survival analysis methods, time since prison release and history of residential instability were the most salient risk factors related to shelter use, and shelter use increased the risk of subsequent reincarceration. Policy Implications: These findings show both homelessness and reincarceration to be substantial problems among a population of released prisoners, problems that fall into the more general framework of community reintegration. They also suggest that enhanced housing and related services, when targeted to a relatively small at-risk group among this population, have the potential to substantially reduce the overall risk for homelessness in the group. [source]
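The survival-analysis machinery referred to above can be illustrated with a minimal product-limit (Kaplan-Meier) estimator: time to shelter entry after release, with censoring for subjects whose observation window ends first. The records below are invented, not cohort data:

```python
# Each record: (days since release, event) where event=1 is shelter entry
# and event=0 is censoring (observation ended without the event).
records = [(30, 1), (45, 0), (60, 1), (60, 1), (90, 0), (120, 1)]

def kaplan_meier(data):
    """Return [(time, survival_probability)] at each event time."""
    surv, curve = 1.0, []
    at_risk = len(data)
    for t in sorted({t for t, _ in data}):
        events = sum(1 for ti, ei in data if ti == t and ei == 1)
        if events:
            surv *= 1.0 - events / at_risk
            curve.append((t, surv))
        at_risk -= sum(1 for ti, _ in data if ti == t)  # drop events + censored
    return curve

curve = kaplan_meier(records)
```

Risk factors such as residential-instability history then enter through a regression model on the hazard (e.g. Cox regression) rather than through the raw curve.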


    Early neurodevelopmental markers predictive of mortality in infants infected with HIV-1

    DEVELOPMENTAL MEDICINE & CHILD NEUROLOGY, Issue 2 2003
    Antolin Llorente PhD
    One hundred and fifty-seven vertically infected HIV-1 positive infants (85 males, 72 females) underwent longitudinal assessment to determine whether early neurodevelopmental markers are useful predictors of mortality in those infants who survive to at least 4 months of age. Survival analysis methods were used to estimate time to death for quartiles of 4-month scores (baseline) on the Bayley Scales of Infant Development (BSID). Cox proportional hazards regression was used to estimate the relative hazard (RH, 95% CI) of death for BSID scores and potential confounders. Thirty infants with BSID scores at 4 months of age died during follow-up. Survival analysis revealed greater mortality rates in infants with BSID (Mental Developmental Index and Psychomotor Developmental Index) scores in the lower quartile (p=0.004 to p=0.036). Unadjusted univariate analyses revealed increased mortality associated with baseline CD4+ 29%, gestational age <37 weeks, smaller head circumference, advanced HIV and higher plasma viral load. BSID scores independently predicted mortality after adjusting for treatment, clinical category, gestational age, plasma viral load and CD4+ percentage. [source]
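The relative hazard reported from a Cox model is derived from the fitted coefficient and its standard error: RH = exp(b), with 95% CI = exp(b ± 1.96·se). The coefficient and standard error below are invented, not estimates from this study:

```python
import math

def relative_hazard(b, se):
    """Relative hazard and 95% CI from a Cox coefficient b with std error se."""
    rh = math.exp(b)
    lo = math.exp(b - 1.96 * se)
    hi = math.exp(b + 1.96 * se)
    return rh, lo, hi

# Illustrative: b = 0.8 means the hazard of death is exp(0.8) ~ 2.2 times
# higher per unit change in the covariate, all else equal.
rh, lo, hi = relative_hazard(0.8, 0.3)
```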


    Measurement and data analysis methods for field-scale wind erosion studies and model validation

    EARTH SURFACE PROCESSES AND LANDFORMS, Issue 11 2003
    Ted M. Zobeck
    Abstract Accurate and reliable methods of measuring windblown sediment are needed to confirm, validate, and improve erosion models, assess the intensity of aeolian processes and related damage, determine the source of pollutants, and for other applications. This paper outlines important principles to consider in conducting field-scale wind erosion studies and proposes strategies of field data collection for use in model validation and development. Detailed discussions include consideration of field characteristics, sediment sampling, and meteorological stations. The field shape used in field-scale wind erosion research is generally a matter of preference and in many studies may not have practical significance. Maintaining a clear non-erodible boundary is necessary to accurately determine erosion fetch distance. A field length of about 300 m may be needed in many situations to approach transport capacity for saltation flux in bare agricultural fields. Field surface conditions affect the wind profile and other processes such as sediment emission, transport, and deposition and soil erodibility. Knowledge of the temporal variation in surface conditions is necessary to understand aeolian processes. Temporal soil properties that impact aeolian processes include surface roughness, dry aggregate size distribution, dry aggregate stability, and crust characteristics. Use of a portable 2 m tall anemometer tower should be considered to quantify variability of friction velocity and aerodynamic roughness caused by surface conditions in field-scale studies. The types of samplers used for sampling aeolian sediment will vary depending upon the type of sediment to be measured. The Big Spring Number Eight (BSNE) and Modified Wilson and Cooke (MWAC) samplers appear to be the most popular for field studies of saltation. Suspension flux may be measured with commercially available instruments after modifications are made to ensure isokinetic conditions at high wind speeds. 
Meteorological measurements should include wind speed and direction, air temperature, solar radiation, relative humidity, rain amount, and soil temperature and moisture. Careful consideration of the climatic, sediment, and soil surface characteristics observed in future field-scale wind erosion studies will ensure maximum use of the data collected. Copyright © 2003 John Wiley & Sons, Ltd. [source]
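For sampler arrays such as the BSNE, a typical post-processing step (not a procedure prescribed by this paper) is to integrate the measured horizontal flux profile over height to estimate total saltation transport. The heights and catches below are invented:

```python
# Sampler heights (m) and measured horizontal mass flux at each height
# (kg per m^2 of sampler inlet per sampling period); values are invented.
heights = [0.05, 0.10, 0.20, 0.50, 1.00]
flux    = [42.0, 21.0,  8.0,  1.5,  0.2]

def integrate_profile(z, q):
    """Trapezoidal integration of flux q(z) over height -> kg per m width."""
    total = 0.0
    for i in range(len(z) - 1):
        total += 0.5 * (q[i] + q[i + 1]) * (z[i + 1] - z[i])
    return total

transport = integrate_profile(heights, flux)
```

In practice an exponential or power-law profile is often fitted to the catches before integrating; the trapezoidal version above is the simplest defensible estimate.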


    Morphometric analysis and tectonic interpretation of digital terrain data: a case study

    EARTH SURFACE PROCESSES AND LANDFORMS, Issue 8 2003
    Gyozo Jordan
    Abstract Tectonic movement along faults is often reflected by characteristic geomorphological features such as linear valleys, ridgelines and slope-breaks, steep slopes of uniform aspect, regional anisotropy and tilt of terrain. Analysis of digital elevation models, by means of numerical geomorphology, provides a means of recognizing fractures and characterizing the tectonics of an area in a quantitative way. The objective of this study is to investigate the use of numerical geomorphometric methods for tectonic geomorphology through a case study. The methodology is based on general geomorphometry. In this study, the basic geometric attributes (elevation, slope, aspect and curvatures) are complemented with the automatic extraction of ridge and valley lines and surface-specific points. Evans' univariate and bivariate methodology of general geomorphometry is extended with texture (spatial) analysis methods, such as trend, autocorrelation, spectral, and network analysis. Terrain modelling is implemented with the integrated use of: (1) numerical differential geometry; (2) digital drainage network analysis; (3) digital image processing; and (4) statistical and geostatistical analysis. Application of digital drainage network analysis is emphasized. A simple shear model with a principal displacement zone of NE–SW orientation can account for most of the morphotectonic features found in the basin by geological and digital tectonic geomorphology analyses. Copyright © 2003 John Wiley & Sons, Ltd. [source]
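The basic geometric attributes mentioned above (slope and aspect from a DEM) reduce to finite differences on an elevation window. The 3x3 window and 10 m grid spacing below are invented, and the aspect convention used (azimuth of steepest descent, clockwise from north) is one of several in use:

```python
import math

# Invented 3x3 elevation window (metres); row 0 is the northern row.
window = [
    [105.0, 104.0, 103.0],
    [104.0, 103.0, 102.0],
    [103.0, 102.0, 101.0],
]
cell = 10.0  # grid spacing in metres

def slope_aspect(w, d):
    """Slope (degrees) and aspect (azimuth of steepest descent, degrees
    clockwise from north) at the centre of a 3x3 window, by central
    differences (x increases east, y increases north)."""
    dz_dx = (w[1][2] - w[1][0]) / (2 * d)
    dz_dy = (w[0][1] - w[2][1]) / (2 * d)
    slope = math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))
    aspect = math.degrees(math.atan2(-dz_dx, -dz_dy)) % 360
    return slope, aspect

slope_deg, aspect_deg = slope_aspect(window, cell)
```

This surface dips gently toward the southeast, so the aspect comes out at 135 degrees; curvatures follow from the second differences in the same way.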


    Approximate analysis methods for asymmetric plan base-isolated buildings

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 1 2002
    Keri L. Ryan
    Abstract An approximate method for linear analysis of asymmetric-plan, multistorey buildings is specialized for a single-storey, base-isolated structure. To find the mode shapes of the torsionally coupled system, the Rayleigh–Ritz procedure is applied using the torsionally uncoupled modes as Ritz vectors. This approach reduces to analysis of two single-storey systems, each with vibration properties and eccentricities (labelled 'effective eccentricities') similar to corresponding properties of the isolation system or the fixed-base structure. With certain assumptions, the vibration properties of the coupled system can be expressed explicitly in terms of these single-storey system properties. Three different methods are developed: the first is a direct application of the Rayleigh–Ritz procedure; the second and third use simplifications for the effective eccentricities, assuming a relatively stiff superstructure. The accuracy of these proposed methods and the rigid structure method in determining responses is assessed for a range of system parameters including eccentricity and structure flexibility. For a subset of systems with equal isolation and structural eccentricities, two of the methods are exact and the third is sufficiently accurate; all three are preferred to the rigid structure method. For systems with zero isolation eccentricity, however, all approximate methods considered are inconsistent and should be applied with caution, only to systems with small structural eccentricities or stiff structures. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Smoking-based selection and influence in gender-segregated friendship networks: a social network analysis of adolescent smoking

    ADDICTION, Issue 7 2010
    Liesbeth Mercken
    ABSTRACT Aims The main goal of this study was to examine differences between adolescent male and female friendship networks regarding smoking-based selection and influence processes using newly developed social network analysis methods that allow the current state of continuously changing friendship networks to act as a dynamic constraint for changes in smoking behaviour, while allowing current smoking behaviour to be simultaneously a dynamic constraint for changes in friendship networks. Design Longitudinal design with four measurements. Setting Nine junior high schools in Finland. Participants A total of 1163 adolescents (mean age = 13.6 years) who participated in the control group of the ESFA (European Smoking prevention Framework Approach) study, including 605 males and 558 females. Measurements Smoking behaviour of adolescents, parents, siblings and friendship ties. Findings Smoking-based selection of friends was found in male as well as female networks. However, support for influence among friends was found only in female networks. Furthermore, females and males were both influenced by parental smoking behaviour. Conclusions In Finnish adolescents, both male and female smokers tend to select other smokers as friends but it appears that only females are influenced to smoke by their peer group. This suggests that prevention campaigns targeting resisting peer pressure may be more effective in adolescent girls than boys. [source]


    Data Mining for Bioprocess Optimization

    ENGINEERING IN LIFE SCIENCES (ELECTRONIC), Issue 3 2004
    S. Rommel
    Abstract Although developed for completely different applications, the great technological potential of the data analysis methods called "data mining" has increasingly been realized as a means of efficiently analyzing optimization potentials and troubleshooting within many application areas of process technology. This paper presents the successful application of data mining methods for the optimization of a fermentation process, and discusses diverse characteristics of data mining for biological processes. For the optimization of biological processes, a huge number of possibly relevant process parameters exists. These input variables can be parameters from devices as well as process control parameters. The main challenge of such optimizations is to robustly identify relevant combinations of parameters among a huge number of process parameters. For the underlying process we found, with the application of data mining methods, that the moment at which a special carbohydrate component is added has a strong impact on the formation of secondary components. The yield could also be increased by using 2 m3 fermentors instead of 1 m3 fermentors. [source]
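The paper does not state which data mining algorithm was used; as a minimal illustration of the screening idea (ranking candidate process parameters by the strength of their association with a target quality variable), here is a Pearson-correlation ranking over invented fermentation runs:

```python
import math

# Invented per-run parameter values; names are hypothetical, not the paper's.
runs = {
    "carbohydrate_add_time_h": [2.0, 4.0, 6.0, 8.0, 10.0],
    "stirrer_speed_rpm":       [200, 220, 180, 210, 190],
}
# Target: measured secondary-component level per run (invented).
secondary_component = [0.5, 0.9, 1.4, 1.8, 2.3]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Rank parameters by absolute correlation with the target.
ranked = sorted(runs, key=lambda p: -abs(pearson(runs[p], secondary_component)))
```

Real data mining tools go well beyond pairwise correlation (they search for relevant parameter combinations and nonlinear effects), but the screening step has this shape.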


    Near-nerve needle sensory and medial plantar nerve conduction studies in patients with small-fiber sensory neuropathy

    EUROPEAN JOURNAL OF NEUROLOGY, Issue 9 2008
    K. Uluc
    Background and purpose: The aim of this prospective study was to show and compare the rate of large-fiber involvement with near-nerve needle sensory (NNNS) nerve conduction study (NCS) and with medial plantar NCS recorded with surface electrodes in a group of patients who had clinically pure small-fiber sensory neuropathy (SFSN) with reduced intra-epidermal nerve fiber density in skin biopsy and with normal routine NCS. Methods and results: The study included 19 patients with clinically pure SFSN with normal routine NCS results and 17 healthy volunteers. Routine NCS, skin biopsy, medial plantar NCS and NNNS NCS were performed. NNNS NCS data were evaluated both by using univariate analysis methods and by using a multivariate analysis method, principal components analysis (PCA). Eight patients (42%) had abnormal results for medial plantar NCS with surface electrodes. Seven patients (37%) had abnormal results for NNNS NCS with PCA, whilst univariate analysis identified only four. We found a significant correlation between intra-epidermal nerve fiber densities, medial plantar NCS and PCA results of NNNS NCS. Conclusions: This study showed that large-nerve fibers are also involved in some patients with pure SFSN, and that medial plantar NCS can accurately diagnose neuropathy without a need for NNNS NCS in patients with normal routine NCS. [source]


    Trends, challenges and opportunities in power quality research

    EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 1 2010
    Math H. J. Bollen
    Abstract This paper outlines a number of possible research directions in power quality. The introduction of new sources of generation will create the need for new research on voltage-magnitude variations, harmonic emission and harmonic resonance. Statistical performance indicators are expected to play an important role in addressing the hosting capacity of the power system for these new sources. The quickly growing amounts of power-quality data call for automatic analysis methods. Advanced signal-processing tools need to be developed and applied to address this challenge. Equipment with an active power-electronic interface generates waveform distortion at higher frequencies than existing equipment. The emission, spread, consequences and mitigation of this distortion require more research emphasis. The growing complexity of the power system calls for remote identification of system events and load transitions. Future DC networks, at different voltage levels, require research on DC power quality next to AC power quality. Research on methods to describe and analyse time-varying harmonics has applications in a number of the above-mentioned issues. So does the use of hardware-in-the-loop (HIL) and real-time digital simulation. Existing power quality standards should not form a barrier against future research; instead, research should result in improved standards as well as completely new concepts. Examples are: voltage dips in three-phase systems, flicker due to non-incandescent lamps, and voltage variations on the timescale between 1 second and 10 minutes. All together, it is concluded in this paper that many important and interesting research challenges and opportunities remain in the power quality area. Copyright © 2009 John Wiley & Sons, Ltd. [source]
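One concrete instance of the automatic analysis called for above is dip detection: flag a voltage dip when the RMS voltage falls below a threshold, commonly 90% of nominal. The nominal voltage, threshold and synthetic waveform below are illustrative, not taken from the paper:

```python
import math

NOMINAL = 230.0  # assumed nominal RMS voltage, in volts

def rms(samples):
    """Root-mean-square of a sequence of instantaneous voltage samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def is_dip(window, nominal=NOMINAL, threshold=0.9):
    """Flag a dip when window RMS drops below threshold * nominal."""
    return rms(window) < threshold * nominal

# One full cycle of a healthy sine wave (64 samples), and the same wave
# during a dip with 50% retained voltage.
healthy = [NOMINAL * math.sqrt(2) * math.sin(2 * math.pi * k / 64)
           for k in range(64)]
dipped = [0.5 * v for v in healthy]
```

A deployed analyser would slide this window over a continuous recording and also log dip duration and retained voltage, but the classification step is as above.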


    Serially concatenated continuous phase modulation with symbol interleavers: performance, properties and design principles

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 4 2006
    Ming Xiao
    Serially concatenated continuous phase modulation (SCCPM) systems with symbol interleavers are investigated. The transmitted signals are disturbed by additive white Gaussian noise (AWGN). Compared to bit interleaved SCCPM systems, this scheme shows a substantial improvement in the convergence threshold at the price of a higher error floor. In addition to showing this property, we also investigate the underlying reason by error event analysis. In order to estimate bit error rate (BER) performance, we generalise traditional union bounds for a bit interleaver to this non-binary interleaver. For the latter, both the order and the position of permuted non-zero symbols have to be considered. From the analysis, some principal properties are identified. Finally, some design principles are proposed. Our paper concentrates on SCCPM, but the proposed analysis methods and conclusions can be widely used in many other systems such as serially concatenated trellis coded modulation (SCTCM) et cetera. Copyright © 2006 AEIT [source]


    An iris recognition approach through structural pattern analysis methods

    EXPERT SYSTEMS, Issue 1 2010
    Hugo Proença
    Abstract: Continuous efforts have been made to improve the robustness of iris coding methods since Daugman's pioneering work on iris recognition was published. Iris recognition is at present used in several scenarios (airport check-in, refugee control etc.) with very satisfactory results. However, in order to achieve acceptable error rates several imaging constraints are enforced, which reduce the fluidity of the iris recognition systems. The majority of the published iris recognition methods follow a statistical pattern recognition paradigm and encode the iris texture information through phase, zero-crossing or texture-analysis based methods. In this paper we propose a method that follows the structural (syntactic) pattern recognition paradigm. In addition to the intrinsic advantages of this type of approach (intuitive description and human perception of the system functioning), our experiments show that the proposed method behaves comparably to the statistical approach that constitutes the basis of nearly all deployed systems. [source]


    Fracture analysis of composite co-cured structural joints using decohesion elements

    FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 9 2004
    P. P. CAMANHO
    ABSTRACT Delamination is one of the predominant forms of failure in laminated composite structures, especially when there is no reinforcement in the thickness direction. To develop composite structures that are more damage tolerant, it is necessary to understand how delamination develops, and how it can affect the residual performance. A number of factors such as residual thermal stresses, matrix-curing shrinkage and manufacturing defects affect how damage will grow in a composite structure. It is important to develop computationally efficient analysis methods that can account for all such factors. The objective of the current work is to apply a newly developed decohesion element to investigate the debond strength of skin-stiffener composite specimens. The initiation of delaminations and the propagation of delamination fronts are investigated. The numerical predictions are compared with published experimental results. [source]


    The optimization of protein secondary structure determination with infrared and circular dichroism spectra

    FEBS JOURNAL, Issue 14 2004
    Keith A. Oberg
    We have used the circular dichroism and infrared spectra of a specially designed 50 protein database [Oberg, K.A., Ruysschaert, J.M. & Goormaghtigh, E. (2003) Protein Sci. 12, 2015–2031] in order to optimize the accuracy of spectroscopic protein secondary structure determination using multivariate statistical analysis methods. The results demonstrate that when the proteins are carefully selected for the diversity in their structure, no smaller subset of the database contains the necessary information to describe the entire set. One conclusion of the paper is therefore that large protein databases, assembled under stringent selection criteria, are necessary for the prediction of unknown proteins. A second important conclusion is that only the comparison of analyses run on circular dichroism and infrared spectra independently is able to identify failed solutions in the absence of known structure. Interestingly, it was also found in the course of this study that the amide II band has high information content and could be used alone for secondary structure prediction in place of amide I. [source]
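    The calibration idea behind such spectroscopic structure determination can be sketched in a few lines. The snippet below is an illustrative sketch, not the authors' method: it builds a synthetic spectral database with known structure fractions (all data, dimensions and the linear-mixture assumption are invented), fits a least-squares calibration, and predicts the fractions of a held-out spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "database": 50 proteins, 100 spectral points, 3 structure
# fractions (helix, sheet, other) that sum to 1. Spectra are generated
# as linear mixtures of 3 hypothetical pure-structure basis spectra.
n_prot, n_wave = 50, 100
basis = rng.normal(size=(3, n_wave))         # invented basis spectra
Y = rng.dirichlet(np.ones(3), size=n_prot)   # known structure fractions
X = Y @ basis + 0.01 * rng.normal(size=(n_prot, n_wave))

# Calibrate: least-squares mapping from spectrum to structure fractions.
B, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Predict fractions for a new, noiseless spectrum with known composition.
y_true = np.array([0.5, 0.3, 0.2])
y_pred = (y_true @ basis) @ B
print(np.round(y_pred, 2))
```

    Real calibrations use more robust multivariate estimators than plain least squares, but the database-size argument in the abstract applies to any such mapping: the training set must span the structural diversity of the proteins to be predicted.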


    A Closer Look Inside Nanotubes: Pore Structure Evaluation of Anodized Alumina Templated Carbon Nanotube Membranes Through Adsorption and Permeability Studies

    ADVANCED FUNCTIONAL MATERIALS, Issue 15 2010
    Georgios Pilatos
    Abstract Although hollow nanostructures, such as nanotubes, represent a major portion of nanoscaled materials with a tremendously large application range, a detailed evaluation of their internal characteristics still remains elusive. Transmission electron microscopy is the most common analytical technique to examine the internal configuration of these structures, yet it can only provide evidence of a minimal portion of the overall material and thus cannot be reliably generalized. In the present paper, in addition to electron microscopy and other spot-size analysis methods (X-ray diffraction, Raman spectroscopy, etc.), a combination of techniques including adsorption, permeability, and relative permeability are employed in order to provide important insights into various crucial details of the overall internal surface and hollow-space characteristics of carbon nanotube (CNT) arrays and membranes. The CNT arrays are fabricated using anodized alumina as a template in a flow-through chemical vapor deposition (CVD) reactor. This is the first systematic approach for investigating the internal configuration of template-based CNT arrays in detail. Key findings are made for the customized optimization of the resulting nanotube membranes for a variety of applications, including separations, nanofluidics and nanoreactors, biological capturing and purification, and controlled drug delivery and release. [source]


    Interpreting analyses of continuous covariates in affected sibling pair linkage studies

    GENETIC EPIDEMIOLOGY, Issue 6 2007
    Silke Schmidt
    Abstract Datasets collected for linkage analyses of complex human diseases often include a number of clinical or environmental covariates. In this study, we evaluated the performance of three linkage analysis methods when the relationship between continuous covariates and disease risk or linkage heterogeneity was modeled in three different ways: (1) The covariate distribution is determined by a quantitative trait locus (QTL), which contributes indirectly to the disease risk; (2) the covariate is not genetically determined, but influences the disease risk through statistical interaction with a disease susceptibility locus; (3) the covariate distribution differs in families linked or unlinked to a particular disease susceptibility locus. We analyzed simulated datasets with a regression-based QTL analysis, a nonparametric analysis of the binary affection status, and the ordered subset analysis (OSA). We found that a significant OSA result may be due to a gene that influences variability in the population distribution of a continuous disease risk factor. Conversely, a regression-based QTL analysis may detect the presence of gene-environment (G × E) interaction in a sample of primarily affected individuals. The contribution of unaffected siblings and the size of baseline lod scores may help distinguish between QTL and G × E models. As illustrated by a linkage study of multiplex families with age-related macular degeneration, our findings assist in the interpretation of analysis results in real datasets. They suggest that the side-by-side evaluation of OSA and QTL results may provide important information about the relationship of measured covariates with either disease risk or linkage heterogeneity. Genet. Epidemiol. 2007. © 2007 Wiley-Liss, Inc. [source]


    On the Application of Inductive Machine Learning Tools to Geographical Analysis

    GEOGRAPHICAL ANALYSIS, Issue 2 2000
    Mark Gahegan
    Inductive machine learning tools, such as neural networks and decision trees, offer alternative methods for classification, clustering, and pattern recognition that can, in theory, extend to the complex or "deep" data sets that pervade geography. By contrast, traditional statistical approaches may fail, due to issues of scalability and flexibility. This paper discusses the role of inductive machine learning as it relates to geographical analysis. The discussion presented is not based on comparative results or on mathematical description, but instead focuses on the often subtle ways in which the various inductive learning approaches differ operationally, describing (1) the manner in which the feature space is partitioned or clustered, (2) the search mechanisms employed to identify good solutions, and (3) the different biases that each technique imposes. The consequences arising from these issues, when considering complex geographic feature spaces, are then described in detail. The overall aim is to provide a foundation upon which reliable inductive analysis methods can be constructed, instead of depending on piecemeal or haphazard experimentation with the various operational criteria that inductive learning tools call for. Often, it would appear that these criteria are not well understood by practitioners in the geographic sphere, which can lead to difficulties in configuration and operation, and ultimately to poor performance. [source]
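    To make the partitioning behaviour discussed above concrete, here is a minimal, self-contained sketch (not tied to any particular package) of the kind of axis-aligned split a decision-tree learner searches for: a single "stump" that scans candidate thresholds on each feature and keeps the split minimising Gini impurity. The two-class points are invented for illustration.

```python
# A one-split decision "stump": the basic partitioning step that a
# decision tree applies recursively to carve up a feature space.

def gini(labels):
    """Gini impurity of a list of binary (0/1) labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def best_split(points, labels):
    """points: list of (x, y) tuples; returns (axis, threshold)."""
    best = (None, None, float("inf"))
    for axis in (0, 1):
        for thr in sorted({p[axis] for p in points}):
            left = [l for p, l in zip(points, labels) if p[axis] <= thr]
            right = [l for p, l in zip(points, labels) if p[axis] > thr]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if score < best[2]:
                best = (axis, thr, score)
    return best[0], best[1]

# Two classes separable by a threshold on the second feature.
pts = [(1, 1), (2, 1), (1, 5), (2, 6)]
lab = [0, 0, 1, 1]
axis, thr = best_split(pts, lab)
print(axis, thr)   # splits on axis 1
```

    The greedy, axis-aligned nature of this search is exactly the kind of operational bias the paper highlights: it partitions geographic feature spaces into rectangles, which may or may not suit the structure of the data.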


    Hepatitis C infection in hemodialysis patients in Iran: A systematic review

    HEMODIALYSIS INTERNATIONAL, Issue 3 2010
    Seyed-Moayed ALAVIAN
    Abstract Hemodialysis (HD) patients are recognized as one of the high-risk groups for hepatitis C virus (HCV) infection. The prevalence of HCV infection varies widely, between 5.5% and 24%, among different Iranian populations. Preventive programs for reducing HCV infection prevalence in these patients require accurate information. In the present study, we estimated HCV infection prevalence in Iranian HD patients. In this systematic review, we collected all published and unpublished documents related to HCV infection prevalence in Iranian HD patients from April 2001 to March 2008. We selected descriptive/analytic cross-sectional studies/surveys that have sufficiently declared objectives, a proper sampling method with identical and valid measurement instruments for all study subjects, and proper analysis methods regarding sampling design and demographic adjustments. We used a meta-analysis method to calculate nationwide prevalence estimation. Eighteen studies from 12 provinces (comprising 49.02% of the total Iranian population) reported the prevalence of HCV infection in Iranian HD patients. The HCV infection prevalence in Iranian HD patients is 7.61% (95% confidence interval: 6.06–9.16%) with the recombinant immunoblot assay method. Iran is among countries with low HCV infection prevalence in HD patients. [source]
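    The pooling step of such a meta-analysis can be illustrated with a fixed-effect inverse-variance sketch. The study counts below are invented for illustration and are not taken from the review; real analyses typically also assess heterogeneity and may use random-effects models.

```python
import math

# Hypothetical study data: (HCV-positive cases, sample size) per study.
studies = [(12, 150), (30, 400), (8, 90), (25, 310)]

# Fixed-effect inverse-variance pooling of prevalences
# (normal approximation to the binomial proportion).
weights, estimates = [], []
for cases, n in studies:
    p = cases / n
    var = p * (1 - p) / n
    weights.append(1 / var)       # weight = inverse of the variance
    estimates.append(p)

pooled = sum(w * p for w, p in zip(weights, estimates)) / sum(weights)
se = math.sqrt(1 / sum(weights))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"{100 * pooled:.2f}% (95% CI {100 * lo:.2f}-{100 * hi:.2f}%)")
```

    Larger studies get larger weights, so the pooled estimate is pulled toward the most precise prevalences, which is why province coverage and sample sizes matter for a nationwide figure.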


    Simultaneous assessment of DNA ploidy and biomarker expression in paraffin-embedded tissue sections

    HISTOPATHOLOGY, Issue 1 2010
    Stijn J H M Fleskens
    Fleskens S J H M, Takes R P, Otte-Höller I, van Doesburg L, Smeets A, Speel E-J M, Slootweg P J & van der Laak J A W M (2010) Histopathology 57, 14–26 Simultaneous assessment of DNA ploidy and biomarker expression in paraffin-embedded tissue sections Aims: Aneuploidy is a potential biomarker for predicting progression of premalignancies. Ploidy assessment is mostly performed on nuclei isolated from tissue sections. Ploidy assessment in situ in tissue sections may be a large improvement, enabling selective sampling of nuclei and thus allowing the correlation between ploidy and histology. Existing ploidy analysis methods in sections suffer from limited sensitivity. The aim was to reliably assess ploidy in sections, combined with simultaneous assessment of other markers at the individual cell level. Methods and results: Ploidy was measured in 22 paraffin-embedded oral premalignancies. The DNA stoichiometric Feulgen procedure was used on isolated nuclei, as well as fluorescence in situ hybridization analysis. In tissue sections, Feulgen was combined with immunohistochemistry for the Ki67 proliferation marker, enabling distinction between cycling euploid and aneuploid cells. Aneuploidy was reliably detected in tissue sections (sensitivity 100%, specificity 92%). One section in which aneuploidy was detected was misclassified in isolated nuclei analysis. Sections were also successfully analysed using our model combined with the DNA double-strand-break marker γ-H2AX in fluorescence microscopy, underlining the power of biomarker evaluation on single cells in tissue sections. Conclusions: The analysis model proposed in this study enables the combined analysis of histology, genotypic and phenotypic information. [source]


    Estimating the number of independent components for functional magnetic resonance imaging data

    HUMAN BRAIN MAPPING, Issue 11 2007
    Yi-Ou Li
    Abstract Multivariate analysis methods such as independent component analysis (ICA) have been applied to the analysis of functional magnetic resonance imaging (fMRI) data to study brain function. Because of the high dimensionality and high noise level of the fMRI data, order selection, i.e., estimation of the number of informative components, is critical to reduce over/underfitting in such methods. Dependence among fMRI data samples in the spatial and temporal domain limits the usefulness of the practical formulations of information-theoretic criteria (ITC) for order selection, since they are based on the likelihood of independent and identically distributed (i.i.d.) data samples. To address this issue, we propose a subsampling scheme to obtain a set of effectively i.i.d. samples from the dependent data samples and apply the ITC formulas to the effectively i.i.d. sample set for order selection. We apply the proposed method to simulated data and show that it significantly improves the accuracy of order selection from dependent data. We also perform order selection on fMRI data from a visuomotor task and show that the proposed method alleviates the over-estimation of the number of brain sources due to the intrinsic smoothness and the smooth preprocessing of fMRI data. We use the software package ICASSO (Himberg et al. [2004]: Neuroimage 22:1214–1222) to analyze the independent component (IC) estimates at different orders and show that, when ICA is performed at overestimated orders, the stability of the IC estimates decreases and the estimation of task-related brain activations degrades. Hum Brain Mapp, 2007. © 2007 Wiley-Liss, Inc. [source]
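    Order selection with an information-theoretic criterion can be sketched as follows. This is a generic Wax-Kailath MDL example on invented i.i.d. data, not the paper's fMRI pipeline; for smooth fMRI data, the columns of X would first be subsampled to make them effectively i.i.d., as the authors propose.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: p channels observing q independent sources in noise.
p, q, N = 8, 3, 4000
A = rng.normal(size=(p, q))                       # mixing matrix
X = A @ rng.normal(size=(q, N)) + 0.1 * rng.normal(size=(p, N))

# Eigenvalues of the sample covariance, largest first.
eig = np.sort(np.linalg.eigvalsh(X @ X.T / N))[::-1]

def mdl(k):
    """Wax-Kailath MDL criterion for k signals among p channels."""
    tail = eig[k:]                                 # presumed noise eigenvalues
    ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)   # geo/arith mean
    return -N * (p - k) * np.log(ratio) + 0.5 * k * (2 * p - k) * np.log(N)

order = min(range(p), key=mdl)
print(order)   # -> 3
```

    The criterion trades off how equal the trailing eigenvalues look (the likelihood term) against model complexity (the penalty term); dependent samples inflate the effective N, which is exactly why the subsampling correction matters.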


    Classification of upper lateral body shapes for the apparel industry

    HUMAN FACTORS AND ERGONOMICS IN MANUFACTURING & SERVICE INDUSTRIES, Issue 5 2010
    Young Lim Choi
    Abstract The lateral body shape is a critical determiner of the fit of garments. Either visual assessment or statistical analysis methods have been used to classify the lateral body types. These methods are limited to some extent since various anthropometric features inherently coexist and interact in a human body shape. This study aims to develop objective criteria for the classification of upper lateral body shapes integrating visual assessment and statistical analysis. The three-dimensional scan data of 246 women between 18 and 49 years old were visually classified into four lateral body shapes by an expert panel. In addition, the back space and lateral angles extracted from the scan data were employed for further statistical analyses. Multinomial logistic regressions were used to develop logit models for lateral body types. It was concluded that the resulting logit models could classify lateral body types and calculate the probability of a set of body scan data being classified as a certain lateral body type. It is expected that this probability might be a guideline to quantify the characteristics of the lateral body shape in the apparel industry. © 2010 Wiley Periodicals, Inc. [source]