Large Set (large + set)



Selected Abstracts


Learning-based 3D face detection using geometric context

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 4-5 2007
Yanwen Guo
Abstract In the computer graphics community, the face model is one of the most useful entities. The automatic detection of 3D face models has special significance for computer graphics, vision, and human-computer interaction. However, few methods have been dedicated to this task. This paper proposes a machine learning approach for fully automatic 3D face detection. To exploit facial features, we introduce geometric context, a novel shape descriptor which compactly encodes the distribution of local geometry and can be evaluated efficiently using a new volume encoding form, named integral volume. Geometric contexts over a 3D face offer a rich and discriminative representation of facial shapes and hence are well suited to classification. We adopt an AdaBoost learning algorithm to select the most effective geometric context-based classifiers and to combine them into a strong classifier. Given an arbitrary 3D model, our method first identifies symmetric parts as candidates with a new reflective symmetry detection algorithm, then uses the learned classifier to judge whether a face part exists. Experiments are performed on a large set of 3D face and non-face models, and the results demonstrate the high performance of our method. Copyright © 2007 John Wiley & Sons, Ltd. [source]
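The AdaBoost step the abstract describes (repeatedly selecting the single most discriminative feature-threshold classifier and re-weighting the training samples) can be sketched as below. This is a generic discrete-AdaBoost skeleton over decision stumps, not the paper's implementation; the scalar feature vectors merely stand in for geometric-context descriptor responses.

```python
import math

def adaboost(features, labels, rounds=10):
    """Minimal discrete AdaBoost over decision-stump weak classifiers.

    features: list of samples, each a list of scalar descriptor responses
              (stand-ins for geometric-context features).
    labels:   +1 (face) / -1 (non-face).
    Returns a list of (feature_index, threshold, polarity, alpha) stumps.
    """
    n = len(labels)
    w = [1.0 / n] * n                       # uniform sample weights
    strong = []
    for _ in range(rounds):
        best = None                         # (error, j, theta, polarity)
        for j in range(len(features[0])):
            for theta in sorted({x[j] for x in features}):
                for pol in (+1, -1):
                    # stump predicts pol if feature j exceeds theta
                    err = sum(wi for wi, x, y in zip(w, features, labels)
                              if (pol if x[j] > theta else -pol) != y)
                    if best is None or err < best[0]:
                        best = (err, j, theta, pol)
        err, j, theta, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)       # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)     # weak-classifier weight
        strong.append((j, theta, pol, alpha))
        # re-weight: emphasize misclassified samples
        w = [wi * math.exp(-alpha * y * (pol if x[j] > theta else -pol))
             for wi, x, y in zip(w, features, labels)]
        s = sum(w)
        w = [wi / s for wi in w]
    return strong

def classify(strong, x):
    """Sign of the alpha-weighted vote of all selected stumps."""
    score = sum(alpha * (pol if x[j] > theta else -pol)
                for j, theta, pol, alpha in strong)
    return 1 if score >= 0 else -1
```

The exhaustive stump search is quadratic in features times samples per round; the paper's integral-volume trick addresses exactly this cost of evaluating each candidate descriptor.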


Discovering data sources in a dynamic Grid environment

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 16 2007
Jürgen Göres
Abstract The successful adaptation of information integration techniques to the requirements of data Grids is essential for the proliferation of Grid technology. In addition to the well-known problems encountered when integrating heterogeneous sources, the dynamic Grid environment introduces new challenges. This paper discusses the problem of data source discovery, i.e. the selection of the most useful data sources for a given information demand out of a possibly very large set of candidates. We introduce the concept of data source utility and emphasize the pivotal role of semantic correspondences or schema matches for utility. Different variants of concrete utility measures used in an advanced Grid data source registry are presented. Copyright © 2007 John Wiley & Sons, Ltd. [source]


Performance of computationally intensive parameter sweep applications on Internet-based Grids of computers: the mapping of molecular potential energy hypersurfaces

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2007
S. Reyes
Abstract This work focuses on the use of computational Grids for processing the large set of jobs arising in parameter sweep applications. In particular, we tackle the mapping of molecular potential energy hypersurfaces. For computationally intensive parameter sweep problems, performance models are developed to compare the parallel computation in a multiprocessor system with the computation on an Internet-based Grid of computers. We find that the relative performance of the Grid approach increases with the number of processors, being independent of the number of jobs. The experimental data, obtained using electronic structure calculations, fit the proposed performance expressions accurately. To automate the mapping of potential energy hypersurfaces, an application based on GRID superscalar is developed. It is tested on the prototypical case of the internal dynamics of acetone. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Performance comparison of MPI and OpenMP on shared memory multiprocessors

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 1 2006
Géraud Krawezik
Abstract When using a shared memory multiprocessor, the programmer faces the issue of selecting the portable programming model which will provide the best performance. Even if they restrict their choice to the standard programming environments (MPI and OpenMP), they still have to select a programming approach from among MPI and the variety of OpenMP programming styles. To help the programmer in this decision, we compare MPI with three OpenMP programming styles (loop level, loop level with large parallel sections, SPMD) using a subset of the NAS benchmark (CG, MG, FT, LU), two dataset sizes (A and B), and two shared memory multiprocessors (IBM SP3 NightHawk II, SGI Origin 3800). We have developed the first SPMD OpenMP version of the NAS benchmark and gathered other OpenMP versions from independent sources (PBN, SDSC and RWCP). Experimental results demonstrate that OpenMP provides competitive performance compared with MPI for a large set of experimental conditions. Not surprisingly, the two best OpenMP versions are those requiring the strongest programming effort. MPI still provides the best performance under some conditions. We present breakdowns of the execution times and measurements of hardware performance counters to explain the performance differences. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Identification of germ plasm-associated transcripts by microarray analysis of Xenopus vegetal cortex RNA

DEVELOPMENTAL DYNAMICS, Issue 6 2010
Tawny N. Cuykendall
Abstract RNA localization is a common mechanism for regulating cell structure and function. Localized RNAs in Xenopus oocytes are critical for early development, including germline specification by the germ plasm. Despite the importance of these localized RNAs, only approximately 25 have been identified and fewer are functionally characterized. Using microarrays, we identified a large set of localized RNAs from the vegetal cortex. Overall, our results indicate a minimum of 275 localized RNAs in oocytes, or 2–3% of maternal transcripts, which is in general agreement with previous findings. We validated vegetal localization for 24 candidates and further characterized three genes expressed in the germ plasm. We identified novel germ plasm expression for reticulon 3.1, exd2 (a novel exonuclease-domain encoding gene), and a putative noncoding RNA. Further analysis of these and other localized RNAs will likely identify new functions of germ plasm and facilitate the identification of cis-acting RNA localization elements. Developmental Dynamics 239:1838–1848, 2010. © 2010 Wiley-Liss, Inc. [source]


Residence time and potential range: crucial considerations in modelling plant invasions

DIVERSITY AND DISTRIBUTIONS, Issue 1 2007
John R. U. Wilson
ABSTRACT A prime aim of invasion biology is to predict which species will become invasive, but retrospective analyses have so far failed to develop robust generalizations. This is because many biological, environmental, and anthropogenic factors interact to determine the distribution of invasive species. However, in this paper we also argue that many analyses of invasiveness have been flawed by not considering several fundamental issues: (1) the range size of an invasive species depends on how much time it has had to spread (its residence time); (2) the range size and spread rate are mediated by the total extent of suitable (i.e. potentially invasible) habitat; and (3) the range size and spread rate depend on the frequency and intensity of introductions (propagule pressure), the position of founder populations in relation to the potential range, and the spatial distribution of the potential range. We explored these considerations using a large set of invasive alien plant species in South Africa for which accurate distribution data and other relevant information were available. Species introduced earlier and those with larger potential ranges had larger current range sizes, but we found no significant effect of the spatial distribution of potential ranges on current range sizes, and data on propagule pressure were largely unavailable. However, crucially, we showed that: (1) including residence time and potential range always significantly increases the explanatory power of the models; and (2) residence time and potential range can affect which factors emerge as significant determinants of invasiveness. Therefore, analyses not including potential range and residence time can come to misleading conclusions. When these factors were taken into account, we found that nitrogen-fixing plants and plants invading arid regions have spread faster than other species, but these results were phylogenetically constrained. 
We also show that, when analysed in the context of residence time and potential range, variation in range size among invasive species is implicitly due to variation in spread rates, and that, by explicitly assuming a particular model of spread, it is possible to estimate changes in the rates of plant invasions through time. We believe that invasion biology can develop generalizations that are useful for management, but only in the context of a suitable null model. [source]


Belief-Free Equilibria in Repeated Games

ECONOMETRICA, Issue 2 2005
Jeffrey C. Ely
We introduce a class of strategies that generalizes examples constructed in two-player games under imperfect private monitoring. A sequential equilibrium is belief-free if, after every private history, each player's continuation strategy is optimal independently of his belief about his opponents' private histories. We provide a simple and sharp characterization of equilibrium payoffs using those strategies. While such strategies support a large set of payoffs, they are not rich enough to generate a folk theorem in most games besides the prisoner's dilemma, even when noise vanishes. [source]


The missing dark matter in the wealth of nations and its implications for global imbalances

ECONOMIC POLICY, Issue 51 2007
Ricardo Hausmann
SUMMARY Dark matter and international imbalances Current account statistics may not be good indicators of the evolution of a country's net foreign assets and of its external position's sustainability. The value of existing assets may vary independently of current account flows, so-called 'return privileges' may allow some countries to obtain abnormal returns, and mismeasurement of FDI, unreported trade of insurance or liquidity services, and debt relief may also play a role. We analyse the relevant evidence in a large set of countries and periods, and examine measures of net foreign assets obtained by capitalizing the net investment income and then estimating the current account from the changes in this stock of foreign assets. We call dark matter the difference between our measure of net foreign assets and that measured by official statistics. We find it to be important for many countries, analyse its relationship with theoretically relevant factors, and note that the resulting perspective tends to make global net asset positions appear relatively stable. , Ricardo Hausmann and Federico Sturzenegger [source]


Transcriptional control of the pvdS iron starvation sigma factor gene by the master regulator of sulfur metabolism CysB in Pseudomonas aeruginosa

ENVIRONMENTAL MICROBIOLOGY, Issue 6 2010
Francesco Imperi
Summary In the Gram-negative pathogen Pseudomonas aeruginosa, the alternative sigma factor PvdS acts as a key regulator of the response to iron starvation. PvdS also controls P. aeruginosa virulence, as it drives the expression of a large set of genes primarily implicated in biogenesis and transport of the pyoverdine siderophore and synthesis of extracellular factors, such as protease PrpL and exotoxin A. Besides the ferric uptake regulatory protein Fur, which shuts off pvdS transcription under iron-replete conditions, no additional regulatory factor(s) controlling the pvdS promoter activity have been characterized so far. Here, we used the promoter region of pvdS as bait to tentatively capture, by DNA-protein affinity purification, P. aeruginosa proteins that are able to bind specifically to the pvdS promoter. This led to the identification and functional characterization of the LysR-like transcription factor CysB as a novel regulator of pvdS transcription. The CysB protein directly binds to the pvdS promoter in vitro and acts as a positive regulator of PvdS expression in vivo. The absence of a functional CysB protein results in an approximately 50% reduction in the expression of PvdS-dependent virulence phenotypes. Given the role of CysB as master regulator of sulfur metabolism, our findings establish a novel molecular link between the iron and sulfur regulons in P. aeruginosa. [source]


Statistical tests and power analysis for three in-vivo bioassays to determine the quality of marine sediments

ENVIRONMETRICS, Issue 3 2002
Nelly van der Hoeven
Abstract Statistical tests are recommended for three marine sediment in-vivo bioassays. In two bioassays (Corophium volutator and Echinocardium cordatum), mortality in the sediment is compared with that in a control; an unconditional 2 × 2 test is recommended. For one bioassay (Rotoxkit M™ with Brachionus plicatilis), mortality in a dilution series of pore water is compared with the mortality in a control; the Williams test for trends is recommended. For each of these tests the power to assess an effect has been calculated. The number of replicates recommended in the standardized test protocol only allows large effects to be observed in almost all (95 per cent) of the experiments. Given the control mortality rates estimated from a large set of controls, a power of 95 per cent will only be reached if the mortality rate in the tested sediment is over 30 per cent for C. volutator and almost 60 per cent for E. cordatum. To reach this power for bioassays with B. plicatilis, where five concentrations are compared with a control, the mortality rate at the lowest effect concentration should be about 35 per cent. As an alternative to no-effect testing, it is suggested that one instead test whether the effect of a treatment remains below some chosen minimal relevant effect (MRE). Given an MRE at a fixed mortality rate of 25 per cent and α = 0.05, at least 55 individuals are necessary to be reasonably sure (95 per cent) that a mortality of 10 per cent will not incorrectly be declared toxic. The tests for mortality are based on the assumption that the survival probabilities of individuals within a test vessel are independent. We describe a method to test this assumption and apply it to the data on C. volutator. Copyright © 2002 John Wiley & Sons, Ltd. [source]
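The power figures above depend on the control mortality rate, the treated mortality rate, and the number of individuals per group. A rough stand-in for such a calculation, using the one-sided two-proportion z-test under a normal approximation rather than the exact unconditional test the authors recommend, looks like this (all numbers illustrative):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_proportions(p_control, p_treated, n):
    """Approximate one-sided power (at the 5% level) to detect
    p_treated > p_control with n individuals per group, using the
    two-proportion z-test under a normal approximation.  This is a
    rough stand-in for the exact unconditional 2 x 2 test, not the
    authors' calculation.
    """
    z_crit = 1.6449                        # one-sided 5% critical value
    p_bar = 0.5 * (p_control + p_treated)
    se_null = math.sqrt(2.0 * p_bar * (1.0 - p_bar) / n)
    se_alt = math.sqrt((p_control * (1.0 - p_control)
                        + p_treated * (1.0 - p_treated)) / n)
    # reject when the observed difference exceeds z_crit * se_null
    return 1.0 - phi((z_crit * se_null - (p_treated - p_control)) / se_alt)
```

As in the abstract, power grows with both the effect size and the number of replicates, and modest group sizes only detect large mortality differences reliably.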


Is biofuel policy harming biodiversity in Europe?

GCB BIOENERGY, Issue 1 2009
JEANNETTE EGGERS
Abstract We assessed the potential impacts on biodiversity in Europe of land-use changes resulting from a change in the current biofuel policy. We evaluated the possible impact of both arable and woody biofuel crops on changes in the distribution of 313 species from different taxonomic groups. Using species-specific information on habitat suitability as well as land-use simulations for three different biofuel policy options, we downscaled available species distribution data from the original resolution of 50 km to 1 km. The downscaled maps were then applied to analyse potential changes in habitat size and species composition at different spatial levels. Our results indicate that more species might suffer habitat losses than benefit under a doubled biofuel target, while abolishing the biofuel target would mainly have positive effects. However, the possible impacts vary spatially and depend on the biofuel crop choice, with woody crops being less detrimental than arable crops. Our results give policy and decision makers an indication of what might happen to biodiversity under a changed biofuel policy in the European Union. The presented approach is innovative in that, to date, no comparable policy impact assessment has been applied to such a large set of key species at the European scale. [source]


Novel regions of acquired uniparental disomy discovered in acute myeloid leukemia

GENES, CHROMOSOMES AND CANCER, Issue 9 2008
Manu Gupta
The acquisition of uniparental disomy (aUPD) in acute myeloid leukemia (AML) results in homozygosity for known gene mutations. Uncovering novel regions of aUPD has the potential to identify previously unknown mutational targets. We therefore aimed to develop a map of the regions of aUPD in AML. Here, we have analyzed a large set of diagnostic AML samples (n = 454) from young adults (age: 15–55 years) using genotype arrays. Acquired UPD was found in 17% of the samples, with a nonrandom distribution particularly affecting chromosome arms 13q, 11p, and 11q. Novel recurrent regions of aUPD were uncovered at 2p, 17p, 2q, 17q, 1p, and Xq. Overall, aUPDs were observed across all cytogenetic risk groups, although samples with aUPD13q (5.4% of samples) belonged exclusively to the intermediate-risk group as defined by cytogenetics. All cases with a high FLT3-ITD level, measured previously, had aUPD13q covering the FLT3 gene. Significantly, none of the samples with FLT3-ITD−/FLT3-TKD+ mutations exhibited aUPD13q. Of the 119 aUPDs observed, the majority (87%) were due to mitotic recombination while only 13% were due to nondisjunction. This study demonstrates that aUPD is a frequent and significant finding in AML and pinpoints regions that may contain novel mutational targets. © 2008 Wiley-Liss, Inc. [source]


Genome-wide association analyses of quantitative traits: the GAW16 experience

GENETIC EPIDEMIOLOGY, Issue S1 2009
Saurabh Ghosh
Abstract The group that formed on the theme of genome-wide association analyses of quantitative traits (Group 2) in the Genetic Analysis Workshop 16 comprised eight sets of investigators. Three data sets were available: one on autoantibodies related to rheumatoid arthritis provided by the North American Rheumatoid Arthritis Consortium; the second on anthropometric, lipid, and biochemical measures provided by the Framingham Heart Study (FHS); and the third a simulated data set modeled after FHS. The different investigators in the group addressed a large set of statistical challenges and applied a wide spectrum of association methods in analyzing quantitative traits at the genome-wide level. While some previously reported genes were validated, some novel chromosomal regions provided significant evidence of association in multiple contributions in the group. In this report, we discuss the different strategies explored by the different investigators with the common goal of improving the power to detect association. Genet. Epidemiol. 33 (Suppl. 1):S13–S18, 2009. © 2009 Wiley-Liss, Inc. [source]


Locational Equilibria in Weberian Agglomeration

GEOGRAPHICAL ANALYSIS, Issue 4 2008
Dean M. Hanink
A simple Weberian agglomeration model is developed and then extended as an innovative fixed-charge colocation model over a large set of locational possibilities. The model is applied to cases in which external economies (EE) arise due to colocation alone and also cases in which EE arise due to city size. Solutions to the model are interpreted in the context of contemporary equilibrium analysis, which allows Weberian agglomeration to be interpreted in a more general way than in previous analyses. Within that context, the Nash points and Pareto efficient points in the location patterns derived in the model are shown to rarely coincide. The applications consider agglomeration from two perspectives: one is the colocation behavior of producers as the agents of agglomeration and the other is the interaction between government and those agents in the interest of agglomeration policy. Extending the analysis to games, potential Pareto efficiency and Hicks optimality are considered with respect to side payments between producers and with respect to appropriate government incentives toward agglomeration. [source]
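The observation that Nash points and Pareto-efficient points rarely coincide can be illustrated on any small bimatrix game. The sketch below enumerates both sets for a toy two-player game; the payoff table in the test is invented for illustration and is unrelated to the paper's Weberian model.

```python
def nash_and_pareto(payoffs):
    """Enumerate pure-strategy Nash equilibria and Pareto-efficient
    outcomes of a finite two-player game.

    payoffs: dict mapping (row_action, col_action) -> (u1, u2).
    Returns (nash_set, pareto_set) of strategy profiles.
    """
    rows = {i for i, _ in payoffs}
    cols = {j for _, j in payoffs}
    nash = set()
    for (i, j), (u1, u2) in payoffs.items():
        # Nash: no profitable unilateral deviation for either player
        if (all(payoffs[(k, j)][0] <= u1 for k in rows)
                and all(payoffs[(i, l)][1] <= u2 for l in cols)):
            nash.add((i, j))
    pareto = set()
    for cell, (u1, u2) in payoffs.items():
        # efficient iff no outcome weakly improves both players
        # and strictly improves at least one
        dominated = any(v1 >= u1 and v2 >= u2 and (v1, v2) != (u1, u2)
                        for v1, v2 in payoffs.values())
        if not dominated:
            pareto.add(cell)
    return nash, pareto
```

On a prisoner's-dilemma-style table the two sets are disjoint, which is the extreme case of the non-coincidence noted in the abstract.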


Common birds facing global changes: what makes a species at risk?

GLOBAL CHANGE BIOLOGY, Issue 1 2004
Romain Julliard
Abstract Climate change, habitat degradation, and direct exploitation are thought to threaten biodiversity. But what makes some species more sensitive to global change than others? Approaches to this question have relied on comparing the fate of contrasting groups of species. However, if some ecological parameter affects the fate of species faced with global change, species response should vary smoothly along the parameter gradient. Thus, grouping species into a few, often two, discrete classes weakens the approach. Using data from the common breeding bird survey in France (a large set of species with much variability with respect to the variables considered), we show that a quantitative measure of habitat specialization and of latitudinal distribution both predict recent 13-year trends of population abundance among 77 terrestrial species: the more northerly distributed and the more specialized a species is, the sharper its decline. On the other hand, neither hunting status, migration strategy, nor body mass predicted population growth rate variation among common bird species. Overall, these results are qualitatively very similar to the equivalent relationships found among British butterfly populations. This constitutes additional evidence that biodiversity in Western Europe is under the double negative influence of climate change and land-use change. [source]


Competitive Hebbian learning and the hippocampal place cell system: Modeling the interaction of visual and path integration cues

HIPPOCAMPUS, Issue 3 2001
Alex Guazzelli
Abstract The hippocampus has long been thought essential for implementing a cognitive map of the environment. However, almost 30 years since place cells were found in rodent hippocampal field CA1, it is still unclear how such an allocentric representation arises from an egocentrically perceived world. By means of a competitive Hebbian learning rule responsible for coding visual and path integration cues, our model is able to explain the diversity of place cell responses observed in a large set of electrophysiological experiments with a single fixed set of parameters. Experiments included changes observed in place fields due to exploration of a new environment, darkness, retrosplenial cortex inactivation, and removal, rotation, and permutation of landmarks. To code for visual cues for each landmark, we defined two perceptual schemas representing landmark bearing and distance information over a linear array of cells. The information conveyed by the perceptual schemas is further processed through a network of adaptive layers which ultimately modulate the resulting activity of our simulated place cells. In path integration terms, our system is able to dynamically remap a bump of activity coding for the displacement of the animal in relation to an environmental anchor. We hypothesize that path integration information is computed in the rodent posterior parietal cortex and conveyed to the hippocampus where, together with visual information, it modulates place cell activity. The resulting network yields a more direct treatment of partial remapping of place fields than other models. In so doing, it makes new predictions regarding the nature of the interaction between visual and path integration cues during new learning and when the system is challenged with environmental changes. Hippocampus 2001;11:216–239. © 2001 Wiley-Liss, Inc. [source]


Molecular screening of 980 cases of suspected hereditary optic neuropathy with a report on 77 novel OPA1 mutations,

HUMAN MUTATION, Issue 7 2009
Marc Ferré
Abstract We report the results of molecular screening in 980 patients carried out as part of their work-up for suspected hereditary optic neuropathies. All the patients were investigated for Leber's hereditary optic neuropathy (LHON) and autosomal dominant optic atrophy (ADOA), by searching for the ten primary LHON-causing mtDNA mutations and examining the entire coding sequences of the OPA1 and OPA3 genes, the two genes currently identified in ADOA. Molecular defects were identified in 440 patients (45% of screened patients). Among these, 295 patients (67%) had an OPA1 mutation, 131 patients (30%) had an mtDNA mutation, and 14 patients (3%), belonging to three unrelated families, had an OPA3 mutation. Interestingly, OPA1 mutations were found in 157 (40%) of the 392 apparently sporadic cases of optic atrophy. The eOPA1 locus-specific database now contains a total of 204 OPA1 mutations, including 77 novel OPA1 mutations reported here. The statistical analysis of this large set of mutations has led us to propose a diagnostic strategy that should help with the molecular work-up of optic neuropathies. Our results highlight the importance of investigating LHON-causing mtDNA mutations as well as OPA1 and OPA3 mutations in cases of suspected hereditary optic neuropathy, even in the absence of a family history of the disease. © 2009 Wiley-Liss, Inc. [source]


Spatial clustering of childhood cancer in Great Britain during the period 1969–1993

INTERNATIONAL JOURNAL OF CANCER, Issue 4 2009
Richard J.Q. McNally
Abstract The aetiology of childhood cancer is poorly understood. Both genetic and environmental factors are likely to be involved. The presence of spatial clustering is indicative of a very localized environmental component to aetiology. Spatial clustering is present when there are a small number of areas with greatly increased incidence or a large number of areas with moderately increased incidence. To determine whether localized environmental factors may play a part in childhood cancer aetiology, we analyzed for spatial clustering using a large set of national population-based data from Great Britain diagnosed during 1969–1993. The Potthoff-Whittinghill method was used to test for extra-Poisson variation (EPV). Thirty-two thousand three hundred and twenty-three cases were allocated to 10,444 wards using diagnosis addresses. Analyses showed statistically significant evidence of clustering for acute lymphoblastic leukaemia (ALL) over the whole age range (estimate of EPV = 0.05, p = 0.002) and for ages 1–4 years (estimate of EPV = 0.03, p = 0.015). Soft-tissue sarcoma (estimate of EPV = 0.03, p = 0.04) and Wilms tumours (estimate of EPV = 0.04, p = 0.007) also showed significant clustering. Clustering tended to persist across different time periods for cases of ALL (estimate of between-time period EPV = 0.04, p = 0.003). In conclusion, we observed low level spatial clustering that is attributable to a limited number of cases. This suggests that environmental factors, which in some locations display localized clustering, may be important aetiological agents in these diseases. For ALL and soft tissue sarcoma, but not Wilms tumour, common infectious agents may be likely candidates. © 2008 Wiley-Liss, Inc. [source]
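Extra-Poisson variation of the kind tested above can be illustrated with a simpler classical statistic, the index of dispersion. This is not the Potthoff-Whittinghill statistic used in the study, only a check in the same spirit: under a Poisson null (no clustering) the variance of ward counts should match their mean.

```python
def dispersion_test(counts):
    """Index-of-dispersion check for extra-Poisson variation.

    Under a Poisson null (no clustering), the statistic
        D = sum_i (n_i - mean)^2 / mean
    is approximately chi-square with len(counts) - 1 degrees of
    freedom; D much larger than the degrees of freedom suggests
    overdispersion, i.e. clustering of cases across areas.
    Returns (D, degrees_of_freedom).
    """
    k = len(counts)
    mean = sum(counts) / k
    d = sum((n - mean) ** 2 for n in counts) / mean
    return d, k - 1
```

For example, four wards with identical counts give D = 0 (no evidence of clustering), while the same total concentrated in one ward inflates D far beyond its expected value.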


Suppression of the TIG3 tumor suppressor gene in human ovarian carcinomas is mediated via mitogen-activated kinase-dependent and -independent mechanisms

INTERNATIONAL JOURNAL OF CANCER, Issue 6 2005
Kristina Lotz
Abstract The TIG3 gene is a retinoic acid-inducible class II tumor suppressor gene downregulated in several human tumors and malignant cell lines. Diminished TIG3 expression correlates with decreased differentiation, whereas forced expression of TIG3 suppresses oncogenic signaling pathways and subsequently induces differentiation or apoptosis in tumor cells. Analysis of TIG3 mRNA expression in a large set of cDNA pools derived from matched tumor and normal human tissues showed a significant downregulation of TIG3 in 29% of the cDNA samples obtained from ovarian carcinomas. Using in situ hybridization, we demonstrated expression of TIG3 in the epithelial lining of 7 normal ovaries but loss of TIG3 expression in 15/19 human ovarian carcinoma tissues. In SKOV-3, CAOV-3 and ES-2 ovarian carcinoma cell lines, downregulation of TIG3 mRNA was reversible and dependent on an activated MEK-ERK signaling pathway. Re-expression of TIG3 mRNA in these cells upon specific interference with the MEK pathway was correlated with growth inhibition of the cells. In OVCAR-3 and A27/80 ovarian carcinoma cells, TIG3 suppression is MEK-ERK independent, but expression could be reconstituted upon interferon gamma (IFN-γ) induction. Overexpression of TIG3 in A27/80 ovarian carcinoma cells significantly impaired cell growth, although, despite increased mRNA levels, TIG3 protein was hardly detectable. These results suggest that TIG3 is negatively regulated by an activated MEK-ERK signaling pathway. Additional mechanisms that are independent of MEK, and that partially involve interferon-responsive components, must also interfere with TIG3 expression. © 2005 Wiley-Liss, Inc. [source]


A high proportion of founder BRCA1 mutations in Polish breast cancer families

INTERNATIONAL JOURNAL OF CANCER, Issue 5 2004
Bohdan Górski
Abstract Three mutations in BRCA1 (5382insC, C61G and 4153delA) are common in Poland and account for the majority of mutations identified to date in Polish breast and breast–ovarian cancer families. It is not known, however, to what extent these 3 founder mutations account for all of the BRCA mutations distributed throughout the country. This question has important implications for health policy and the design of epidemiologic studies. To establish the relative contributions of founder and nonfounder BRCA mutations, we established the entire spectrum of BRCA1 and BRCA2 mutations in a large set of breast–ovarian cancer families with origins in all regions of Poland. We sequenced the entire coding regions of the BRCA1 and BRCA2 genes in 100 Polish families with 3 or more cases of breast cancer and in 100 families with cases of both breast and ovarian cancer. A mutation in BRCA1 or BRCA2 was detected in 66% of breast cancer families and in 63% of breast–ovarian cancer families. Of 129 mutations, 122 (94.6%) were in BRCA1 and 7 (5.4%) were in BRCA2. Of the 122 families with BRCA1 mutations, 119 (97.5%) had a recurrent mutation (i.e., one that was seen in at least 2 families). In particular, 111 families (91.0%) carried one of the 3 common founder mutations. The mutation spectrum was not different between families with and without ovarian cancer. These findings suggest that a rapid and inexpensive assay directed at identifying the 3 common founder mutations will have a sensitivity of 86% compared to a much more costly and labor-intensive full-sequence analysis of both genes. This rapid test will facilitate large-scale national epidemiologic and clinical studies of hereditary breast cancer, potentially including studies of chemoprevention. © 2004 Wiley-Liss, Inc. [source]


Recalibration methods to enhance information on prevalence rates from large mental health surveys

INTERNATIONAL JOURNAL OF METHODS IN PSYCHIATRIC RESEARCH, Issue 1 2005
N. A. Taub
Abstract Comparisons between self-report and clinical psychiatric measures have revealed considerable disagreement. It is unsafe to consider these measures as directly equivalent, so it would be valuable to have a reliable recalibration of one measure in terms of the other. We evaluated multiple imputation incorporating a Bayesian approach, and a fully Bayesian method, to recalibrate diagnoses from a self-report survey interview in terms of those from a clinical interview, using data from a two-phase national household survey as a practical application and artificial data for simulation studies. The most important factors in obtaining a precise and accurate 'clinical' prevalence estimate from self-report data were (a) good agreement between the two diagnostic measures and (b) a sufficiently large set of calibration data with diagnoses based on both kinds of interview from the same group of subjects. From the case study, calibration data on 612 subjects were sufficient to yield estimates of the total prevalence of anxiety, depression or neurosis with a precision in the region of ±2%. The limitations of the calibration method demonstrate the need to increase agreement between survey and reference measures by improving lay interviews and their diagnostic algorithms. Copyright © 2005 Whurr Publishers Ltd. [source]
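A minimal frequentist stand-in for the recalibration idea, correcting an apparent self-report prevalence using the sensitivity and specificity of the self-report measure against the clinical interview (as estimated from two-phase calibration data), is the classic Rogan-Gladen estimator. The paper's Bayesian and multiple-imputation machinery goes well beyond this sketch, which is included only to make the logic of the correction concrete.

```python
def recalibrate_prevalence(p_self, sensitivity, specificity):
    """Correct an apparent (self-report) prevalence for misclassification.

    sensitivity/specificity describe how the self-report diagnosis
    performs against the clinical interview, estimated from calibration
    subjects assessed with both instruments.  Classic Rogan-Gladen
    estimator; a frequentist stand-in for the Bayesian recalibration
    described in the paper.
    """
    denom = sensitivity + specificity - 1.0
    if denom <= 0:
        raise ValueError("self-report must beat chance agreement")
    pi = (p_self + specificity - 1.0) / denom
    return min(max(pi, 0.0), 1.0)   # clip to the unit interval
```

For instance, an observed self-report prevalence of 20% with sensitivity 0.7 and specificity 0.9 recalibrates to about 16.7%, illustrating how imperfect specificity inflates the raw rate.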


Focused principal component analysis: a promising approach for confirming findings of exploratory analysis?

INTERNATIONAL JOURNAL OF METHODS IN PSYCHIATRIC RESEARCH, Issue 4 2001
B. Falissard
Abstract In many psychiatric studies, the objective is to describe and understand relationships between a large set of quantitative variables, with a particular interest in the relationship between one variable (often regarded as a response) and the others (often regarded as explanatory). This paper describes a new method for such situations. It is based on principal components analysis (PCA). Like PCA, it summarizes the structure of a correlation matrix in a low-dimensional diagram but, unlike PCA, it makes it possible to represent accurately the correlations of a given variable with the other variables (and even to test graphically the hypothesis that one of these correlations is equal to zero). Two examples from the field of psychiatry research are provided. Copyright © 2001 Whurr Publishers Ltd. [source]
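The quantity such an analysis starts from, the vector of correlations between the designated response and each explanatory variable, can be sketched as follows on synthetic data (the focused-PCA diagram construction itself is not reproduced here):

```python
import numpy as np

# Synthetic illustration of the method's starting point: correlations
# between one response variable and each explanatory variable.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                        # explanatory variables
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200)   # response

corrs = [np.corrcoef(y, X[:, j])[0, 1] for j in range(X.shape[1])]
# corrs[0] is large by construction; corrs[2..4] hover near zero
```

Focused PCA then places each explanatory variable in a diagram at a distance from the response that encodes these correlations exactly, so they can be read off without the distortion of an ordinary PCA projection.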


A tree-based approach to matchmaking algorithms for resource discovery

INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 5 2008
Md. Rafiqul Islam
One of the essential operations in distributed computing is resource discovery. A resource discovery service provides mechanisms to identify the set of resources capable of satisfying the requirements of a job from a large collection of resources. The matchmaking framework provides a reasonable solution to resource management in a distributed environment; it is composed of four important components: the classified advertisement (classad), the matchmaker protocol, the matchmaking algorithm and the claiming protocol. Most of the time required to find a resource depends on the performance of the matchmaking algorithm. A distributed environment comprises a large set of heterogeneous resources that is constantly changing, and matchmaking algorithms must accommodate this highly dynamic environment. In this paper we propose a fast and efficient searching method for matchmaking algorithms that also deals with resource heterogeneity. The proposed approach reduces the searching time from the cubic function of the algorithm of R. Raman, M. Livny and M. Solomon to a linear function. We briefly discuss the working principles of the method and compare the experimental results of the proposed matchmaking algorithm with those of the existing algorithm. Copyright © 2008 John Wiley & Sons, Ltd. [source]
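The classad matching step can be sketched in miniature as a predicate evaluated against advertised resource attributes; the attribute names below are illustrative only, not drawn from the paper:

```python
# Toy classad-style matchmaking: resources advertise attributes, and a
# job's requirement is a predicate over them. Attribute names invented.
resources = [
    {"name": "node1", "memory_mb": 4096, "os": "linux"},
    {"name": "node2", "memory_mb": 1024, "os": "linux"},
    {"name": "node3", "memory_mb": 8192, "os": "windows"},
]

def match(resources, requirement):
    """Return the resources whose advertised attributes satisfy the job."""
    return [r for r in resources if requirement(r)]

job_req = lambda r: r["memory_mb"] >= 2048 and r["os"] == "linux"
print([r["name"] for r in match(resources, job_req)])  # ['node1']
```

The linear scan shown here is where a tree-based index pays off: organizing the advertisements by attribute lets the matchmaker prune candidates instead of testing every resource.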


Soft Coulomb hole method applied to molecules

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 5 2007
J. Ortega-Castro
Abstract The soft Coulomb hole method introduces a perturbation operator, defined by −e^(−ωr12²)/r12, to take electron correlation effects into account, where ω represents the width of the Coulomb hole. A new parametrization for the soft Coulomb hole operator is presented with the purpose of obtaining better molecular geometries than those resulting from Hartree–Fock calculations, as well as correlation energies. The 12 parameters included in ω were determined for a reference set of 12 molecules and applied to a large set of molecules (38 homo- and heteronuclear diatomic molecules, and 37 small and medium-size molecules). For these systems, the optimized geometries were compared with experimental values; correlation energies were compared with results of the MP2, B3LYP and Gaussian-3 approaches. On average, molecular geometries are better than the Hartree–Fock values, and correlation energies yield results halfway between MP2 and B3LYP. © 2006 Wiley Periodicals, Inc. Int J Quantum Chem, 2007 [source]


Model density approach to the Kohn–Sham problem: Efficient extension of the density fitting technique

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 5 2005
Uwe Birkenheuer
Abstract We present a novel procedure for treating the exchange-correlation contributions in the Kohn–Sham procedure. The approach proposed is fully variational and closely related to the so-called "fitting functions" method for the Coulomb Hartree problem; in fact, the method consistently uses this auxiliary representation of the electron density to determine the exchange-correlation contributions. The exchange-correlation potential and its matrix elements in a basis set of localized (atomic) orbitals can be evaluated by reusing the three-center Coulomb integrals involving fitting functions, while the computational cost of the remaining numerical integration is significantly reduced and scales only linearly with the size of the auxiliary basis. We tested the approach extensively for a large set of atoms and small molecules as well as for transition-metal carbonyls and clusters, by comparing total energies, atomization energies, structure parameters, and vibrational frequencies at the local density approximation and generalized gradient approximation levels of theory. The method requires a sufficiently flexible auxiliary basis set. We propose a minimal extension of the conventional auxiliary basis set, which yields essentially the same accuracy for the quantities just mentioned as the standard approach. The new method allows one to achieve substantial savings compared with a fully numerical integration of the exchange-correlation contributions. © 2005 Wiley Periodicals, Inc. Int J Quantum Chem, 2005 [source]
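The density-fitting step that this work extends can be sketched generically: expand the density in the auxiliary basis by solving the Coulomb-metric normal equations J c = v. The matrices below are random stand-ins, not real integrals:

```python
import numpy as np

# Generic density-fitting step (the standard technique this paper builds
# on, not its exchange-correlation extension): solve J c = v for the
# auxiliary-basis expansion coefficients of the density. In practice
# J_ab = (a|b) and v_a = (a|rho) are Coulomb integrals; here they are
# random stand-ins with the right mathematical structure.
rng = np.random.default_rng(1)
n_aux = 6
A = rng.normal(size=(n_aux, n_aux))
J = A @ A.T + n_aux * np.eye(n_aux)  # symmetric positive-definite metric
v = rng.normal(size=n_aux)           # right-hand side, stand-in values

c = np.linalg.solve(J, v)            # fitting coefficients
```

Reusing the same three-center integrals for the exchange-correlation terms, as the abstract describes, is what keeps the extra cost linear in the auxiliary-basis size.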


Performance evaluation of TCP-based applications over DVB-RCS DAMA schemes

INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 3 2009
M. Luglio
Abstract Transmission Control Protocol (TCP) performance over the Digital Video Broadcasting–Return Channel via Satellite (DVB-RCS) standard is greatly affected by the total delay, which is mainly due to two components, propagation delay and access delay. Both are significant because they depend on the long propagation path of the satellite link. The former is intrinsic and due to radio wave propagation over the satellite channel for both TCP packets and acknowledgements. It is regulated by the control loop that governs TCP. The latter is due to the control loop that governs the demand assignment multiple access (DAMA) signalling exchange between satellite terminals and the network control center, necessary to manage return link resources. DAMA is adopted in the DVB-RCS standard to achieve flexible and efficient use of the shared resources. Therefore, performance of TCP over DVB-RCS may degrade due to the interplay of two nested control loops, depending also on both the selected DAMA algorithm and the traffic profile. This paper analyses the impact of a basic DAMA implementation on TCP-based applications over a DVB-RCS link for a large set of study cases. To provide a detailed overview of TCP performance in the DVB-RCS environment, the analysis includes both a theoretical approach and a simulation campaign. Copyright © 2009 John Wiley & Sons, Ltd. [source]
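A back-of-the-envelope delay budget shows why the two components matter. The altitude below is the standard geostationary value; the DAMA access delay is an assumed placeholder, not a figure taken from the paper:

```python
# Rough delay budget for a GEO satellite link. GEO altitude is standard;
# the DAMA access delay is an assumed placeholder value.
C = 3.0e8            # speed of light, m/s
GEO_ALT = 35_786e3   # geostationary altitude, m

one_way = GEO_ALT / C           # one ground-satellite hop, ~0.119 s
rtt_propagation = 4 * one_way   # up and down, on forward and return path
dama_access = 0.5               # assumed DAMA request/grant cycle, s

print(round(rtt_propagation, 3))          # 0.477
print(rtt_propagation + dama_access)      # total delay seen by TCP
```

Even before the DAMA cycle is added, the near half-second round trip already stretches TCP's congestion-control loop; stacking the access loop on top is what the paper evaluates.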


Tight bounds for the identical parallel machine-scheduling problem: Part II

INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 1 2008
Mohamed Haouari
Abstract A companion paper introduces new lower bounds and heuristics for the problem of minimizing makespan on identical parallel machines. The objective of this paper is threefold. First, we describe further enhancements of previously described lower bounds. Second, we propose a new heuristic that requires solving a sequence of 0–1 knapsack problems. Finally, we show that embedding these newly derived bounds in a branch-and-bound procedure yields a very effective exact algorithm. Moreover, this algorithm features a new symmetry-breaking branching strategy. We present the results of computational experiments that were carried out on a large set of instances and that attest to the efficacy of the proposed algorithm. In particular, we report proven optimal solutions for some benchmark problems that have been open for some time. [source]
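Since the heuristic is reported to solve a sequence of 0–1 knapsack problems, a textbook dynamic-programming solver for one such subproblem may help fix ideas (data invented, integer weights assumed):

```python
# Textbook 0-1 knapsack solver, the kind of subproblem the heuristic
# is reported to solve repeatedly. Illustrative data, integer weights.

def knapsack(values, weights, capacity):
    """Maximum total value of a 0-1 knapsack, DP over capacities."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):  # descending: each item once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack([6, 10, 12], [1, 2, 3], 5))  # 22 (items 2 and 3)
```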


Preparing a large data set for analysis: using the Minimum Data Set to study perineal dermatitis

JOURNAL OF ADVANCED NURSING, Issue 4 2005
Kay Savik MS
Aim. The aim of this paper is to present a practical example of preparing a large set of Minimum Data Set records for analysis, operationalizing Minimum Data Set items that defined risk factors for perineal dermatitis, our outcome variable. Background. Research with nursing home elders remains a vital need as 'baby boomers' age. Conducting research in nursing homes is a daunting task. The Minimum Data Set is a standardized instrument used to assess many aspects of a nursing home resident's functional capability. United States Federal Regulations require a Minimum Data Set assessment of all nursing home residents. These large data sets would be a useful resource for research studies, but need to be extensively refined for use in most statistical analyses. Although fairly comprehensive, the Minimum Data Set does not provide direct measures of all clinical outcomes and variables of interest. Method. Perineal dermatitis is not directly measured in the Minimum Data Set. Additional information from prescribers' (physician and nurse) orders was used to identify cases of perineal dermatitis. The following steps were followed to produce Minimum Data Set records appropriate for analysis: (1) identification of a subset of Minimum Data Set records specific to the research, (2) identification of perineal dermatitis cases from the prescribers' orders, (3) merging of the perineal dermatitis cases with the Minimum Data Set data set, (4) identification of Minimum Data Set items used to operationalize the variables in our model of perineal dermatitis, (5) determination of the appropriate way to aggregate individual Minimum Data Set items into composite measures of the variables, (6) refinement of these composites using item analysis and (7) assessment of the distribution of the composite variables and the need for transformations for use in statistical analysis.
Results. Cases of perineal dermatitis were successfully identified and composites were created that operationalized a model of perineal dermatitis. Conclusion. Following these steps resulted in a data set where data analysis could be pursued with confidence. Incorporating other sources of data, such as prescribers' orders, extends the usefulness of the Minimum Data Set for research use. [source]
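Step 3 of the Method, merging externally identified cases into the assessment records, can be sketched as follows; resident identifiers and field names are invented for illustration:

```python
# Hypothetical sketch of step 3: flagging MDS records that match
# perineal-dermatitis cases identified from prescribers' orders.
# Resident identifiers and field names are invented.
mds_records = [
    {"resident_id": "A1", "incontinence": 3},
    {"resident_id": "A2", "incontinence": 0},
]
pd_cases = {"A1"}  # resident ids flagged from prescribers' orders

for rec in mds_records:
    rec["perineal_dermatitis"] = rec["resident_id"] in pd_cases

print([r["perineal_dermatitis"] for r in mds_records])  # [True, False]
```

Steps 4 to 7 then operate on the merged records: selecting items, aggregating them into composites, and checking their distributions before modelling.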


On the relation between temporal variability and persistence time in animal populations

JOURNAL OF ANIMAL ECOLOGY, Issue 6 2003
Pablo Inchausti
Summary 1. The relationship between temporal variability, spectral redness and population persistence for a large number of long-term time series was investigated. Although both intuition and theory suggest that more variability in population abundance would mean greater probability of extinction, previous empirical support for this view has not been conclusive. Possible reasons are the shortage of long-term data and the difficulties of adequately characterizing temporal variability, two issues that are explicitly addressed in this paper. 2. We examined the relationship between population variability and quasi-extinction time (measured as the time required to observe a 90% decline of population abundance) for a large set of data comprising 554 populations of 123 species that were censused for more than 30 years. Two aspects of temporal variability were considered in relation to the quasi-extinction time: a baseline value (the coefficient of variation over a fixed 30-year time scale) and a measure of the rate of increase of population variability over time (the spectral exponent). 3. The results show that the quasi-extinction time was shorter for populations having higher temporal variability and redder dynamics. The relation between persistence time and population variability was compared across taxa, trophic levels, habitat types (aquatic and terrestrial) and body sizes, and compared with theoretical expectations. [source]
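The two variability measures described in point 2 can be sketched generically as below (a coefficient of variation over a fixed 30-year window, and the negated slope of the log-log periodogram); this is a standard implementation, not necessarily the authors' exact estimation protocol:

```python
import numpy as np

# Generic implementations of the two variability measures: CV over a
# fixed 30-year window, and the spectral exponent (negated slope of the
# log-log periodogram). A sketch, not the authors' exact protocol.

def cv_30yr(abundance):
    x = np.asarray(abundance[:30], dtype=float)
    return x.std(ddof=1) / x.mean()

def spectral_exponent(abundance):
    x = np.asarray(abundance, dtype=float)
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(len(x))
    slope, _ = np.polyfit(np.log(freqs[1:]), np.log(power[1:]), 1)
    return -slope  # larger exponent = 'redder' (low-frequency-dominated)
```

As a sanity check, white noise yields an exponent near 0, while a random walk (a strongly reddened series) yields an exponent near 2.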


Advances in powder diffraction pattern indexing: N-TREOR09

JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 5 2009
Angela Altomare
Powder pattern indexing can still be a challenge, despite the great recent advances in theoretical approaches, computer speed and experimental devices. Several plausible unit cells, belonging to different crystal systems, are frequently found by the indexing programs, and recognition of the correct one may not be trivial. The task is, however, of extreme importance: in the case of failure, a lot of effort and computing time may be wasted. The classical figures of merit for estimating unit-cell reliability {i.e. M20 [de Wolff (1968). J. Appl. Cryst. 1, 108–113] and FN [Smith & Snyder (1979). J. Appl. Cryst. 12, 60–65]} sometimes fail. For this reason, a new figure of merit has been introduced in N-TREOR09, the updated version of the indexing package N-TREOR [Altomare, Giacovazzo, Guagliardi, Moliterni, Rizzi & Werner (2000). J. Appl. Cryst. 33, 1180–1186], combining the information supplied by M20 with additional parameters such as the number of unindexed lines, the degree of overlap in the pattern (the so-called number of statistically independent observations), the symmetry deriving from the automatic evaluation of the extinction group, and the agreement between the calculated and observed profiles. The use of the new parameters requires a dramatic modification of the procedures used worldwide: in the approach presented here, the extinction symbol and the unit cell are estimated simultaneously. N-TREOR09 also benefits from an improved indexing procedure in the triclinic system and has been integrated into EXPO2009, the updated version of EXPO2004 [Altomare, Caliandro, Camalli, Cuocci, Giacovazzo, Moliterni & Rizzi (2004). J. Appl. Cryst. 37, 1025–1028]. The application of the new procedure to a large set of test structures is described. [source]
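de Wolff's M20, cited above, is conventionally defined as M20 = Q20/(2·ε̄·N20), where Q20 is the Q value of the 20th observed line, ε̄ the mean |Q_obs − Q_calc| discrepancy, and N20 the number of calculated lines up to Q20; a one-line sketch with illustrative numbers:

```python
# de Wolff's M20 figure of merit as conventionally defined (sketch; line
# selection details omitted). Larger values suggest a more reliable cell.

def m20(q20, eps_bar, n_calc):
    """q20: Q of 20th observed line; eps_bar: mean |Q_obs - Q_calc|;
    n_calc: number of calculated lines up to q20."""
    return q20 / (2.0 * eps_bar * n_calc)

print(m20(0.04, 0.0005, 25))  # ≈ 1.6 for these illustrative numbers
```

The new N-TREOR09 figure of merit augments exactly this kind of score with the unindexed-line count, pattern overlap, extinction-group symmetry and profile agreement.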