Comprehensive Set



Selected Abstracts


A comparison of using Taverna and BPEL in building scientific workflows: the case of caGrid

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2010
Wei Tan
Abstract With the emergence of 'service-oriented science,' the need arises to orchestrate multiple services to facilitate scientific investigation, that is, to create 'science workflows.' We present here our findings in providing a workflow solution for the caGrid service-based grid infrastructure. We choose BPEL and Taverna as candidates and compare their usability across the lifecycle of a scientific workflow, including workflow composition, execution, and result analysis. Our experience shows that BPEL, as an imperative language, offers a comprehensive set of modeling primitives for workflows of all flavors, whereas Taverna offers a dataflow model and a more compact set of primitives that facilitates dataflow modeling and pipelined execution. We hope that this comparison not only helps researchers select a language or tool that meets their specific needs, but also offers some insight into how a workflow language and tool can fulfill the requirements of the scientific community. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Specification, planning, and execution of QoS-aware Grid workflows within the Amadeus environment

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2008
Ivona Brandic
Abstract Commonly, Grid applications are specified at a high level of abstraction based on the workflow paradigm. However, the majority of Grid workflow systems either do not support Quality of Service (QoS) or provide only partial QoS support for certain phases of the workflow lifecycle. In this paper we present Amadeus, a holistic service-oriented environment for QoS-aware Grid workflows. Amadeus considers user requirements, in terms of QoS constraints, during workflow specification, planning, and execution. Within the Amadeus environment, workflows and the associated QoS constraints are specified at a high level using an intuitive graphical notation. A distinguishing feature of our system is its support for a comprehensive set of QoS requirements that considers, in addition to performance and economic aspects, also legal and security aspects. A set of QoS-aware service-oriented components is provided for workflow planning to support automatic constraint-based service negotiation and workflow optimization. To improve the efficiency of workflow planning, we introduce a QoS-aware workflow reduction technique. Furthermore, we present our static and dynamic planning strategies for workflow execution in accordance with user-specified requirements. For each phase of the workflow lifecycle we experimentally evaluate the corresponding Amadeus components. Copyright © 2007 John Wiley & Sons, Ltd. [source]


Novel methods improve prediction of species' distributions from occurrence data

ECOGRAPHY, Issue 2 2006
Jane Elith
Prediction of species' distributions is central to diverse applications in ecology, evolution and conservation science. There is increasing electronic access to vast sets of occurrence records in museums and herbaria, yet little effective guidance on how best to use this information in the context of numerous approaches for modelling distributions. To meet this need, we compared 16 modelling methods over 226 species from 6 regions of the world, creating the most comprehensive set of model comparisons to date. We used presence-only data to fit models, and independent presence-absence data to evaluate the predictions. Along with well-established modelling methods such as generalised additive models, GARP, and BIOCLIM, we explored methods that either have been developed recently or have rarely been applied to modelling species' distributions. These include machine-learning methods and community models, both of which have features that may make them particularly well suited to noisy or sparse information, as is typical of species' occurrence data. Presence-only data were effective for modelling species' distributions for many species and regions. The novel methods consistently outperformed more established methods. The results of our analysis are promising for the use of data from museums and herbaria, especially as methods suited to the noise inherent in such data improve. [source]


Differential impact of state tobacco control policies among race and ethnic groups

ADDICTION, Issue 2007
John A. Tauras
ABSTRACT Aims This paper describes patterns of racial and ethnic cigarette use in the United States and discusses changes in state-level tobacco control policies. Moreover, this paper reviews the existing econometric literature on racial and ethnic smoking and discusses the limitations of that research. Finally, this paper outlines an agenda for future research. Methods Patterns of racial and ethnic smoking and changes in state-level tobacco control policies in the United States were obtained from a variety of sources, including surveys and government and private documents and databases. After an extensive literature search was completed, the existing research was scrutinized and recommendations for much-needed future research were put forth. Findings Despite the fact that certain racial and ethnic minorities bear a disproportionate share of the overall health burden of tobacco, less than a handful of econometric studies have examined the effects of state-level public policies on racial and ethnic smoking. The existing literature finds Hispanics and African Americans to be more responsive to changes in cigarette prices than whites. Only one study examined other state-level tobacco policies. The findings from that study implied that adolescent white male smoking was responsive to changes in smoke-free air laws, while adolescent black smoking was responsive to changes in youth access laws. Conclusions While much has been learned from prior econometric studies on racial and ethnic smoking in the United States, the existing literature suffers from numerous limitations that should be addressed in future research. Additional research that focuses on races and ethnicities other than white, black and Hispanic is warranted. 
Furthermore, future studies should use more recent data, hold sentiment toward tobacco constant and control for a comprehensive set of tobacco policies that take into account not only the presence of the laws, but also the level of restrictiveness of each policy. [source]


EXTINCTION DURING EVOLUTIONARY RADIATIONS: RECONCILING THE FOSSIL RECORD WITH MOLECULAR PHYLOGENIES

EVOLUTION, Issue 12 2009
Tiago B. Quental
Recent application of time-varying birth–death models to molecular phylogenies suggests that a decreasing diversification rate can only be observed if there was a decreasing speciation rate coupled with extremely low or no extinction. However, from a paleontological perspective, zero extinction rates during evolutionary radiations seem unlikely. Here, with a more comprehensive set of computer simulations, we show that substantial extinction can occur without erasing the signal of a decreasing diversification rate in a molecular phylogeny. We also find, in agreement with previous work, that a decrease in diversification rate cannot be observed in a molecular phylogeny with an increasing extinction rate alone. Further, we find that the ability to observe decreasing diversification rates in molecular phylogenies is controlled (in part) by the ratio of the initial speciation rate (Lambda) to the extinction rate (Mu) at equilibrium (the LiMe ratio), and not by their absolute values. Here we show, in principle, how estimates of initial speciation rates may be calculated using both the fossil record and the shape of lineage-through-time plots derived from molecular phylogenies. This is important because the fossil record provides more reliable estimates of equilibrium extinction rates than of initial speciation rates. [source]
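The diversification scenario discussed in this abstract can be illustrated with a toy diversity-dependent birth–death simulation. This is a hedged sketch, not the authors' code: the linear diversity dependence, the carrying-capacity parameter K, and all rate values are illustrative assumptions.

```python
import random

def simulate_radiation(lam0, mu, K, t_max, seed=1):
    """Toy diversity-dependent radiation (Gillespie-style).

    The per-lineage speciation rate slows linearly as standing diversity n
    approaches K, lam(n) = lam0 * (1 - n/K), while the extinction rate mu
    stays constant, so diversification decelerates toward an equilibrium
    where lam == mu. Returns (time, diversity) pairs.
    """
    rng = random.Random(seed)
    n, t = 5, 0.0
    history = [(0.0, n)]
    while t < t_max and n > 0:
        lam = max(lam0 * (1.0 - n / K), 0.0)   # per-lineage speciation rate
        total = (lam + mu) * n                 # total event rate over all lineages
        if total == 0.0:
            break
        t += rng.expovariate(total)            # waiting time to the next event
        n += 1 if rng.random() < lam / (lam + mu) else -1
        history.append((t, n))
    return history

# LiMe-style ratio lam0/mu = 5; expected equilibrium diversity ~ K*(1 - mu/lam0) = 40
history = simulate_radiation(lam0=1.0, mu=0.2, K=50, t_max=50.0)
```

Despite a nonzero extinction rate throughout, diversity in such a run rises quickly and then plateaus, which is the diversification slowdown at issue; recovering that signal from a reconstructed, extant-only phylogeny is the harder question the paper addresses.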


Impact evaluation of India's 'Yeshasvini' community-based health insurance programme

HEALTH ECONOMICS, Issue S1 2010
Aradhna Aggarwal
Abstract Using propensity score matching techniques, the study evaluates the impact of India's Yeshasvini community-based health insurance programme on health-care utilisation, financial protection, treatment outcomes and economic well-being. The programme offers free out-patient diagnosis and lab tests at discounted rates when ill, but, more importantly, it covers highly catastrophic and less discretionary in-patient surgical procedures. For its impact evaluation, 4109 randomly selected households in villages in rural Karnataka, an Indian state, were interviewed using a structured questionnaire. A comprehensive set of indicators was developed and the quality of matching was tested. Generally, the programme is found to have increased utilisation of health-care services, reduced out-of-pocket spending, and ensured better health and economic outcomes. More specifically, however, these effects vary across socio-economic groups and medical episodes. The programme operates by bringing the direct price of health-care down but the extent to which this effectively occurs across medical episodes is an empirical issue. Further, the effects are more pronounced for the better-off households. The article demonstrates that community insurance presents a workable model for providing high-end services in resource-poor settings through an emphasis on accountability and local management. Copyright © 2010 John Wiley & Sons, Ltd. [source]
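The propensity score matching used in this evaluation can be sketched in a few lines on synthetic data. This is a hedged illustration, not the study's estimator: the propensity is taken as known here (in practice it is estimated, e.g. by logistic regression on household characteristics), and the variable names and effect sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
income = rng.normal(0.0, 1.0, n)                 # confounder: affects enrolment and outcome
p_enroll = 1.0 / (1.0 + np.exp(-income))         # richer households enrol more often
enrolled = (rng.random(n) < p_enroll).astype(int)
outcome = 2.0 * enrolled + 1.5 * income + rng.normal(0.0, 1.0, n)  # true effect = 2.0

# naive comparison mixes the programme effect with selection on income
naive = outcome[enrolled == 1].mean() - outcome[enrolled == 0].mean()

# match each enrolled household to the non-enrolled one with the closest score
treated = np.flatnonzero(enrolled == 1)
controls = np.flatnonzero(enrolled == 0)
nearest = controls[np.abs(p_enroll[treated, None] - p_enroll[None, controls]).argmin(axis=1)]
att = (outcome[treated] - outcome[nearest]).mean()  # average effect on the treated
```

The naive enrolled-versus-not difference overstates the effect because better-off households both enrol more and fare better anyway; the matched comparison recovers something close to the true effect, which is the logic of the matching step in the evaluation above.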


The impact of quality on the demand for outpatient services in Cyprus

HEALTH ECONOMICS, Issue 12 2004
Kara Hanson
Abstract Health policy reforms in a number of countries seek to improve provider quality by sharpening the incentives providers face, for example by exposing them to greater competition. For this to succeed, patients must be responsive to quality in their choice of provider. This paper uses data from Cyprus to estimate the effect of quality on patients' choice between public and private outpatient care. It improves on the existing literature by using a more comprehensive set of quality attributes, which allows the dimensions of quality with the largest effect on patients' choice of provider to be identified. We also introduce an innovative way of measuring patients' perceptions of quality in a household survey. We find that patients' choice of provider is sensitive to quality, and that interpersonal quality is more important than either technical quality or system-related factors. Copyright © 2004 John Wiley & Sons, Ltd. [source]


Explaining employee turnover in an Asian context

HUMAN RESOURCE MANAGEMENT JOURNAL, Issue 1 2001
Naresh Khatri
Employee turnover is giving sleepless nights to HR managers in many countries in Asia. A widely held belief in these countries is that employees have developed 'bad' attitudes due to the labour shortage. Employees are believed to job-hop for no reason, or even for fun. Unfortunately, despite employee turnover being such a serious problem in Asia, there is a dearth of studies investigating it; in particular, studies using a comprehensive set of variables are rare. This study examines three sets of antecedents of turnover intention in companies in Singapore: demographic, controllable, and uncontrollable. Singapore companies provide an appropriate setting as their turnover rates are among the highest in Asia. Findings of the study suggest that organisational commitment, procedural justice and a job-hopping attitude were the three main factors associated with turnover intention in Singapore companies. [source]


The predictive value of different infant attachment measures for socioemotional development at age 5 years

INFANT MENTAL HEALTH JOURNAL, Issue 4 2009
Sanny Smeekens
The predictive value of different infant attachment measures was examined in a community-based sample of 111 healthy children (59 boys, 52 girls). Two procedures to assess infant attachment, the Attachment Q-Set (AQS, applied over a relatively short observation period) and a shortened version of the Strange Situation Procedure (SSSP), were applied to the children at age 15 months and related to a comprehensive set of indicators of the children's socioemotional development at age 5 years. Three attachment measures were used as predictors: AQS security, SSSP security, and SSSP attachment disorganization. AQS security and SSSP security jointly predicted the security of the children's attachment representation at age 5. Apart from that, SSSP attachment disorganization was a better predictor of the children's later socioemotional development than were the other two early attachment measures. First, attachment disorganization was the only attachment measure to predict the children's later ego-resiliency, school adjustment, and dissociation. Second, as for the socioemotional measures at age 5 that also were related to AQS or SSSP security (i.e., peer social competence and externalizing problems), the attachment security measures did not explain any extra variance beyond what was explained by attachment disorganization. [source]


F-bar-based linear triangles and tetrahedra for finite strain analysis of nearly incompressible solids. Part I: formulation and benchmarking

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2005
Abstract This paper proposes a new technique which allows the use of simplex finite elements (linear triangles in 2D and linear tetrahedra in 3D) in the large strain analysis of nearly incompressible solids. The new technique extends the F-bar method proposed by de Souza Neto et al. (Int. J. Solids and Struct. 1996; 33: 3277–3296) and is conceptually very simple: it relies on the enforcement of (near-) incompressibility over a patch of simplex elements (rather than the point-wise enforcement of conventional displacement-based finite elements). Within the framework of the F-bar method, this is achieved by assuming, for each element of a mesh, a modified (F-bar) deformation gradient whose volumetric component is defined as the volume change ratio of a pre-defined patch of elements. The resulting constraint relaxation effectively overcomes volumetric locking and allows the successful use of simplex elements under finite strain near-incompressibility. As with the original F-bar procedure, the present methodology preserves the displacement-based structure of the finite element equations as well as the strain-driven format of standard algorithms for numerical integration of path-dependent constitutive equations, and can be used regardless of the constitutive model adopted. The new elements are implemented within an implicit quasi-static environment. In this context, a closed form expression for the exact tangent stiffness of the new elements is derived. This allows the use of the full Newton–Raphson scheme for equilibrium iterations. The performance of the proposed elements is assessed by means of a comprehensive set of benchmarking two- and three-dimensional numerical examples. Copyright © 2005 John Wiley & Sons, Ltd. [source]
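The core of the F-bar modification described above is simple to state: the volumetric part of each element's deformation gradient is replaced by the volume-change ratio of its patch. A minimal numerical sketch, assuming the standard 3D form of the method (the sample matrix entries are arbitrary):

```python
import numpy as np

def f_bar(F, J_patch):
    """Return the modified deformation gradient F_bar = (J_patch / det F)**(1/3) * F.

    The deviatoric (isochoric) part of F is untouched; only the volumetric
    component is replaced by the patch volume-change ratio J_patch, so
    det(F_bar) == J_patch by construction.
    """
    J = np.linalg.det(F)
    return (J_patch / J) ** (1.0 / 3.0) * F

# element deformation gradient carrying a little spurious volume change
F = np.array([[1.10, 0.05, 0.00],
              [0.00, 0.95, 0.02],
              [0.00, 0.00, 0.98]])
F_mod = f_bar(F, J_patch=1.0)   # enforce (near-)incompressibility over the patch
```

After the modification, det(F_mod) equals the prescribed patch ratio exactly, while the unimodular (shape-changing) part of F is preserved; this is the constraint relaxation that relieves volumetric locking in the simplex elements.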


A high-temperature chemical kinetic model for primary reference fuels

INTERNATIONAL JOURNAL OF CHEMICAL KINETICS, Issue 7 2007
Marcos Chaos
A chemical kinetic mechanism has been developed to describe the high-temperature oxidation and pyrolysis of n-heptane, iso-octane, and their mixtures. An approach previously developed by this laboratory was used here to partially reduce the mechanism while maintaining a desired level of detailed reaction information. The relevant mechanism involves 107 species undergoing 723 reactions and has been validated against an extensive set of experimental data gathered from the literature that includes shock tube ignition delay measurements, premixed laminar-burning velocities, variable pressure flow reactor, and jet-stirred reactor species profiles. The modeled experiments treat dynamic systems with pressures up to 15 atm, temperatures above 950 K, and equivalence ratios less than approximately 2.5. Given the stringent and comprehensive set of experimental conditions against which the model is tested, remarkably good agreement is obtained between experimental and model results. © 2007 Wiley Periodicals, Inc. Int J Chem Kinet 39: 399–414, 2007 [source]


Precipitation trends over the Russian permafrost-free zone: removing the artifacts of pre-processing

INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 6 2001
Pavel Ya.
Abstract Rain gauge changes, changes in the number of observations per day, and inconsistent corrections to observed precipitation data during the 20th century in the meteorological network of the former Soviet Union make it difficult to address the issue of century time-scale precipitation changes. In this paper, we use daily and sub-daily synoptic data to account for the effects of these changes on the instrumental homogeneity of precipitation measurements over the Russian permafrost-free zone (RPF, the most populous western and central parts of the country). Re-adjustments developed during this assessment allow us (a) to develop a system of scale corrections that remove the inhomogeneity owing to wetting and observation-time changes over most of the former Soviet Union during the past century, and (b) to estimate precipitation trends over the RPF, reconciling previously contradictory results. The trend that emerges is an increase of about 5% per century. This estimate can be further refined once a more comprehensive set of supplementary data (precipitation type and wind) and metadata (information about the exposure of meteorological sites) is employed. Copyright © 2001 Royal Meteorological Society [source]


Theoretical calculations of transition probabilities and oscillator strengths for Ti III and Ti IV

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 2 2009
Tian-Yi Zhang
Abstract Due to the complicated electronic configurations of atoms and ions of the transition metal elements, studies of properties such as transition probabilities and oscillator strengths for these atoms and ions have not been systematic. Because of its presence in a variety of stellar objects and its wide use in astrophysics, titanium has long been of interest to many researchers. In this article, within the Weakest Bound Electron Potential Model (WBEPM) theory, a comprehensive set of calculations of transition probabilities and oscillator strengths for Ti III and Ti IV is performed. Many of our results have no previous experimental or theoretical counterparts, so these predictive results could be of value to workers in this field. © 2008 Wiley Periodicals, Inc. Int J Quantum Chem, 2009 [source]


Nonlinear Indices of Heart Rate Variability in Chronic Heart Failure Patients: Redundancy and Comparative Clinical Value

JOURNAL OF CARDIOVASCULAR ELECTROPHYSIOLOGY, Issue 4 2007
ROBERTO MAESTRI M.S.
Aims: We aimed to assess the mutual interrelationships and to compare the prognostic value of a comprehensive set of nonlinear indices of heart rate variability (HRV) in a population of chronic heart failure (CHF) patients. Methods and Results: Twenty nonlinear HRV indices, representative of the symbolic dynamics, entropy, fractality-multifractality, predictability, empirical mode decomposition, and Poincaré plot families, were computed from 24-hour Holter recordings in 200 stable CHF patients in sinus rhythm (median age [interquartile range]: 54 [47–58] years, LVEF: 23 [19–28]%, NYHA class II–III: 88%). The end point for survival analysis (Cox model) was cardiac death or urgent transplantation. Homogeneous variables were grouped by cluster analysis, and in each cluster redundant variables were discarded. A prognostic model including only known clinical and functional risk factors was built, and the ability of each selected HRV variable to add prognostic information to this model was assessed. Bootstrap resampling was used to test the models' stability. Four nonlinear variables showed a correlation >0.90 with classical linear ones and were discarded. Correlations >0.80 were found between several nonlinear variables. Twelve clusters were obtained, and from each cluster a candidate predictor was selected. Only two variables (from the empirical mode decomposition and symbolic dynamics families) added prognostic information to the clinical model. Conclusion: This exploratory study provides evidence that, despite some redundancies in the informative content of nonlinear indices and strong differences in their prognostic power, quantification of the nonlinear properties of HRV provides independent information for risk stratification of CHF patients. [source]
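As one concrete example of the index families named above, the two standard descriptors of the Poincaré plot can be computed directly from an RR-interval series. A hedged sketch using the usual ellipse-fitting identities; the synthetic series below stands in for a Holter recording and is not patient data.

```python
import numpy as np

def poincare_sd(rr):
    """SD1/SD2 descriptors of the Poincaré plot of successive RR intervals (ms).

    SD1 measures dispersion perpendicular to the identity line
    (short-term, beat-to-beat variability); SD2 measures dispersion
    along it (longer-term variability).
    """
    rr = np.asarray(rr, dtype=float)
    d = np.diff(rr)
    sd1 = np.sqrt(np.var(d, ddof=1) / 2.0)
    sd2 = np.sqrt(max(2.0 * np.var(rr, ddof=1) - sd1 ** 2, 0.0))
    return sd1, sd2

# synthetic RR series: slow oscillation (respiratory-scale) plus small beat noise
rng = np.random.default_rng(0)
rr = 800.0 + 40.0 * np.sin(np.arange(600) / 25.0) + rng.normal(0.0, 3.0, 600)
sd1, sd2 = poincare_sd(rr)
```

For this smoothly varying series SD1 is much smaller than SD2; a recording with erratic beat-to-beat dynamics would show SD1 approaching SD2. Indices like these are the raw material that the redundancy and clustering analysis in the study compares against the other nonlinear families.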


Dissecting Damages: An Empirical Exploration of Sexual Harassment Awards

JOURNAL OF EMPIRICAL LEGAL STUDIES, Issue 1 2006
Catherine M. Sharkey
My empirical study first replicates and then extends a prior preliminary empirical study by Cass Sunstein and Judy Shih of sexual harassment damages awards. It covers a comprehensive set of 232 cases in which plaintiffs won some positive amount of compensatory damages from state and federal, trial and appellate court decisions from 1982 to 2004 (published either in official reporters or solely on Westlaw). Contrary to Sunstein and Shih's finding, my analysis of these data reveals a consistent, and statistically significant, positive relationship between punitive and compensatory damages (at least in cases where punitive damages are awarded). My new empirical study then employs dependent variables that, in my view, are more theoretically and statistically sound than those employed by Sunstein and Shih and others who have focused exclusively on the relationship between punitive and compensatory damages: total combined damages (i.e., all compensatory and punitive damages), and what I term "outrage" damages, or combined noneconomic compensatory and punitive damages. My empirical results, using these new dependent variables, essentially confirm Sunstein and Shih's conclusions regarding the irrelevance of variables pertaining to the nature and severity of harassment. What my study reveals as crucial predictive factors, by contrast, are factors pertaining to damages limitations. My study highlights that these factors,including the effect of the 1991 Civil Rights Act, and whether plaintiffs append state civil rights and tort claims to their Title VII claims,are critical to a fuller understanding of damages determinations in sexual harassment cases. [source]


A novel search framework for multi-stage process scheduling with tight due dates

AICHE JOURNAL, Issue 8 2010
Yaohua He
Abstract This article improves the original genetic algorithm developed by He and Hui (Chem Eng Sci. 2007; 62:1504–1527) and proposes a novel global search framework (GSF) for large-size multi-stage process scheduling problems. This work first constructs a comprehensive set of position selection rules according to the impact factor analysis presented by He and Hui (in this publication in 2007), and then selects suitable rules for schedule synthesis. In coping with infeasibility emerging during the search, a penalty function is adopted to force the algorithm toward feasible solutions. Large-size problems with tight due dates are challenging for current solution techniques. Inspired by the gradient used in numerical analysis, we treat the deviation existing among the computational tests of the algorithm as an evolutionary gradient. Based on this concept, a GSF is laid out to fully utilize the search ability of the current algorithm. Numerical experiments indicate that the proposed search framework solves such problems with satisfactory solutions. © 2010 American Institute of Chemical Engineers AIChE J, 2010 [source]


Four "lessons learned" while implementing a multi-site caries prevention trial

JOURNAL OF PUBLIC HEALTH DENTISTRY, Issue 3 2010
James D. Bader DDS
Abstract As the number of dental-related randomized clinical trials (RCTs) increases, there is a need for literature to help investigators inexperienced in conducting RCTs to design and implement studies. This commentary describes four "lessons learned," or considerations important in the planning and initial implementation of RCTs in dentistry, that, to our knowledge, have not been discussed in the general dental literature describing trial techniques. These considerations are: a) preparing or securing a thorough systematic review; b) developing a comprehensive set of study documents; c) designing and testing multiple recruitment strategies; and d) employing a run-in period prior to enrollment. Attention to these considerations in the planning phases of a dental RCT can help ensure that the trial is clinically relevant while also maximizing the likelihood that its implementation will be successful. [source]


The unrealized potential of everyday technology as a context for learning

JOURNAL OF RESEARCH IN SCIENCE TEACHING, Issue 7 2001
Gary Benenson
This four-part article argues that technology education should play a far more substantial role in the schools. In the first section the article broadly defines the term technology to include the artifacts of everyday life as well as environments and systems. Second is a description of the City Technology Curriculum Guides project, of which most of the thinking in this article is a product. The third section presents a comprehensive set of goals for elementary technology education, using classroom examples from City Technology. Many of these goals coincide with the goals of other school subjects, including math, science, English language arts and social studies. The concluding section suggests a broad role for technology education in providing a context for learning in these areas. © 2001 John Wiley & Sons, Inc. J Res Sci Teach 38: 730–745, 2001 [source]


LOW-INCOME HOMEOWNERSHIP: DOES IT NECESSARILY MEAN SACRIFICING NEIGHBORHOOD QUALITY TO BUY A HOME?

JOURNAL OF URBAN AFFAIRS, Issue 2 2010
ANNA M. SANTIAGO
ABSTRACT: Questions have been raised about the wisdom of low-income homeownership policies for many reasons. One potential reason to be skeptical: low-income homebuyers may be constrained to purchase homes in disadvantaged neighborhoods. This is a potential problem because home purchases in such neighborhoods: (1) may limit appreciation; (2) may reduce quality of life for adults; and (3) may militate against the reputed advantages of homeownership for children. Our study examines the neighborhood conditions of a group of 126 low-income homebuyers who purchased their first home with assistance from the Home Ownership Program (HOP) operated by the Denver Housing Authority. Our approach is distinguished by its use of a comprehensive set of objective and subjective indicators measuring the quality of pre-move and post-move neighborhoods. Do low-income homebuyers sacrifice neighborhood quality to buy their homes? Our results suggest that the answer to this question is more complex than it might at first appear. On the one hand, HOP homebuyers purchased in a wide variety of city and suburban neighborhoods. Nonetheless, a variety of neighborhood quality indicators suggest that these neighborhoods, on average, were indeed inferior to those of Denver homeowners overall and to those in the same ethnic group. However, our analyses also revealed that their post-move neighborhoods were superior to the ones they lived in prior to homeownership. Moreover, very few HOP destination neighborhoods evinced severe physical, environmental, infrastructural, or socioeconomic problems, as measured by a wide variety of objective indicators or by the homebuyers' own perceptions. Indeed, only 10% of HOP homebuyers perceived that their new neighborhoods were worse than their prior ones, and only 8% held pessimistic expectations about their new neighborhoods' quality of life. 
Finally, we found that Black homebuyers fared less well than their Latino counterparts, on average, in both objective and subjective measures. [source]


Is Board Size an Independent Corporate Governance Mechanism?

KYKLOS INTERNATIONAL REVIEW OF SOCIAL SCIENCES, Issue 3 2004
Stefan Beiner
SUMMARY Using a simultaneous equations framework with a comprehensive set of publicly listed Swiss companies, our findings suggest that the size of the board of directors is an independent corporate governance mechanism. This implies that any potential relationship between board size and firm valuation is indeed causal. However, in contrast to previous studies, we do not uncover a significant relationship between board size and firm valuation, which can be interpreted as support for the hypothesis of the existence of an optimal board size. On average, firms choose the number of board members optimally. This indicates that cross-sectional variations in board size to a large extent reflect differences in firms' underlying environments, and not mistaken choices. [source]


Multifunctional Magnetoplasmonic Nanoparticle Assemblies for Cancer Therapy and Diagnostics (Theranostics),

MACROMOLECULAR RAPID COMMUNICATIONS, Issue 2 2010
Wei Chen
Abstract In this work, we describe the preparation and biomedical functionalities of complex nanoparticle assemblies with magnetoplasmonic properties suitable for simultaneous cancer therapy and diagnostics (theranostics). Most commonly, magnetoplasmonic nanostructures are made by careful adaptation of metal reduction protocols, which is both tedious and restrictive. Here we apply the strategy of nanoscale assembly to prepare such systems from individual building blocks. The prepared superstructures are based on magnetic Fe3O4 nanoparticles encapsulated in a silica shell, representing the magnetic module. The cores are surrounded in a corona-like fashion by gold nanoparticles representing the plasmonic module. As an additional functionality, they were also coated with poly(ethylene glycol) chains as a cloaking agent to extend blood circulation time. The preparation is exceptionally simple and allows one to vary the contribution of each function. Both modules can carry drugs and, in this study, they were loaded with the potential anticancer drug curcumin. A comprehensive set of microscopy, spectroscopy, and biochemical methods was applied to characterize both the imaging and therapeutic functions of the nanoparticle assemblies against leukemia HL-60 cells. High-contrast magnetic resonance images and high apoptosis rates demonstrate the success of the assembly approach for the preparation of magnetoplasmonic nanoparticles. This technology allows one to easily "dial in" functionalities in the clinical setting for personalized theranostic regimens. [source]


Reliable assessment of high temperature oxidation resistance by the development of a comprehensive code of practice for thermocycling oxidation testing – European COTEST project

MATERIALS AND CORROSION/WERKSTOFFE UND KORROSION, Issue 1 2006
M. Schütze
Abstract The cyclic oxidation test is the tool most often used in industry to characterise the high-temperature oxidation/corrosion resistance of technical materials in the laboratory. In the past, however, there has been a lack of intercomparability of data from different laboratories, and sometimes even from different test runs in the same laboratory, since no general guidelines or standards existed for this type of test. Aware of this situation, the European COTEST research project was started with 23 participants from 11 countries, including representatives from industry, universities, private institutes and national research labs. The present paper reports the outcome of this project after three years. The project consisted of 8 work packages, including a literature search on the state of the art at the start of the work, experimental investigations supported by a statistical approach to quantify the impact of the different test parameters on the test results, a validation testing phase and the development of a comprehensive set of guidelines. The latter is available on the internet and serves as a basis for a future ISO standard for this type of test. [source]


Development of an oligonucleotide microarray method for Salmonella serotyping

MICROBIAL BIOTECHNOLOGY, Issue 6 2008
B. Tankouo-Sandjong
Summary Adequate identification of Salmonella enterica serovars is a prerequisite for any epidemiological investigation. This is traditionally achieved via a combination of biochemical and serological typing. However, primary strain isolation and traditional serotyping are time-consuming, and faster methods would be desirable. A microarray, based on two housekeeping and two virulence marker genes (atpD, gyrB, fliC and fljB), has been developed for the detection and identification of the two species of Salmonella (S. enterica and S. bongori), the five subspecies of S. enterica (II, IIIa, IIIb, IV, VI) and 43 S. enterica ssp. enterica serovars (covering the most prevalent ones in Austria and the UK). A comprehensive set of probes (n = 240), forming 119 probe units, was developed based on the corresponding sequences of 148 Salmonella strains, successfully validated with 57 Salmonella strains and subsequently evaluated with 35 blind samples including isolated serotypes and mixtures of different serotypes. The results demonstrated a strong discriminatory ability of the microarray among Salmonella serovars. The detection threshold was 1 colony-forming unit per 25 g of food sample following overnight (14 h) enrichment. [source]


Quantitative Structure-Activity Relationships of Streptococcus pneumoniae MurD Transition State Analogue Inhibitors

MOLECULAR INFORMATICS, Issue 6 2004
Miha Kotnik
Abstract Quantitative structure-activity relationship (QSAR) studies on a set of Streptococcus pneumoniae MurD transition-state analogue inhibitors were performed using a comprehensive set of molecular descriptors calculated with the CODESSA software. Multiple and best multiple linear regressions were applied to generate models for predicting their inhibitory activity. The results (the best model had r2 = 0.8818, s2 = 0.0749, F = 87.04 and r = 0.8488) demonstrate the importance of hydrogen bonding and of a ligand conformation that matches the enzyme active site. [source]
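As a minimal illustration of the multiple-linear-regression step in such QSAR work (not the CODESSA pipeline used in the paper), a model can be fit over a descriptor matrix and scored by the coefficient of determination r². All descriptor values and activities below are synthetic.

```python
import numpy as np

# Synthetic descriptor matrix (rows: compounds, cols: descriptors) and activities.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
true_coef = np.array([1.5, -0.7, 0.3])            # hypothetical underlying weights
y = X @ true_coef + 0.1 * rng.normal(size=20)     # activities with small noise

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Coefficient of determination r^2, as reported for QSAR models.
pred = A @ coef
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))
```

A "best" multiple linear regression would additionally search descriptor subsets for the combination maximizing a fit criterion; the scoring step is the same.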


A comprehensive set of simulations studying the influence of gas expulsion on star cluster evolution

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2007
H. Baumgardt
ABSTRACT We have carried out a large set of N-body simulations studying the effect of residual-gas expulsion on the survival rate and final properties of star clusters. We varied the star formation efficiency (SFE), the gas expulsion time-scale and the strength of the external tidal field, obtaining a three-dimensional grid of models which can be used to predict the evolution of individual star clusters or whole star cluster systems by interpolating between our runs. The complete data from these simulations are made available on the internet. Our simulations show that cluster sizes, bound mass fraction and velocity profile are strongly influenced by the details of the gas expulsion. Although star clusters can survive SFEs as low as 10 per cent if the tidal field is weak and the gas is removed only slowly, our simulations indicate that most star clusters are destroyed or suffer dramatic loss of stars during the gas removal phase. Surviving clusters have typically expanded by a factor of 3 or 4 due to gas removal, implying that star clusters formed more concentrated than we see them today. The maximum expansion factors seen in our runs are around 10. If gas is removed on time-scales shorter than the initial crossing time, star clusters acquire strongly radially anisotropic velocity dispersions outside their half-mass radii. Observed velocity profiles of star clusters can therefore be used as a constraint on the physics of cluster formation. [source]


Next-Generation Architecture to Support Simulation-Based Acquisition

NAVAL ENGINEERS JOURNAL, Issue 4 2000
Dr. B. Chadha
ABSTRACT The ability to make good design decisions early is a significant driver for simulation-based acquisition to effectively lower life-cycle cost and cycle time. Effective simulation-based acquisition processes are achieved by building virtual prototypes that enable one to analyze the impact of decisions. Virtual prototypes need to support a comprehensive set of analyses that will be performed on the product; hence, all aspects of product data and behavior need to be represented. Building virtual prototypes of complex systems designed by a multi-organizational team requires new architectural concepts and redesigned processes. Implementation of these new architectures is complex, and leveraging commercial technologies is necessary to achieve feasible solutions. One must also carefully consider the state of current commercial technologies and frameworks, as well as the organizational and cultural aspects of the organizations that use these systems. This paper describes key architectural principles that one must address for a cost-effective implementation, then discusses key architectural concepts and trade-offs that are necessary to support virtual prototypes of complex systems. [source]


Physical-chemical determinants of coil conformations in globular proteins

PROTEIN SCIENCE, Issue 6 2010
Lauren L. Perskie
Abstract We present a method with the potential to generate a library of coil segments from first principles. Proteins are built from α-helices and/or β-strands interconnected by these coil segments. Here, we investigate the conformational determinants of short coil segments, with particular emphasis on chain turns. Toward this goal, we extracted a comprehensive set of two-, three-, and four-residue turns from X-ray-elucidated proteins and classified them by conformation. A remarkably small number of unique conformers account for most of this experimentally determined set, whereas the remaining members span a large number of rare conformers, many occurring only once in the entire protein database. Factors determining conformation were identified via Metropolis Monte Carlo simulations devised to test the effectiveness of various energy terms. Simulated structures were validated by comparison to experimental counterparts. After filtering rare conformers, we found that 98% of the remaining experimentally determined turn population could be reproduced by applying a hydrogen-bond energy term to an exhaustively generated ensemble of clash-free conformers in which no backbone polar group lacks a hydrogen-bond partner. Further, at least 90% of longer coil segments, ranging from 5 to 20 residues, were found to be structural composites of these shorter primitives. These results are pertinent to protein structure prediction, where approaches can be divided into either empirical or ab initio methods. Empirical methods use database-derived information; ab initio methods rely exclusively on physical-chemical principles. Replacing the database-derived coil library with one generated from first principles would transform any empirically based method into its corresponding ab initio homologue. [source]
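The Metropolis acceptance rule at the heart of such Monte Carlo simulations can be sketched as follows, using a generic one-dimensional toy energy rather than the authors' backbone energy terms:

```python
import math
import random

def metropolis_step(x, energy, rng, step=0.5, beta=1.0):
    """Propose a random move and accept it with the Metropolis criterion."""
    x_new = x + rng.uniform(-step, step)
    dE = energy(x_new) - energy(x)
    if dE <= 0 or rng.random() < math.exp(-beta * dE):
        return x_new  # move accepted
    return x          # move rejected; stay put

# Toy harmonic "energy" E(x) = x^2: sampled states concentrate near the minimum.
energy = lambda x: x * x
rng = random.Random(42)
x, samples = 5.0, []
for _ in range(20000):
    x = metropolis_step(x, energy, rng)
    samples.append(x)

mean_late = sum(samples[10000:]) / 10000.0  # post-burn-in mean, near 0
```

In the actual study the state is a backbone conformation and the energy includes the hydrogen-bond term described above; only the acceptance logic carries over.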


The detection, correlation, and comparison of peptide precursor and product ions from data-independent LC-MS with data-dependent LC-MS/MS

PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 6 2009
Scott J. Geromanos
Abstract The detection, correlation, and comparison of peptide and product ions from a data-independent LC-MS acquisition strategy with data-dependent LC-MS/MS is described. The data-independent mode of acquisition differs from an LC-MS/MS data acquisition in that no ion transmission window is applied with the first mass analyzer prior to collision-induced dissociation. Alternating the energy applied to the collision cell between low and elevated energy, on a scan-to-scan basis, provides accurate-mass precursor and associated product-ion spectra from every ion above the LOD of the mass spectrometer. The method therefore provides a near 100% duty cycle, with an inherent increase in signal intensity, because both precursor and product ion data are collected on all isotopes of every charge state across the entire chromatographic peak width. The correlation of product to precursor ions, after deconvolution, is achieved by using reconstructed retention-time apices and chromatographic peak shapes. Presented are the results from the comparison of a simple four-protein mixture, in the presence and absence of an enzymatically digested protein extract from Escherichia coli. The samples were run in triplicate by both data-dependent analysis (DDA) LC-MS/MS and data-independent, alternate-scanning LC-MS. The precursor and product ions detected and identified from the combined DDA search results of the four-protein mixture were used for comparison to all other data. Each individual set of data-independent LC-MS data provides a more comprehensive set of detected ions than the combined peptide identifications from the DDA LC-MS/MS experiments. In the presence of the complex E. coli background, over 90% of the monoisotopic masses from the combined LC-MS/MS identifications were detected at the appropriate retention time. Moreover, the fragmentation pattern and number of associated elevated-energy product ions in each replicate experiment were found to be very similar to the DDA identifications. In the corresponding individual DDA LC-MS/MS experiments, 43% of the possible detectable peptides of interest were identified. The presented data illustrate that the time-aligned data from data-independent, alternate-scanning LC-MS experiments are highly comparable to the data obtained via DDA. The obtained information can therefore be effectively and correctly deconvolved to correlate product ions with parent precursor ions. The ability to generate precursor-product ion tables from this information, and subsequently identify the correct parent precursor peptide, is illustrated in a companion manuscript. [source]
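The core idea of correlating product ions to precursors by retention-time apex can be sketched with a simple tolerance match. The ion lists and the tolerance value below are synthetic; the actual method also exploits chromatographic peak shapes.

```python
# Each ion: (m/z, retention-time apex in minutes). All values are synthetic.
precursors = [(785.84, 22.31), (523.28, 30.02), (651.33, 41.75)]
products = [(175.12, 22.30), (720.40, 22.33), (244.17, 30.01), (866.45, 41.76)]

RT_TOL = 0.05  # apex-alignment tolerance (min), an assumed value

def correlate(precursors, products, tol=RT_TOL):
    """Assign each product ion to every precursor whose apex lies within tol."""
    pairs = {}
    for p_mz, p_rt in precursors:
        pairs[p_mz] = [f_mz for f_mz, f_rt in products if abs(f_rt - p_rt) <= tol]
    return pairs

result = correlate(precursors, products)
print(result[785.84])  # product ions co-eluting with the 785.84 precursor
```

A real implementation would resolve ambiguous assignments (product ions falling within tolerance of several precursors) using peak-shape similarity, which this sketch omits.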


Morphology analysis for technology roadmapping: application of text mining

R & D MANAGEMENT, Issue 1 2008
Byungun Yoon
The practice of technology roadmapping (TRM) has received much attention from researchers and practitioners as a way to support planning and forecasting in companies and sectors. However, little research has focused on supplying well-organized information for more effective roadmapping or on presenting in-depth configurations of new products or technology. This paper proposes a roadmapping methodology to assist decision-making by applying a systematic approach based on quantitative data. To this end, key information is extracted from documents such as product manuals and patent documents by text mining and then used to identify the morphology of existing products and technology. Morphology analysis (MA) also plays a crucial role in deriving promising opportunities for new product or technology development by matching product and technology morphologies. MA-based TRM can therefore enable the effective exploitation of large quantities of significant information that might otherwise be left untapped, supporting innovation by generating a comprehensive set of detailed product and technology configurations. The proposed MA-based TRM approach can be applied to both incremental and radical innovation, supporting both market pull and technology push. The method is illustrated with a detailed example for mobile phones to demonstrate its practical application. [source]


A Landscape Approach for Ecologically Based Management of Great Basin Shrublands

RESTORATION ECOLOGY, Issue 5 2009
Michael J. Wisdom
Abstract Native shrublands dominate the Great Basin of western North America, and most of these communities are at moderate or high risk of loss from non-native grass invasion and woodland expansion. Landscape-scale management based on differences in the ecological resistance and resilience of shrublands can reduce these risks. We demonstrate this approach with an example that focuses on the maintenance of sagebrush (Artemisia spp.) habitats for the Greater Sage-grouse (Centrocercus urophasianus), a bird species threatened by habitat loss. The approach involves five steps: (1) identify the undesired disturbance processes affecting each shrubland community type; (2) characterize the resistance and resilience of each shrubland type in relation to the undesired processes; (3) assess potential losses of shrublands based on their resistance, resilience, and associated risk; (4) use knowledge from these steps to design a landscape strategy to mitigate the risk of shrubland loss; and (5) implement the strategy with a comprehensive set of active and passive management prescriptions. Results indicate that large areas of the Great Basin currently provide Sage-grouse habitat, but many areas of sagebrush with low resistance and resilience may be lost to continued woodland expansion or invasion by non-native annual grasses. Preventing these losses will require landscape strategies that prioritize management areas based on efficient use of limited resources to maintain the largest shrubland areas over time. Landscape-scale approaches based on concepts of resistance and resilience provide an essential framework for successful management of arid and semiarid shrublands and their native species. [source]