Important Problem (important + problem)
Selected Abstracts

FAST AND ROBUST INCREMENTAL ACTION PREDICTION FOR INTERACTIVE AGENTS
COMPUTATIONAL INTELLIGENCE, Issue 1 2005
Jonathan Dinerstein
The ability for a given agent to adapt on-line to better interact with another agent is a difficult and important problem. This problem becomes even more difficult when the agent to interact with is a human, because humans learn quickly and behave nondeterministically. In this paper, we present a novel method whereby an agent can incrementally learn to predict the actions of another agent (even a human), and thereby can learn to better interact with that agent. We take a case-based approach, where the behavior of the other agent is learned in the form of state–action pairs. We generalize these cases either through continuous k-nearest neighbor, or a modified bounded minimax search. Through our case studies, our technique is empirically shown to require little storage, learn very quickly, and be fast and robust in practice. It can accurately predict actions several steps into the future. Our case studies include interactive virtual environments involving mixtures of synthetic agents and humans, with cooperative and/or competitive relationships. [source]
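The case-based predictor described in the abstract above lends itself to a compact illustration. The sketch below is a minimal, hedged interpretation: it stores observed state–action cases incrementally and generalizes them with a distance-weighted continuous k-nearest-neighbour prediction. The state encoding, the value of k and the inverse-distance weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of incremental, case-based action prediction via
# continuous k-nearest neighbour, in the spirit of the abstract above.
# The state encoding, k, and the weighting scheme are assumptions.
import numpy as np

class CaseBasedPredictor:
    def __init__(self, k=5, eps=1e-9):
        self.k = k
        self.eps = eps
        self.states = []   # observed states of the other agent
        self.actions = []  # actions it took in those states

    def observe(self, state, action):
        """Incremental learning: simply store the new state-action case."""
        self.states.append(np.asarray(state, dtype=float))
        self.actions.append(np.asarray(action, dtype=float))

    def predict(self, state):
        """Distance-weighted average of the k nearest stored actions."""
        if not self.states:
            return None
        S = np.vstack(self.states)
        d = np.linalg.norm(S - np.asarray(state, dtype=float), axis=1)
        idx = np.argsort(d)[: self.k]
        w = 1.0 / (d[idx] + self.eps)          # closer cases weigh more
        A = np.vstack([self.actions[i] for i in idx])
        return (w[:, None] * A).sum(axis=0) / w.sum()

# Usage: observe the other agent, then predict its next action.
pred = CaseBasedPredictor(k=3)
for s, a in [([0.0, 0.0], [1.0]), ([1.0, 0.0], [0.5]), ([0.0, 1.0], [-1.0])]:
    pred.observe(s, a)
print(pred.predict([0.2, 0.1]))
```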
How accurately can parameters from exponential models be estimated? A Bayesian view
CONCEPTS IN MAGNETIC RESONANCE, Issue 2 2005
Abstract Estimating the amplitudes and decay rate constants of exponentially decaying signals is an important problem in NMR. Understanding how the uncertainty in the parameter estimates depends on the data acquisition parameters and on the "true" but unknown values of the exponential signal parameters is an important step in designing experiments and determining the amount and quality of the data that must be gathered to make good parameter estimates. In this article, Bayesian probability theory is applied to this problem. Explicit relationships between the data acquisition parameters and the "true" but unknown exponential signal parameters are derived for the cases of data containing one and two exponential signal components. Because uniform prior probabilities are purposely employed, the results are broadly applicable to experimental parameter estimation. © 2005 Wiley Periodicals, Inc. Concepts Magn Reson Part A 27A: 73–83, 2005 [source]

A definition of and linguistic support for partial quiescence
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 8 2008
Billy Yan-Kit Man
Abstract The global quiescence (GQ) of a distributed computation (or distributed termination detection) is an important problem. Some concurrent programming languages and systems provide GQ detection as a built-in feature so that programmers do not need to write special synchronization code to detect quiescence. This paper introduces partial quiescence (PQ), which generalizes quiescence detection to a specified part of a distributed computation. PQ is useful, for example, when two independent concurrent computations that both rely on GQ need to be combined into a single program. The paper describes how we have designed and implemented a PQ mechanism within an experimental version of the JR concurrent programming language, and have gained experience with several representative applications. Our early results are promising qualitatively and quantitatively. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Cognitive profiles of Chinese adolescents with dyslexia
DYSLEXIA, Issue 1 2010
Kevin K. H. Chung
Abstract The present study sought to identify cognitive abilities that might distinguish Hong Kong Chinese adolescents with and without dyslexia and examined the cognitive profile of dyslexic adolescents in order to better understand this important problem. The performance of 27 Chinese adolescents with childhood diagnoses of dyslexia was compared with 27 adolescents of the same chronological age (CA) and 27 of matched reading level (RL) on measures of literacy and cognitive abilities: Chinese word reading, one-minute reading, reading comprehension, dictation, verbal short-term memory, rapid naming, visual-orthographic knowledge, morphological and phonological awareness. The results indicated that the dyslexic group scored lower than the CA group, but similar to the RL group, especially in the areas of rapid naming, visual-orthographic knowledge and morphological awareness, with over half exhibiting deficits in two or more cognitive areas. Furthermore, the number of cognitive deficits was associated with the degree of reading and spelling impairment. These findings suggest that adolescents with childhood diagnoses of dyslexia have persistent literacy difficulties and seem to have multiple causes for reading difficulties in Chinese. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Degradation of alkanes by bacteria
ENVIRONMENTAL MICROBIOLOGY, Issue 10 2009
Fernando Rojo
Summary Pollution of soil and water environments by crude oil has been, and is still today, an important problem. Crude oil is a complex mixture of thousands of compounds. Among them, alkanes constitute the major fraction. Alkanes are saturated hydrocarbons of different sizes and structures. Although they are chemically very inert, most of them can be efficiently degraded by several microorganisms. This review summarizes current knowledge on how microorganisms degrade alkanes, focusing on the biochemical pathways used and on how the expression of pathway genes is regulated and integrated within cell physiology. [source]

The multi-clump finite mixture distribution and model selection
ENVIRONMETRICS, Issue 2 2010
Sudhir R. Paul
Abstract In practical data analysis, often an important problem is to determine the number of clumps in discrete data in the form of proportions. This can be done through model selection in a multi-clump finite mixture model. In this paper, we propose bootstrap likelihood ratio tests to test the fit of a multinomial model against the single-clump finite mixture distribution and to determine the number of clumps in the data, that is, to select a model with an appropriate number of clumps. Shortcomings of some traditional large-sample procedures are also shown. Three datasets are analyzed. Copyright © 2009 John Wiley & Sons, Ltd. [source]
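The bootstrap likelihood ratio test described in the mixture-model abstract directly above can be sketched generically: fit the g-clump and (g+1)-clump models, record the observed likelihood ratio statistic, simulate datasets from the fitted g-clump model, and compare. The sketch below uses a one-dimensional Gaussian mixture purely as a stand-in for the paper's multi-clump model for proportions; the model family, the number of bootstrap replicates and the demo data are assumptions for illustration only.

```python
# Parametric-bootstrap likelihood ratio test for the number of mixture
# components. A 1-D Gaussian mixture stands in for the paper's model
# family purely to show the bootstrap logic.
import numpy as np
from sklearn.mixture import GaussianMixture

def loglik(x2d, g):
    gm = GaussianMixture(n_components=g, n_init=5, random_state=0).fit(x2d)
    return gm.score(x2d) * len(x2d), gm          # total log-likelihood, fitted model

def lrt_stat(x, g):
    x2d = np.asarray(x, dtype=float).reshape(-1, 1)
    ll_g, fitted = loglik(x2d, g)
    ll_g1, _ = loglik(x2d, g + 1)
    return 2.0 * (ll_g1 - ll_g), fitted

def bootstrap_pvalue(x, g, B=99):
    obs, null_fit = lrt_stat(x, g)               # observed LRT and fitted g-component null
    exceed = 0
    for _ in range(B):
        xb, _ = null_fit.sample(len(x))          # simulate a dataset under the null
        stat_b, _ = lrt_stat(xb.ravel(), g)
        exceed += stat_b >= obs
    return (exceed + 1) / (B + 1)                # bootstrap p-value

# Keep adding components while the bootstrap test rejects the smaller model.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(4, 1, 150)])
print(bootstrap_pvalue(x, g=1, B=49))
```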
Joint moments in the distal forelimbs of jumping horses during landing
EQUINE VETERINARY JOURNAL, Issue 4 2001
L. S. MEERSHOEK
Summary Tendon injuries are an important problem in athletic horses and are probably caused by excessive loading of the tendons during demanding activities. As a first step towards understanding these injuries, the tendon loading was quantified during jump landings. Kinematics and ground reaction forces were collected from the leading and trailing forelimbs of 6 experienced jumping horses. Joint moments were calculated using inverse dynamic analysis. It was found that the variation of movement and loading patterns was small, both within and between horses. The peak flexor joint moments in the coffin and fetlock joints were larger in the trailing limb (−0.62 and −2.44 Nm/kg bwt, respectively) than in the leading limb (−0.44 and −1.93 Nm/kg bwt, respectively) and exceeded literature values for trot by 82 and 45%. Additionally, there was an extensor coffin joint moment in the first half of the stance phase of the leading limb (peak value 0.26 ± 0.18 Nm/kg bwt). From these results, it was concluded that the loading of the flexor tendons during landing was higher in the trailing than in the leading limb and that there was an unexpected loading of the extensor tendon in the leading limb. [source]

Gaussian inputs: performance limits over non-coherent SISO and MIMO channels
EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 3 2007
Rasika R. Perera
Performance limits of information transfer over a discrete time memoryless Rayleigh fading channel, with neither the receiver nor the transmitter knowing the fading coefficients beyond their statistics, are an important problem in information theory. We derive closed form expressions for the mutual information of single input single output (SISO) and multiple input multiple output (MIMO) Rayleigh fading channels for any antenna number at any signal-to-noise ratio (SNR). Using these expressions, we show that the maximum mutual information of non-coherent Rayleigh fading MIMO channels is achieved with a single transmitter and multiple receivers when the input distribution is Gaussian. We show that the addition of transmit antennas for a fixed number of receivers results in a reduction of mutual information. Furthermore, we argue that the mutual information is bounded by the SNR in both SISO and MIMO systems, showing the sub-optimality of Gaussian signalling in non-coherent Rayleigh fading channels. Copyright © 2006 AEIT [source]

THE PHENOTYPIC VARIANCE WITHIN PLASTIC TRAITS UNDER MIGRATION-MUTATION-SELECTION BALANCE
EVOLUTION, Issue 6 2006
Xu-Sheng Zhang
Abstract How phenotypic variances of quantitative traits are influenced by the heterogeneity in environment is an important problem in evolutionary biology. In this study, both genetic and environmental variances in a plastic trait under migration-mutation-stabilizing selection are investigated. For this, a linear reaction norm is used to approximate the mapping from genotype to phenotype, and a population of clonal inheritance is assumed to live in a habitat consisting of many patches in which environmental conditions vary among patches and generations. The life cycle is assumed to be selection-reproduction-mutation-migration. Analysis shows that phenotypic plasticity is adaptive if correlations between the optimal phenotype and environment have become established in both space and/or time, and it is thus possible to maintain environmental variance (VE) in the plastic trait. Under the special situation of no mutation but maximum migration such that separate patches form an effective single-site habitat, the genotype that maximizes the geometric mean fitness will come to fixation and thus genetic variance (VG) cannot be maintained. With mutation and/or restricted migration, VG can be maintained and it increases with mutation rate but decreases with migration rate, whereas VE is little affected by them. Temporal variation in environmental quality increases VG while its spatial variance decreases VG. Variation in environmental conditions may decrease the environmental variance in the plastic trait. [source]

A Graphene Oxide–Streptavidin Complex for Biorecognition – Towards Affinity Purification
ADVANCED FUNCTIONAL MATERIALS, Issue 17 2010
Zunfeng Liu
Abstract In our postgenomic era, understanding of protein-protein interactions by characterizing the structure of the corresponding protein complex is becoming increasingly important. An important problem is that many protein complexes are only stable for a few minutes. Dissociation will occur when using the typical, time-consuming purification methods such as tandem affinity purification and multiple chromatographic separations. Therefore, there is an urgent need for a quick and efficient protein-complex purification method for 3D structure characterization. The graphene oxide (GO)·streptavidin complex is prepared via a GO·biotin·streptavidin strategy and used for affinity purification. The complex shows a strong biotin recognition capability and an excellent loading capacity. Capturing biotinylated DNA, fluorophores and Au nanoparticles on the GO·streptavidin complexes demonstrates the usefulness of the GO·streptavidin complex as a docking matrix for affinity purification. GO shows a high transparency towards electron beams, making it specifically well suited for direct imaging by electron microscopy. The captured protein complex can be separated via a filtration process or even via on-grid purification and used directly for single-particle analysis via cryo-electron microscopy. Therefore, the purification, sample preparation, and characterization are rolled into one single step. [source]

Evolution of Permanent Deformations (or Memory) in Nafion 117 Membranes with Changes in Temperature, Relative Humidity and Time, and Its Importance in the Development of Medium Temperature PEMFCs
FUEL CELLS, Issue 4 2009
G. Alberti
Abstract An important problem for medium temperature polymer electrolyte fuel cells (MT PEMFCs) operating in the temperature range 90–140 °C is the short lifetime of proton conducting membranes. To shed some light on the empirical annealing treatments used for increasing the membrane durability, a systematic research on the effects of thermal treatments of Nafion 117 membranes was undertaken with the hope that the information obtained could be useful for a better understanding of the real limits for MT PEMFCs. Kinetic experiments showed that, for each couple of T–RH values, the water taken up from the membrane reaches a constant value only after long times of equilibration (~200 h). Taking into account that the enlargements provoked by the water uptake remain as permanent deformations when the samples are cooled, it was found that the evolution of the deformations provoked by changes in temperature and RH can be conveniently estimated at 20 °C by determining the water taken up after equilibration in liquid water. By relating the counter-elastic index of the matrix (nc(m)) to the extent of these deformations, a set of equations was obtained which allowed us to predict their evolution with changes of temperature and relative humidity. A good agreement with experimental values was found. The importance of this discovery for the development of MT PEMFCs is discussed. [source]

Interleukin 18 causes hepatic ischemia/reperfusion injury by suppressing anti-inflammatory cytokine expression in mice
HEPATOLOGY, Issue 3 2004
Dan Takeuchi
Hepatic ischemia/reperfusion injury is a clinically important problem. While the mechanisms of the initial event and subsequent neutrophil-dependent injury are somewhat understood, little is known about the regulation of endogenous hepatoprotective effects on this injury. Interleukin 12 (IL-12) plays a role in the induction of this injury, but involvement of interleukin 18 (IL-18) has not been clarified. Using a murine model of partial hepatic ischemia and subsequent reperfusion, the aim of the current study was to determine whether IL-18 is up-regulated during hepatic ischemia/reperfusion and to determine the role of endogenous IL-18 in the development and regulation of inflammatory hepatic ischemia/reperfusion injury. Hepatic IL-18 expression was up-regulated from 1 to 8 hours after reperfusion. Hepatic ischemia/reperfusion induced nuclear factor-κB (NF-κB) and activator protein 1 (AP-1) activation, as defined by electrophoretic mobility shift assay, and caused significant increases in liver neutrophil recruitment, apoptosis, hepatocellular injury, and liver edema as defined by liver myeloperoxidase content, terminal deoxynucleotidyl transferase-mediated deoxyuridine triphosphate biotin nick end-labeling (TUNEL) staining, serum aminotransferase levels, and liver wet-to-dry weight ratios. In mice treated with neutralizing antibody to IL-18, ischemia/reperfusion-induced increases in CXC chemokine expression, activation of NF-κB and AP-1, and apoptosis were greatly reduced. Furthermore, under blockade of IL-18, anti-inflammatory cytokines such as IL-4 and IL-10 were greatly up-regulated. Signal transducer and activator of transcription 6 (STAT6) was significantly activated under blockade of IL-18. These conditions also caused significant reduction in liver neutrophil sequestration and liver injury. In conclusion, the data suggest that IL-18 is required for facilitating neutrophil-dependent hepatic ischemia/reperfusion injury through suppressing anti-inflammatory cytokine expression. (HEPATOLOGY 2004;39:699–710.) [source]

Stat4 and Stat6 signaling in hepatic ischemia/reperfusion injury in mice: HO-1 dependence of Stat4 disruption-mediated cytoprotection
HEPATOLOGY, Issue 2 2003
Xiu-Da Shen
Ischemia/reperfusion (I/R) injury remains an important problem in clinical organ transplantation. There is growing evidence that T lymphocytes, and activated CD4+ T cells in particular, play a key role in hepatic I/R injury. This study analyzes the role of signal transducer and activator of transcription 4 (Stat4) and Stat6 signaling in liver I/R injury. Using a partial lobar warm ischemia model, groups of wild-type (WT), T cell-deficient, and Stat4- or Stat6-deficient knockout (KO) mice were assessed for the extent/severity of I/R injury. Ninety minutes of warm ischemia followed by 6 hours of reperfusion induced fulminant liver failure in WT and Stat6 KO mice, as assessed by hepatocellular damage (serum alanine aminotransferase [sALT] levels), neutrophil accumulation (myeloperoxidase [MPO] activity) and histology (Suzuki scores). In contrast, T cell deficiency (nu/nu mice) or disruption of Stat4 signaling (Stat4 KO mice) reduced the I/R insult. Unlike adoptive transfer of WT or Stat6-deficient T cells, infusion of Stat4-deficient T cells failed to restore hepatic I/R injury and prevented tumor necrosis factor α (TNF-α) production in nu/nu mice. Diminished TNF-α/Th1-type cytokine messenger RNA (mRNA)/protein elaboration patterns, along with overexpression of heme oxygenase-1 (HO-1), accompanied hepatic cytoprotection in Stat4 KO recipients. In contrast, HO-1 depression restored hepatic injury in otherwise I/R-resistant Stat4 KOs. In conclusion, Stat4 signaling is required for, whereas Stat4 disruption protects against, warm hepatic I/R injury in mice. The cytoprotection rendered by Stat4 disruption remains HO-1-dependent. [source]

Do Australian companies manage earnings to meet simple earnings benchmarks?
ACCOUNTING & FINANCE, Issue 1 2003
David Holland
Measurement error in unexpected accruals is an important problem for empirical earnings management research. Several recent studies avoid this problem by examining the pooled, cross-sectional distribution of reported earnings. Discontinuities in the distribution of reported earnings around key earnings thresholds may indicate the exercise of management discretion (i.e. earnings management). We apply this approach to the detection of earnings management by Australian firms. Our results generally indicate significantly more small earnings increases and small profits than expected and, conversely, considerably fewer small earnings decreases and small losses than expected. These results are much stronger for larger Australian firms. We undertake an exploratory analysis of alternative explanations for our results and find some evidence consistent with management signalling its inside knowledge about the firm's expected future profitability to smooth earnings, as opposed to 'management intent to deceive', as an explanation for our results. [source]

Kinematic modelling of shear band localization using discrete finite elements
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 4 2003
X. Wang
Abstract Modelling shear bands is an important problem in analysing failure of earth structures in soil mechanics. Shear banding is the result of localization of deformation in soil masses. Most finite element schemes are unable to model discrete shear band formation and propagation due to the difficulties in modelling strain and displacement discontinuities. In this paper, a framework to generate shear band elements automatically and continuously is developed. The propagating shear band is modelled using discrete shear band elements by splitting the original finite element mesh. The location or orientation of the shear band is not predetermined in the original finite element mesh. Based on elasto-perfect plasticity with an associated flow rule, empirical bifurcation and location criteria are proposed which make band propagation as realistic as possible. Using the Mohr–Coulomb material model, various results from numerical simulations of biaxial tests and passive earth pressure problems have shown that the proposed framework is able to display actual patterns of shear banding in geomaterials. In the numerical examples, the occurrence of multiple shear bands in the biaxial test and in the passive earth pressure problem is confirmed by field and laboratory observations. The effects of mesh density and mesh alignment on the shear band patterns and limit loads are also investigated. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Comparative study between two numerical methods for oxygen diffusion problem
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 8 2009
Vildan Gülkaç
Abstract Two approximate numerical solutions of the oxygen diffusion problem are defined using a three-time-level Crank–Nicolson equation and Gauss–Seidel iteration for the three-time-level implicit method. Oxygen diffusion in a sickle cell with simultaneous absorption is an important problem and has a wide range of medical applications. The problem is mathematically formulated through two different stages. At the first stage, the stable case having no oxygen transition in the isolated cell is searched, whereas at the second stage the moving boundary problem of oxygen absorbed by the tissues in the cell is searched. The results obtained by the three-time-level implicit method and by Gauss–Seidel iteration for the three-time-level implicit method show good agreement with previous methods (J. Inst. Appl. Math. 1972; 10:19–33; 1974; 13:385–398; 1978; 22:467–477). Copyright © 2008 John Wiley & Sons, Ltd. [source]

Numerical investigation of the effect of inlet condition on self-excited oscillation of wet steam flow in a supersonic turbine cascade
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 12 2009
Wu Xiaoming
Abstract Self-excited oscillation can be induced by the interaction between the condensation process and local transonic conditions in condensing flow, which is an important problem in wet steam turbines. With an Eulerian/Eulerian numerical model, the self-excited oscillation of wet steam flow is investigated in a supersonic turbine cascade. Owing to supercritical heat addition to the subsonic flow in the convergent part of the cascade, the oscillation frequency decreases with increased inlet supercooling. Mass flow rate increases in the oscillating flow due to the greater supersaturation in the condensation process, while the increase will be suppressed with the flow oscillation. With higher inlet supercooling, the condensation process moves upstream and the loss increases. Moreover, some predictions of oscillation effects on outflow angle and aerodynamic force are also presented. Finally, heterogeneous condensation with inlet wetness and periodic inlet conditions, as a result of the interference between stator and rotor, is discussed. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Gender in elderly suicide: analysis of coroners inquests of 200 cases of elderly suicide in Cheshire 1989–2001
INTERNATIONAL JOURNAL OF GERIATRIC PSYCHIATRY, Issue 12 2003
Emad Salib
Abstract Objectives The aim of this study is to review gender differences in elderly suicide in relation to specific social aspects of the suicidal process and health care contact before death. Such information may have practical value in identifying and targeting vulnerable elderly in whom suicide may be potentially preventable. Methods Data were extracted from the records of coroner's inquests into all reported suicides of persons aged 60 and over in Cheshire over a period of 13 years, 1989–2001. The Coroner's office covers the whole county of Cheshire (population 1,000,000). Results Men were less likely to have been known to psychiatric services (Odds Ratio [OR] 0.4, 95% CI 0.2–0.6) and had a less frequently reported history of previous attempted suicide compared to women (OR 0.5, 95% Confidence Interval [CI] 0.2–1). All deceased from ethnic minorities were men, none of whom had been known to psychiatric services. There was no significant difference between women and men in relation to physical or psychiatric morbidity, GP contact prior to suicide, intimation of intent or living alone. Among suicide victims not known to services, a surprisingly high proportion (38% of men and 16% of women) were found to have psychiatric morbidity. Conclusion Suicide is an important problem in the elderly, with gender playing an important part in their social behaviour, but a high proportion of the deceased were not known to local services. Primary Care professionals have an important role to play in reducing elderly suicide, as most contact with the health service in elderly suicide seems to be with GPs. Copyright © 2003 John Wiley & Sons, Ltd. [source]
Wavelet algorithms for deblurring models
INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 3 2004
Michael K. Ng
Abstract Blur removal is an important problem in signal and image processing. In this article, we formulate the deblurring problem within a wavelet framework and design a methodology to find deblurring filters. Using these deblurring filters, we derive an iterative deblurring algorithm and prove its convergence. Simulation results are reported to illustrate the proposed framework and methodology. © 2004 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 14, 113–121, 2004; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20014 [source]
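The abstract above derives its own wavelet-domain deblurring filters; the sketch below only illustrates the general shape of such an iterative scheme, alternating a gradient step on the data-fit term with soft-thresholding of wavelet coefficients (a Landweber/ISTA-style iteration). It is not the authors' algorithm, and the blur kernel, wavelet, threshold and step size are assumptions.

```python
# Generic sketch of an iterative, wavelet-domain deblurring loop.
# Illustrates the idea of the abstract above, not the authors' filters.
import numpy as np
import pywt

def blur(x, h):
    return np.convolve(x, h, mode="same")        # H x (h symmetric, so H^T is approx. H)

def wavelet_shrink(x, wavelet="db4", thresh=0.02):
    coeffs = pywt.wavedec(x, wavelet, level=4)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def deblur(y, h, n_iter=100, step=0.5):
    x = y.copy()
    for _ in range(n_iter):
        x = x + step * blur(y - blur(x, h), h)   # descent step on ||y - Hx||^2
        x = wavelet_shrink(x)                    # regularise in the wavelet domain
    return x

# Demo: blur a piecewise-constant signal, add noise, then deblur.
rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0, 0.2, 0.8], 64)
h = np.ones(9) / 9.0                             # simple moving-average blur
y = blur(truth, h) + 0.01 * rng.standard_normal(truth.size)
x_hat = deblur(y, h)
print(float(np.mean((x_hat - truth) ** 2)))
```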
A stratified first order logic approach for access control
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 9 2004
Salem Benferhat
Abstract Modeling information security policies is an important problem in many domains. This is particularly true in the health care sector, where information systems often manage sensitive and critical data. This article proposes to use nonmonotonic reasoning systems to control access to sensitive data in accordance with a security policy. In the first part of the article, we propose an access control model that overcomes several limitations of existing systems. In particular, it allows us to deal with contexts and to represent the two main kinds of privileges: permissions and prohibitions. This model will then be formally encoded using stratified (or prioritized) first-order knowledge bases. In the second part of the article, we discuss the problem of conflicts due to the joint handling of permissions and prohibitions. We show that approaches proposed for solving conflicts in propositional knowledge bases are not appropriate for handling inconsistent first-order knowledge bases. © 2004 Wiley Periodicals, Inc. Int J Int Syst 19: 817–836, 2004. [source]

The generalizability of the Buss–Perry Aggression Questionnaire
INTERNATIONAL JOURNAL OF METHODS IN PSYCHIATRIC RESEARCH, Issue 3 2007
József Gerevich
Abstract Aggressive and hostile behaviours and anger constitute an important problem across cultures. The Buss–Perry Aggression Questionnaire (AQ), a self-rating scale, was published in 1992 and has quickly become the gold standard for the measurement of aggression. The AQ scale has been validated extensively, but the validation focused on various narrowly selected populations, typically on samples of college students. Individuals who are at risk of displaying aggressive and hostile behaviours, however, may come from a more general population. Therefore, it is important to investigate the scale's properties in such a population. The objective of this study was to examine the factorial structure and the psychometric properties of the AQ scale in a nationally representative sample of the Hungarian adult population. A representative sample of 1200 subjects was selected by a two-step procedure. The dimensionality and factorial composition of the AQ scale were investigated by exploratory and confirmatory factor analyses. Since spurious associations and increased factorial complexity can occur when the analysis fails to consider the inherently categorical nature of the item-level data, this study, in contrast to most previous studies, estimated the correlation matrices subjected to factor analysis using polychoric correlations. The resulting factors were validated via sociodemographic characteristics and psychopathological scales obtained from the respondents. The results showed that, based on the distribution of factor loadings and factor correlations, in the entire nationally representative sample of 1200 adult subjects, three of the four factors of the original factor structure (Physical and Verbal Aggression and Hostility) showed a good replication whereas the fourth factor (Anger) replicated moderately well. Replication further improved when the sample was restricted in age, i.e. when the analysis focused on a sample representing the younger age group, comparable to that used in the original Buss–Perry study. Similar to the Buss–Perry study, and other investigations of the AQ scale, younger age and male gender were robustly related to physical aggression. In addition, the level of verbal aggression differed between the two genders (with higher severity in males), whereas hostility and anger were essentially the same in both genders. In conclusion, the current study, based on a representative sample of the adult population, lends support to the use of the AQ scale in the general population. The authors suggest excluding the two inverse items from the AQ because of the low reliability of these items with regard to their hypothesized constructs. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Very large-scale neighborhood search
INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 4-5 2000
R.K. Ahuja
Abstract Neighborhood search algorithms are often the most effective approaches available for solving partitioning problems, a difficult class of combinatorial optimization problems arising in many application domains including vehicle routing, telecommunications network design, parallel machine scheduling, location theory, and clustering. A critical issue in the design of a neighborhood search algorithm is the choice of the neighborhood structure, that is, the manner in which the neighborhood is defined. Currently, the two-exchange neighborhood is the most widely used neighborhood for solving partitioning problems. The paper describes the cyclic exchange neighborhood, which is a generalization of the two-exchange neighborhood in which a neighbor is obtained by performing a cyclic exchange. The cyclic exchange neighborhood has substantially more neighbors compared to the two-exchange neighborhood. This paper outlines a network optimization based methodology to search the neighborhood efficiently and presents a proof of concept by applying it to the capacitated minimum spanning tree problem, an important problem in telecommunications network design. [source]
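A cyclic exchange, as described in the abstract directly above, moves item i1 from its cluster to the cluster of i2, i2 to the cluster of i3, and so on, with the last item moving back to the first item's cluster, so every cluster on the cycle keeps its size. The sketch below shows how one such move is evaluated and accepted; the full method additionally builds an improvement graph to find profitable cycles, which is not reproduced here, and the squared-distance cost function is an illustrative assumption.

```python
# Sketch of evaluating and applying one cyclic exchange move for a
# partitioning problem. Cost = sum of squared distances to cluster means.
import numpy as np

def cluster_cost(points, members):
    if not members:
        return 0.0
    P = points[members]
    return float(((P - P.mean(axis=0)) ** 2).sum())

def total_cost(points, clusters):
    return sum(cluster_cost(points, m) for m in clusters.values())

def apply_cyclic_exchange(clusters, cycle):
    """cycle = [(item, its_cluster), ...]; item k moves to the cluster of
    item k+1 (cyclically), so every cluster on the cycle keeps its size."""
    new = {c: list(m) for c, m in clusters.items()}
    for (item, src), (_, dst) in zip(cycle, cycle[1:] + cycle[:1]):
        new[src].remove(item)
        new[dst].append(item)
    return new

points = np.array([[0, 0], [0, 1], [5, 5], [5, 6], [0.2, 0.5], [5.2, 5.5]])
clusters = {0: [0, 1, 5], 1: [2, 3, 4]}          # items 4 and 5 are misplaced
move = [(5, 0), (4, 1)]                          # a 2-cycle (an ordinary swap)
candidate = apply_cyclic_exchange(clusters, move)
delta = total_cost(points, candidate) - total_cost(points, clusters)
if delta < 0:                                    # accept improving moves only
    clusters = candidate
print(delta, clusters)
```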
A Chemical Approach Towards Understanding the Mechanism and Reversal of Drug Resistance in Plasmodium falciparum: Is it Viable?
IUBMB LIFE, Issue 4-5 2002
Kelly Chibale
Abstract Genetic and biochemical approaches to studies of drug resistance mechanisms in Plasmodium falciparum have raised controversies and contradictions over the past several years. A different and novel chemical approach to this important problem is desirable at this point in time. Recently, the molecular basis of drug resistance in P. falciparum has been associated with mutations in the resistance genes Chloroquine Resistance Transporter (PfCRT) and the P-glycoprotein homologue (Pgh1). Although not the determinant of chloroquine resistance in P. falciparum, mutations in Pgh1 have important implications for resistance to other antimalarial drugs. Because it is mutations in the aforementioned resistance genes, rather than overexpression, that have been associated with drug resistance in malaria, studies on mechanisms of drug resistance and its reversal by chemosensitisers should benefit from a chemical approach. Target-oriented organic synthesis of chemosensitisers against proteins implicated in drug resistance in malaria should shed light on the mechanism of drug resistance and its reversal in this area. The effect of structurally diverse chemosensitisers should be examined on several putative resistance genes in P. falciparum to deal with antimalarial drug resistance in the broadest sense. Therefore, generating random mutations of these resistance proteins and subsequent screening in search of a specific phenotype, followed by a search for mutations and/or chemosensitisers that affect a specific drug resistance pathway, might be a viable strategy. This diversity-oriented organic synthesis approach should offer the means to simultaneously identify resistance proteins that can serve as targets for therapeutic intervention (therapeutic target validation) and chemosensitisers that modulate the functions of these proteins (chemical target validation). [source]

Effects of Deficit Irrigation and Salinity Stress on Common Bean (Phaseolus Vulgaris L.) and Mungbean (Vigna Radiata (L.) Wilczek) Grown in a Controlled Environment
JOURNAL OF AGRONOMY AND CROP SCIENCE, Issue 4 2010
M. Bourgault
Abstract As water for irrigation purposes becomes increasingly scarce because of climate change and population growth, there is growing interest in regulated deficit irrigation (RDI) as a way to improve efficiency of water usage and farm productivity in arid and semi-arid areas. Salinity is also becoming an important problem in these same regions. Experiments were performed to investigate the effects of RDI and salt stress on two legume crops, common bean (Phaseolus vulgaris L.) and mungbean (Vigna radiata (L.) Wilczek); previous work showed contrasting responses to RDI by these two crops under field conditions. The seed and biomass yields of both crops were reduced as a result of increasing water deficit stress; however, mungbean was able to maintain the same proportion of its biomass in reproductive structures and maintain its harvest index under stress, whereas common bean's decreased. In addition, photosynthesis in mungbean was higher than in common bean and higher at the same levels of transpiration. Finally, salinity stress did not affect the water potential, harvest index or the specific leaf weight of either crop. There were no interactions between salinity and crops or RDI levels, which suggests that the two crops do not differ in their response to salinity stress, and that RDI levels do not modify this response. [source]

A new maximum entropy-based method for deconvolution of spectra with heteroscedastic noise
JOURNAL OF CHEMOMETRICS, Issue 12 2004
Bård Buttingsrud
Abstract Broadening of spectral lines combined with large and heteroscedastic noise contributions constitutes an important problem in analytical chemistry. Reduced interpretability and artefacts in further data analysis make deconvolution methods necessary. A new robust deconvolution method (RHEMEM) based on the principle of maximum entropy is proposed in order to effectively handle the presence of heteroscedastic noise. Other deconvolution methods such as Jansson's method, Fourier self-deconvolution and LOMEP are also studied with respect to their ability to handle heteroscedastic noise. A systematic simulation study is used to compare the performance of the new method with the reference methods. They are evaluated according to reconstruction performance, robustness and the ability to work without manual input. Copyright © 2005 John Wiley & Sons, Ltd. [source]
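The principle behind maximum-entropy deconvolution with heteroscedastic noise, as in the abstract directly above, can be written as a penalized fit: a chi-square term in which each data point is weighted by its own noise variance, plus a negative-entropy penalty, minimized under a positivity constraint. The sketch below shows that formulation with an off-the-shelf optimizer; it illustrates the idea only and is not the RHEMEM algorithm, and the kernel, noise model and regularization weight are assumptions.

```python
# Sketch: variance-weighted chi-square + negative-entropy penalty,
# minimised under positivity. Illustrative only, not RHEMEM.
import numpy as np
from scipy.optimize import minimize

def make_problem(n=200, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    truth = np.exp(-0.5 * ((t - 60) / 4) ** 2) + 0.6 * np.exp(-0.5 * ((t - 120) / 6) ** 2)
    h = np.exp(-0.5 * (np.arange(-15, 16) / 5.0) ** 2); h /= h.sum()
    sigma = 0.01 + 0.05 * truth                  # heteroscedastic: noise grows with signal
    y = np.convolve(truth, h, mode="same") + sigma * rng.standard_normal(n)
    return y, h, sigma, truth

def mem_deconvolve(y, h, sigma, alpha=1e-3):
    H = lambda x: np.convolve(x, h, mode="same")
    Ht = lambda r: np.convolve(r, h[::-1], mode="same")
    def objective(x):
        r = H(x) - y
        chi2 = 0.5 * np.sum((r / sigma) ** 2)    # each point weighted by its own variance
        ent = np.sum(x * np.log(x) - x)          # negative entropy (x > 0)
        grad = Ht(r / sigma ** 2) + alpha * np.log(x)
        return chi2 + alpha * ent, grad
    x0 = np.full_like(y, max(y.mean(), 1e-3))
    res = minimize(objective, x0, jac=True, method="L-BFGS-B",
                   bounds=[(1e-8, None)] * len(y))
    return res.x

y, h, sigma, truth = make_problem()
x_hat = mem_deconvolve(y, h, sigma)
print(float(np.mean((x_hat - truth) ** 2)))
```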
Unified QSAR & network-based computational chemistry approach to antimicrobials
JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 1 2010
Abstract In previous work, we reported a multitarget Quantitative Structure-Activity Relationship (mt-QSAR) model to predict drug activity against different fungal species. This mt-QSAR allowed us to construct a drug–drug multispecies Complex Network (msCN) to investigate drug–drug similarity (González-Díaz and Prado-Prado, J Comput Chem 2008, 29, 656). However, important methodological points remained unclear, such as the following: (1) the accuracy of the methods when applied to other problems; (2) the effect of the distance type used to construct the msCN; (3) how to perform the inverse procedure to study species–species similarity with multidrug resistance CNs (mdrCN); and (4) the implications and necessary steps to perform a substructural Triadic Census Analysis (TCA) of the msCN. To continue the present series with another important problem, we developed here a mt-QSAR model for more than 700 drugs tested in the literature against different parasites (predicting antiparasitic drugs). The data were processed by Linear Discriminant Analysis (LDA) and the model classifies correctly 93.62% (1160 out of 1239 cases) in training. The model validation was carried out by means of external prediction series; the model classified 573 out of 607, that is, 94.4% of cases. Next, we carried out the first comparative study of the topology of six different drug–drug msCNs based on six different distances such as Euclidean, Chebychev, Manhattan, etc. Furthermore, we compared the selected drug–drug msCN and species–species mdrCN with random networks. We also introduced here the inverse methodology to construct species–species msCN based on a mt-QSAR model. Last, we reported the first substructural analysis of drug–drug msCN using the Triadic Census Analysis (TCA) algorithm. © 2009 Wiley Periodicals, Inc. J Comput Chem 2010 [source]

Using grey dynamic modeling and pseudo amino acid composition to predict protein structural classes
JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 12 2008
Xuan Xiao
Abstract Using the pseudo amino acid (PseAA) composition to represent the sample of a protein can incorporate a considerable amount of sequence pattern information so as to improve the prediction quality for its structural or functional classification. However, how to optimally formulate the PseAA composition is an important problem yet to be solved. In this article the grey modeling approach is introduced, which is particularly efficient in coping with complicated systems such as the one consisting of many proteins with different sequence orders and lengths. On the basis of the grey model, four coefficients derived from each of the protein sequences concerned are adopted for its PseAA components. The PseAA composition thus formulated is called the "grey-PseAA" composition, which can catch the essence of a protein sequence and better reflect its overall pattern. In our study we have demonstrated that introduction of the grey-PseAA composition can remarkably enhance the success rates in predicting the protein structural class. It is anticipated that the concept of grey-PseAA composition can also be used to predict many other protein attributes, such as subcellular localization, membrane protein type, enzyme functional class, GPCR type, protease type, among many others. © 2008 Wiley Periodicals, Inc. J Comput Chem 2008 [source]
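The grey-modelling step behind the "grey-PseAA" composition of the abstract directly above can be illustrated with the standard GM(1,1) grey model: encode the sequence as a numerical property series, accumulate it, and fit the two grey coefficients by least squares, using them as sequence-order features alongside the amino acid frequencies. How the paper defines its four coefficients exactly is not reproduced here; the toy property scales, and the use of two of them to obtain four coefficients, are assumptions for illustration only.

```python
# Illustration of grey-model (GM(1,1)) coefficients used as sequence-order
# features alongside amino acid composition. Property scales are placeholders.
import numpy as np

HYDROPHOBICITY = {aa: v for aa, v in zip("ACDEFGHIKLMNPQRSTVWY",
                                          np.linspace(-1.0, 1.0, 20))}
MASS = {aa: v for aa, v in zip("ACDEFGHIKLMNPQRSTVWY",
                                np.linspace(0.0, 1.0, 20))}

def gm11_coefficients(series):
    """Fit GM(1,1): x0[k] + a*z[k] = b, with z the mean of consecutive
    accumulated (cumulative-sum) values; return (a, b) by least squares."""
    x0 = np.asarray(series, dtype=float) + 2.0   # shift to keep the series positive
    x1 = np.cumsum(x0)
    z = 0.5 * (x1[1:] + x1[:-1])
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    return a, b

def grey_pseaa(sequence):
    """20 amino-acid frequencies + 4 grey-model coefficients (2 per property)."""
    freqs = [sequence.count(aa) / len(sequence) for aa in "ACDEFGHIKLMNPQRSTVWY"]
    feats = []
    for scale in (HYDROPHOBICITY, MASS):
        feats.extend(gm11_coefficients([scale[aa] for aa in sequence]))
    return np.array(freqs + feats)

print(grey_pseaa("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ").shape)   # (24,)
```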
Measuring the Benefits of Examinee-Selected Questions
JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 1 2005
Nancy L. Allen
Allowing students to choose the question(s) that they will answer from among several possible alternatives is often viewed as a mechanism for increasing fairness in certain types of assessments. The fairness of optional topic choice is not a universally accepted fact, however, and various studies have been done to assess this question. We examine an important class of experiments that we call C1-A, "choose one, answer all," designs, and point out an important problem that they face. We suggest two analytical methods that can be used to circumvent this problem. We illustrate our ideas using the data from Bridgeman et al. (1997). Our reanalysis of these data show: (a) that differential topic difficulty exists in real choice data, (b) that it affects naïve analyses of such data and masks the effects, positive or negative, of examinee choice, (c) that in this study there is a measurable and positive effect of examinee choice that follows predicted patterns in most but not all cases, (d) that the beneficial strength of examinee choice varies from case to case, and (e) that while the benefits of choice in terms of average points scored on the essays are usually positive, there is a substantial amount of variation around these averages and it is not uncommon for "incorrect" choices to be associated with higher test performance. [source]

Automatic appearance-based loop detection from three-dimensional laser data using the normal distributions transform
JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 11-12 2009
Martin Magnusson
We propose a new approach to appearance-based loop detection for mobile robots, using three-dimensional (3D) laser scans. Loop detection is an important problem in the simultaneous localization and mapping (SLAM) domain, and, because it can be seen as the problem of recognizing previously visited places, it is an example of the data association problem. Without a flat-floor assumption, two-dimensional laser-based approaches are bound to fail in many cases. Two of the problems with 3D approaches that we address in this paper are how to handle the greatly increased amount of data and how to efficiently obtain invariance to 3D rotations. We present a compact representation of 3D point clouds that is still discriminative enough to detect loop closures without false positives (i.e., detecting loop closure where there is none). A low false-positive rate is very important because wrong data association could have disastrous consequences in a SLAM algorithm. Our approach uses only the appearance of 3D point clouds to detect loops and requires no pose information. We exploit the normal distributions transform surface representation to create feature histograms based on surface orientation and smoothness. The surface shape histograms compress the input data by two to three orders of magnitude. Because of the high compression rate, the histograms can be matched efficiently to compare the appearance of two scans. Rotation invariance is achieved by aligning scans with respect to dominant surface orientations. We also propose to use expectation maximization to fit a gamma mixture model to the output similarity measures in order to automatically determine the threshold that separates scans at loop closures from nonoverlapping ones. We discuss the problem of determining ground truth in the context of loop detection and the difficulties in comparing the results of the few available methods based on range information. Furthermore, we present quantitative performance evaluations using three real-world data sets, one of which is highly self-similar, showing that the proposed method achieves high recall rates (percentage of correctly identified loop closures) at low false-positive rates in environments with different characteristics. © 2009 Wiley Periodicals, Inc. [source]

Synthesis and evaluation of 2-, 4-, 5-substituted nitroimidazole-iminodiacetic acid-99mTc(CO)3 complexes to target hypoxic tumors
JOURNAL OF LABELLED COMPOUNDS AND RADIOPHARMACEUTICALS, Issue 8 2010
Madhava B. Mallia
Abstract Determination of hypoxia in tumor is an important problem in the clinical management of cancer. Towards this, a series of differently substituted nitroimidazoles, viz. 2-nitro-, 4-nitro- and 5-nitroimidazole iminodiacetic acid (IDA) derivatives, were synthesized and radiolabeled with a [99mTc(CO)3(H2O)3]+ core. The corresponding 185/187Re(CO)3 analogue of the 2-nitroimidazole-IDA-99mTc(CO)3 complex was also prepared and characterized to elucidate the mode of bonding between the ligand and the M(CO)3 core (M=99mTc, 185/187Re). All three nitroimidazole-IDA-99mTc(CO)3 complexes could be prepared in over 95% yield, as determined by HPLC. The three complexes were then evaluated in a suitable animal model bearing tumor. Though the in vivo accumulation of complexes in hypoxic tissue is governed by factors such as lipophilicity, charge, etc., the variation in accumulation in hypoxic tissue, in the present case, could be explained by considering the reported values of the single electron reduction potential of the respective nitroimidazoles. Among the three derivatives studied, the 2-nitroimidazole-IDA-99mTc(CO)3 complex produced the best result, followed by the 5-nitroimidazole complex. Copyright © 2010 John Wiley & Sons, Ltd. [source]