Previous Version (previous + version)

Selected Abstracts


Lesser Bear: A lightweight process library for SMP computers - scheduling mechanism without a lock operation

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2002
Hisashi Oguma
Abstract We have designed and implemented a lightweight process (thread) library called 'Lesser Bear' for SMP computers. Lesser Bear offers thread-level parallelism and high portability. It executes threads in parallel by creating UNIX processes as virtual processors and a memory-mapped file as a large shared-memory space. To schedule threads in parallel, the shared-memory space has been divided into working spaces for each virtual processor, and the ready queue has been distributed among them. However, the previous version of Lesser Bear sometimes requires a lock operation for dequeueing. We therefore propose a scheduling mechanism that does not require a lock operation. To achieve this, the divided spaces form a link topology through the queues, and a lock-free algorithm is used for the queue operations. The mechanism is applied to Lesser Bear and evaluated experimentally. Copyright © 2002 John Wiley & Sons, Ltd. [source]
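The lock-free queue operation described above can be illustrated with a compare-and-swap (CAS) retry loop. The sketch below is hypothetical Python, not Lesser Bear's implementation: Python exposes no hardware CAS, so `AtomicRef` emulates one with a lock purely to make the retry-loop structure visible, and a Treiber-style stack stands in for a distributed ready queue.

```python
import threading

class AtomicRef:
    """Emulates a hardware compare-and-swap on a reference.
    The internal lock only stands in for the atomic CPU instruction
    that a real lock-free implementation would use."""
    def __init__(self, value=None):
        self._value = value
        self._lock = threading.Lock()

    def get(self):
        return self._value

    def compare_and_swap(self, expected, new):
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

class Node:
    def __init__(self, item):
        self.item = item
        self.next = None

class LockFreeQueue:
    """Treiber-style structure used as a ready queue: enqueue/dequeue
    retry a CAS loop instead of taking a lock on the whole queue."""
    def __init__(self):
        self.head = AtomicRef(None)

    def enqueue(self, item):
        node = Node(item)
        while True:
            old = self.head.get()
            node.next = old
            if self.head.compare_and_swap(old, node):
                return

    def dequeue(self):
        while True:
            old = self.head.get()
            if old is None:
                return None  # queue empty
            if self.head.compare_and_swap(old, old.next):
                return old.item
```

If another virtual processor changes `head` between the `get` and the CAS, the CAS fails and the loop simply retries, so no processor ever blocks on a lock held by another.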


The listener's temperament and perceived tempo and loudness of music

EUROPEAN JOURNAL OF PERSONALITY, Issue 8 2009
Joanna Kantor-Martynuska
Article first published online: 8 JUL 200
Abstract The relationship between the listener's temperament and perceived magnitude of tempo and loudness of music was studied using the techniques of magnitude production, magnitude estimation scaling and cross-modal matching. Four piano pieces were presented at several levels of tempo and loudness. In Study 1, participants adjusted tempo and loudness of music to their subjective level of comfort. In Study 2, participants estimated these parameters on a numerical scale and matched the length of a line segment to the estimates of these musical features. The results showed significant correlations of selected aspects of perceived tempo with perseveration and endurance as well as of selected aspects of perceived loudness with endurance and emotional reactivity. Perceived tempo and loudness, as measured by magnitude production and cross-modal matching tasks, do not seem to systematically correlate with the six formal characteristics of behaviour distinguished in the most recent version of the Regulative Theory of Temperament (RTT). Additionally, there is some evidence that they are selectively associated with reactivity and activity, the dimensions of a previous version of the RTT. The study extends the methodology of research on music preferences and the stimulatory value of music. Copyright © 2009 John Wiley & Sons, Ltd. [source]


The simulation of heat and water exchange at the land-atmosphere interface for the boreal grassland by the land-surface model SWAP

HYDROLOGICAL PROCESSES, Issue 10 2002
Yeugeniy M. Gusev
Abstract The major goal of this paper is to evaluate the ability of the physically based land-surface model SWAP to reproduce the heat and water exchange processes that occur in mid-latitude boreal grassland regions characterized by a clear seasonal course of hydrometeorological conditions, deep snow cover, seasonally frozen soil, and a seasonally mobile, shallow water table. A unique set of hydrometeorological data measured over 18 years (1966-83) at the Usadievskiy catchment (grassland), situated in the central part of the Valdai Hills (Russia), provides an opportunity to validate the model. To perform this validation properly, SWAP is modified to take a shallow water table depth into account. The new model differs from its previous version mainly in the parameterization of water transfer in a soil column; in addition, it includes soil water-groundwater interaction. A brief description of the new version of SWAP and the results of its validation are presented. Simulations of snow density, snow depth, snow water equivalent, daily snow surface temperature, daily evaporation from snow cover, water yield of snow cover, water table depth, depth of soil freezing and thawing, soil water storage in two layers, daily surface and total runoff from the catchment, and monthly evaporation from the catchment are validated against observations on a long-term basis. The root-mean-square errors (RMSEs) of simulated soil water storage in the 0-50 cm and 0-100 cm layers are 16 mm and 24 mm, respectively; the relative RMSE of simulated annual total runoff is 16%; the RMSE of daily snow surface temperature is 2.9 °C (the temperature varies from 0 to -46 °C); and the RMSE of maximum snow water equivalent (whose 18-year average is 147 mm) is 32 mm. Analysis of the validation results shows that the new version of SWAP reproduces the heat and water exchange processes occurring in mid-latitude boreal grassland reasonably well.
Copyright © 2002 John Wiley & Sons, Ltd. [source]
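The error statistics quoted in the abstract are straightforward to compute. A minimal sketch of RMSE and relative RMSE (here normalized by the observed mean, one common convention; the paper may normalize differently):

```python
import math

def rmse(sim, obs):
    """Root-mean-square error between simulated and observed series."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def relative_rmse(sim, obs):
    """RMSE expressed as a fraction of the observed mean."""
    return rmse(sim, obs) / (sum(obs) / len(obs))
```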


Concurrent Q-learning: Reinforcement learning for dynamic goals and environments

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 10 2005
Robert B. Ollington
This article presents a powerful new algorithm for reinforcement learning in problems where the goals and also the environment may change. The algorithm is completely goal independent, allowing the mechanics of the environment to be learned independently of the task being undertaken. Conventional reinforcement learning techniques, such as Q-learning, are goal dependent: when the goal or reward conditions change, previous learning interferes with the new task, resulting in very poor performance. Previously, the Concurrent Q-Learning algorithm was developed, based on Watkins' Q-learning, which learns the relative proximity of all states simultaneously. This learning is completely independent of the reward experienced at those states and, through a simple action selection strategy, may be applied to any given reward structure. Here it is shown that the extra information obtained may be used to replace the eligibility traces of Watkins' Q-learning, allowing many more value updates to be made at each time step. The new algorithm is compared to the previous version and also to DG-learning in tasks involving changing goals and environments. The new algorithm is shown to perform significantly better than these alternatives, especially in situations involving novel obstructions. It adapts quickly and intelligently to changes in both the environment and reward structure, and does not suffer interference from training undertaken prior to those changes. © 2005 Wiley Periodicals, Inc. Int J Intell Syst 20: 1037-1052, 2005. [source]
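For contrast with the goal-independent approach above, here is the conventional, goal-dependent baseline: standard tabular Q-learning on a one-dimensional chain. This sketch is a simplification for illustration, not the article's Concurrent Q-Learning, which would instead maintain proximity estimates to every state at once; all parameter values are invented.

```python
import random

def q_learning_chain(n_states=6, goal=5, episodes=300,
                     alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a chain of states with actions 0 (left)
    and 1 (right); reward 1 only on reaching the goal state."""
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection with random tie-breaking
            if random.random() < eps or Q[s][0] == Q[s][1]:
                a = random.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == goal else 0.0
            # Watkins' one-step update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

Because the Q-values encode distance to this one goal only, moving the goal invalidates the whole table; that is precisely the interference the goal-independent algorithm avoids.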


Spin-optimized resonating Hartree-Fock configuration interaction

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 15 2007
Ryo Takeda
Abstract The resonating Hartree-Fock configuration interaction (Res HF-CI) method is an efficient tool for investigating complicated strongly correlated systems such as ion-radical systems. In this method, we explore several spin-unrestricted Hartree-Fock (UHF) solutions that are energetically low-lying. However, it becomes difficult to choose the symmetry-broken references appropriately as the number of sites increases. In this study, we present a spin-optimized procedure, based on the Löwdin spin-projection method, for the Res HF-CI theory, denoted SO Res HF-CI. We apply this method to depict the potential curves of typical polyradical systems and compare the computational results with those of complete-active-space (CAS) CI based on UHF natural orbitals (UNO), spin-projected UHF, and the previous version of Res HF-CI. We discuss the relation between the computational results and the electronic configurations that are important for covering the electron correlation effects in each system. Further, we apply the SO Res HF-CI method to a simple organic radical. In addition, we extend the scheme to the generalized Hartree-Fock (GHF) case and show that using GHF as a seed of SO Res HF-CI is desirable for spin-frustrated systems. © 2007 Wiley Periodicals, Inc. Int J Quantum Chem, 2007 [source]
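Determining the resonance coefficients among nonorthogonal determinants reduces, at its core, to a generalized eigenvalue problem HC = SCE, where S is the overlap matrix of the references. A toy numerical sketch (the matrix entries are invented for illustration; a real Res HF-CI calculation builds H and S from actual UHF determinants):

```python
import numpy as np
from scipy.linalg import eigh

# Toy 2-state resonance: H is the Hamiltonian in a nonorthogonal basis
# of two broken-symmetry determinants, S their overlap matrix.
H = np.array([[-1.0, -0.3],
              [-0.3, -1.0]])
S = np.array([[1.0, 0.2],
              [0.2, 1.0]])

E, C = eigh(H, S)   # solves H C = S C E for the nonorthogonal basis
print(E[0])         # resonance-stabilized ground-state energy
```

For this symmetric 2x2 case the ground state is the in-phase combination with energy (H11 + H12) / (1 + S12), lower than either diagonal element: the resonance stabilization the method exploits.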


HATODAS II - heavy-atom database system with potentiality scoring

JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 3 2009
Michihiro Sugahara
HATODAS II is the second version of HATODAS (the Heavy-Atom Database System), which suggests potential heavy-atom reagents for the derivatization of protein crystals. The expanded database contains 3103 heavy-atom binding sites, four times as many as the previous version. HATODAS II adds three new criteria for evaluating the feasibility of the search results: (1) potentiality scoring of the predicted heavy-atom reagents, (2) exclusion of disordered amino acid residues based on secondary-structure prediction and (3) consideration of the solvent accessibility of amino acid residues derived from a homology model. In the point-mutation option, HATODAS II suggests sites for possible mutation to reactive amino acid residues such as Met, Cys and His, on the basis of multiple sequence alignments of homologous proteins. These new features allow the user to make a well-informed decision about possible heavy-atom derivatization experiments on protein crystals. [source]
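The three criteria can be combined into a single ranking, as sketched below. This is a hypothetical scoring function written to illustrate the idea only: the function name, weights, and the 0.3 accessibility threshold are all invented and are not HATODAS II's actual scheme.

```python
def potentiality_score(n_site_matches, residues):
    """Toy score: reward known binding-site matches, and count only
    candidate residues that are neither predicted disordered nor
    buried (solvent accessibility below an invented 0.3 cutoff)."""
    usable = [r for r in residues
              if not r["disordered"] and r["solvent_accessibility"] >= 0.3]
    return n_site_matches + 2 * len(usable)

residues = [
    {"name": "Met10", "disordered": False, "solvent_accessibility": 0.6},
    {"name": "Cys42", "disordered": True,  "solvent_accessibility": 0.8},
    {"name": "His77", "disordered": False, "solvent_accessibility": 0.1},
]
print(potentiality_score(4, residues))  # -> 6: only Met10 survives both filters
```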


Development of new pseudopotential methods: Improved model core potentials for the first-row transition metals

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 9 2003
Christopher C. Lovallo
Abstract We have recently developed new nonrelativistic and scalar-relativistic pseudopotentials for the first-row transition metals and several main-group elements. These improved model core potentials were tested on a variety of transition metal complexes to determine their accuracy in reproducing electronic structures, bond lengths, and harmonic vibrational frequencies with respect to both all-electron reference data and experimental data. The new potentials are also compared with the previous model core potentials available for the first-row transition metals. The new potentials do a superior job of reproducing atomic data, reproduce molecular data as well as the previous version does, and, in conjunction with new main-group pseudopotentials that have an L-shell structure in the valence basis set, are slightly faster. © 2003 Wiley Periodicals, Inc. J Comput Chem 9: 1009-1015, 2003 [source]


Automatic segmentation of the brain and intracranial cerebrospinal fluid in T1-weighted volume MRI scans of the head, and its application to serial cerebral and intracranial volumetry

MAGNETIC RESONANCE IN MEDICINE, Issue 5 2003
Louis Lemieux
A new fully automatic algorithm for the segmentation of the brain and total intracranial cerebrospinal fluid (CSF) from T1-weighted volume MRI scans of the head, called Exbrain v.2, is described. The algorithm was developed in the context of serial intracranial volumetry. A brain mask obtained using a previous version of the algorithm forms the basis of the CSF segmentation. Improved brain segmentation is then obtained by iterative tracking of the brain-CSF interface. Gray matter (GM), white matter (WM), and intracranial CSF volumes and probability maps are calculated based on a model of intensity probability distribution (IPD) that includes two partial volume classes: GM-CSF and GM-WM. Accuracy was assessed using the Montreal Neurological Institute's (MNI) digital phantom scan. Reproducibility was assessed using scan pairs from 24 controls and 10 patients with epilepsy. Segmentation overlap with the gold standard was 98% for the brain and 95%, 96%, and 97% for the GM, WM, and total intracranial contents, respectively; CSF overlap was 86%. In the controls, the Bland and Altman coefficient of reliability (CR) was 35.2 cm3 for the total brain volume (TBV) and 29.0 cm3 for the intracranial volume (ICV). Scan-matching reduced CR to 25.2 cm3 and 17.1 cm3 for the TBV and ICV, respectively. For the patients, similar CR values were obtained for the ICV. Magn Reson Med 49:872-884, 2003. © 2003 Wiley-Liss, Inc. [source]
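The two evaluation statistics used above have simple forms. The sketch below assumes one common definition of each (voxel-set overlap against the gold standard, and the Bland-Altman coefficient of reliability as 1.96 times the standard deviation of paired scan-rescan differences); the paper's exact formulations may differ.

```python
import statistics

def overlap_percent(seg, gold):
    """Percentage of gold-standard voxels also labelled by the
    algorithm, with voxels represented as sets of indices."""
    return 100.0 * len(seg & gold) / len(gold)

def coefficient_of_reliability(first, second):
    """Bland-Altman CR: 1.96 * SD of the paired scan-rescan
    volume differences (e.g. TBV in cm3 for each subject)."""
    diffs = [a - b for a, b in zip(first, second)]
    return 1.96 * statistics.stdev(diffs)
```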


Comparison of the Cobalt Glidescope® video laryngoscope with conventional laryngoscopy in simulated normal and difficult infant airways

PEDIATRIC ANESTHESIA, Issue 11 2009
MICHELLE WHITE MB ChB DCH FRCA
Summary Aim: To evaluate the new pediatric Glidescope® (Cobalt GVL® Stat) by assessing the time taken to tracheal intubation under normal and difficult intubation conditions. We hypothesized that the Glidescope® would perform as well as conventional laryngoscopy. Background: A new pediatric Glidescope® became available in October 2008. It combines a disposable, sterile laryngoscope blade with a reusable video baton. It is narrower and longer than the previous version and is available in a greater range of sizes more appropriate to pediatric use. Methods: We performed a randomized study of 32 pediatric anesthetists and intensivists to compare the Cobalt GVL® Stat with the Miller laryngoscope under simulated normal and difficult airway conditions in a pediatric manikin. Results: We found no difference in time taken to tracheal intubation using the Glidescope® or the Miller laryngoscope under normal (29.3 vs 26.2 s, P = 0.36) or difficult (45.8 vs 44.4 s, P = 0.84) conditions. Subjective evaluation of the devices for field of view (excellent: 59% vs 53%) and ease of use (excellent: 69% vs 63%) was similar for the Miller laryngoscope and Glidescope®, respectively. However, only 34% of participants said that they would definitely use the Glidescope® in an emergency, compared with 66% who would be willing to use the Miller laryngoscope. Conclusions: The new Glidescope® performs as well as the Miller laryngoscope under simulated normal and difficult airway conditions. [source]


Solving the block-Toeplitz least-squares problem in parallel

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 1 2005
P. Alonso
Abstract In this paper we present two versions of a parallel algorithm to solve the block-Toeplitz least-squares problem on distributed-memory architectures. We derive a parallel algorithm based on the seminormal equations arising from the triangular decomposition of the product TᵀT. Our parallel algorithm exploits the displacement structure of the Toeplitz-like matrices using the Generalized Schur Algorithm to obtain the solution in O(mn) flops instead of the O(mn²) flops of algorithms for non-structured matrices. The strong regularity of the above matrix product and an appropriate computation of the hyperbolic rotations improve the stability of the algorithms. We have reduced the communication cost of previous versions, and have also reduced the memory-access cost by appropriately arranging the elements of the matrices. Furthermore, the second version of the algorithm has a very low spatial cost, because it does not store the triangular factor of the decomposition. The experimental results show good scalability of the parallel algorithm on two different clusters of personal computers. Copyright © 2005 John Wiley & Sons, Ltd. [source]
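For orientation, here is the unstructured O(mn²) baseline the paper improves on: a dense least-squares solve of a small Toeplitz system. The numbers are invented for illustration; the paper's algorithm would instead exploit the displacement structure via the Generalized Schur Algorithm to reach O(mn) flops.

```python
import numpy as np
from scipy.linalg import toeplitz

# Build a 5x3 Toeplitz matrix T from its first column and first row,
# then solve min ||T x - b|| densely, ignoring the structure.
col = np.array([4.0, 1.0, 0.5, 0.2, 0.1])   # first column (m = 5)
row = np.array([4.0, 1.0, 0.5])             # first row    (n = 3)
T = toeplitz(col, row)

x_true = np.array([1.0, -2.0, 3.0])
b = T @ x_true                               # consistent right-hand side
x, *_ = np.linalg.lstsq(T, b, rcond=None)    # dense O(mn^2) solve
print(np.allclose(x, x_true))
```

The seminormal-equations route mentioned in the abstract would instead factor TᵀT and solve the resulting triangular systems, which is where the displacement structure pays off.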


Cattell-Horn-Carroll abilities and cognitive tests: What we've learned from 20 years of research

PSYCHOLOGY IN THE SCHOOLS, Issue 7 2010
Timothy Z. Keith
This article reviews factor-analytic research on individually administered intelligence tests from a Cattell-Horn-Carroll (CHC) perspective. Although most new and revised tests of intelligence are based, at least in part, on CHC theory, earlier versions generally were not. Our review suggests that whether or not they were based on CHC theory, the factors derived from both new and previous versions of most tests are well explained by the theory. Especially useful for understanding the theory and tests are cross-battery analyses using multiple measures from multiple instruments. There are issues that need further explanation, of course, about CHC theory and tests derived from that theory. We address a few of these issues including those related to comprehension-knowledge (Gc) and memory factors, as well as issues related to factor retention in factor analysis. © 2010 Wiley Periodicals, Inc. [source]
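On the factor-retention issue the article raises: the simplest exploratory criterion is Kaiser's rule, which retains factors whose correlation-matrix eigenvalues exceed 1. The sketch below applies it to synthetic data with two true factors (CHC research typically relies on confirmatory models, so this is only the exploratory heuristic, with invented data):

```python
import numpy as np

# Synthetic data: 8 observed variables driven by 2 latent factors
# (4 variables each) plus noise, for 500 simulated examinees.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))
loadings = np.array([[1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]])
data = latent @ loadings + 0.5 * rng.normal(size=(500, 8))

# Kaiser's rule: eigenvalues of the correlation matrix above 1.
eigvals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
n_retained = int((eigvals > 1).sum())
print(n_retained)  # -> 2: the rule recovers the two true factors here
```

Kaiser's rule is known to over- or under-extract in less clean settings, which is one reason factor retention remains a live methodological issue in this literature.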