Previous Methods



Selected Abstracts


A practical approach for estimating illumination distribution from shadows using a single image

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 2 2005
Taeone Kim
Abstract This article presents a practical method that estimates illumination distribution from shadows using only a single image. The shadows are assumed to be cast on a textured, Lambertian surface by an object of known shape. Previous methods for illumination estimation from shadows usually require that the reflectance property of the surface on which shadows are cast be constant or uniform, or need an additional image to cancel out the effects of varying albedo of the textured surface on illumination estimation. Our method, by contrast, deals with an estimation problem for which surface albedo information is not available. In this case, the estimation problem is underdetermined. We show that the combination of regularization by correlation and some user-specified information can be a practical means of solving the underdetermined problem. In addition, as an optimization tool for solving the problem, we develop a constrained Non-Negative Quadratic Programming (NNQP) technique into which not only regularization but also multiple linear constraints induced by user-specified information are easily incorporated. We test and validate our method on both synthetic and real images and present some experimental results. © 2005 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 15, 143-154, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20047 [source]
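The abstract names a constrained NNQP formulation but gives no implementation details. Below is a minimal, illustrative projected-gradient sketch of non-negative quadratic programming with a Tikhonov-style regularizer; the matrices `A`, `b` and `R` and all numbers are hypothetical stand-ins, not the paper's shadow-imaging quantities, and the solver is a generic one rather than the authors' technique for incorporating user-specified linear constraints.

```python
import numpy as np

def nnqp_projected_gradient(A, b, R, lam=1e-2, iters=2000, step=None):
    """Minimize ||A x - b||^2 + lam * x^T R x  subject to x >= 0
    via projected gradient descent (a generic stand-in for a constrained NNQP solver)."""
    n = A.shape[1]
    Q = A.T @ A + lam * R              # quadratic term
    c = A.T @ b                        # linear term
    if step is None:
        step = 1.0 / np.linalg.norm(Q, 2)   # 1 / Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(iters):
        grad = Q @ x - c
        x = np.maximum(0.0, x - step * grad)   # gradient step + projection onto x >= 0
    return x

# Toy usage: recover a sparse non-negative "illumination" vector.
rng = np.random.default_rng(0)
A = rng.random((50, 20))                   # hypothetical shadow/lighting matrix
x_true = np.maximum(0, rng.normal(size=20)); x_true[5:] = 0
b = A @ x_true + 0.01 * rng.normal(size=50)
R = np.eye(20)                             # simple Tikhonov regularizer as a placeholder
print(nnqp_projected_gradient(A, b, R)[:5])
```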


A test of the metapopulation model of the species-area relationship

JOURNAL OF BIOGEOGRAPHY, Issue 8 2002
Stephen F. Matter
Abstract Aim The species-area relationship is a ubiquitous pattern. Previous methods describing the relationship have done little to elucidate the mechanisms producing the pattern. Hanski & Gyllenberg (Science, 1997, 275, 397) have shown that a model of metapopulation dynamics yields predictable species-area relationships. We elaborate on the biological interpretation of this mechanistic model and test the prediction that communities of species with a higher risk of extinction caused by environmental stochasticity should have lower species-area slopes than communities experiencing less impact of environmental stochasticity. Methods We develop the mainland-island version of the metapopulation model and show that the slope of the species-area relationship resulting from this model is related to the ratio of population growth rate to variability in population growth of individual species. We fit the metapopulation model to five data sets, and compared the fit with the power function model and Williams's (Ecology, 1995, 76, 2607) extreme value function model. To test that communities consisting of species with a high risk of extinction should have lower slopes, we used the observation that small-bodied species of vertebrates are more susceptible to environmental stochasticity than large-bodied species. The data sets were divided into small- and large-bodied species and the model fit to both. Results and main conclusions The metapopulation model showed a good fit for all five data sets, and was comparable with the fits of the extreme value function and power function models. The slope of the metapopulation model of the species-area relationship was greater for larger-bodied than for smaller-bodied species for each of the five data sets. The slope of the metapopulation model of the species-area relationship has a clear biological interpretation, and allows for interpretation that is rooted in ecology, rather than ad hoc explanation. [source]


How to Analyze Political Attention with Minimal Assumptions and Costs

AMERICAN JOURNAL OF POLITICAL SCIENCE, Issue 1 2010
Kevin M. Quinn
Previous methods of analyzing the substance of political attention have had to make several restrictive assumptions or been prohibitively costly when applied to large-scale political texts. Here, we describe a topic model for legislative speech, a statistical learning model that uses word choices to infer topical categories covered in a set of speeches and to identify the topic of specific speeches. Our method estimates, rather than assumes, the substance of topics, the keywords that identify topics, and the hierarchical nesting of topics. We use the topic model to examine the agenda in the U.S. Senate from 1997 to 2004. Using a new database of over 118,000 speeches (70,000,000 words) from the Congressional Record, our model reveals speech topic categories that are both distinctive and meaningfully interrelated and a richer view of democratic agenda dynamics than had previously been possible. [source]
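The authors' model is a bespoke hierarchical topic model for legislative speech. As a rough, hedged illustration of the general idea (inferring topics and their keywords from word choices rather than assuming them), here is a generic latent Dirichlet allocation stand-in using scikit-learn on a few made-up speech snippets; it is not the paper's estimator.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

speeches = [                                   # hypothetical speech fragments
    "medicare prescription drug coverage for seniors",
    "appropriations for defense and military readiness",
    "judicial nominations and the federal courts",
    "drug prices and health insurance reform",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(speeches)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)              # per-speech topic proportions
print(doc_topics.round(2))

terms = vectorizer.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-3:]]   # top words that define each topic
    print(f"topic {k}: {top}")
```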


Testing for Genetic Association With Constrained Models Using Triads

ANNALS OF HUMAN GENETICS, Issue 2 2009
J. F. Troendle
Summary It has been shown that it is preferable to use a robust model that incorporates constraints on the genotype relative risks rather than rely on a model that assumes the disease operates in a recessive or dominant fashion. Previous methods are applicable to case-control studies, but not to family-based studies of case children along with their parents (triads). We show here how to implement analogous constraints while analyzing triad data. The likelihood, conditional on the parents' genotypes, is maximized over the appropriately constrained parameter space. The asymptotic distribution for the maximized likelihood ratio statistic is found and used to estimate the null distribution of the test statistics. The properties of several methods of testing for association are compared by simulation. The constrained method provides higher power across a wide range of genetic models with little cost when compared to methods that restrict to a dominant, recessive, or multiplicative model, or make no modeling restriction. The methods are applied to two SNPs on the methylenetetrahydrofolate reductase (MTHFR) gene with neural tube defect (NTD) triads. [source]


A smooth and differentiable bulk-solvent model for macromolecular diffraction

ACTA CRYSTALLOGRAPHICA SECTION D, Issue 9 2010
T. D. Fenn
Inclusion of low-resolution data in macromolecular crystallography requires a model for the bulk solvent. Previous methods have used a binary mask to accomplish this, which has proven to be very effective, but the mask is discontinuous at the solute-solvent boundary (i.e. the mask value jumps from zero to one) and is not differentiable with respect to atomic parameters. Here, two algorithms are introduced for computing bulk-solvent models using either a polynomial switch or a smoothly thresholded product of Gaussians, and both models are shown to be efficient and differentiable with respect to atomic coordinates. These alternative bulk-solvent models offer algorithmic improvements, while showing similar agreement of the model with the observed amplitudes relative to the binary model as monitored using R, Rfree and differences between experimental and model phases. As with the standard solvent models, the alternative models improve the agreement primarily with lower resolution (>6 Å) data versus no bulk solvent. The models are easily implemented into crystallographic software packages and can be used as a general method for bulk-solvent correction in macromolecular crystallography. [source]
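The exact functional forms used in the paper are not reproduced here; the sketch below only contrasts a discontinuous binary mask with two differentiable alternatives of the kind the abstract names (a cubic polynomial switch and a smoothly thresholded product of per-atom Gaussians). Radii, widths and distances are arbitrary illustrative values.

```python
import numpy as np

def binary_mask(d, r):
    """Discontinuous solvent mask: 0 inside the solute radius r, 1 outside."""
    return (d > r).astype(float)

def polynomial_switch(d, r0, r1):
    """Cubic smoothstep between r0 and r1: continuous and differentiable in d."""
    t = np.clip((d - r0) / (r1 - r0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def gaussian_product_mask(dists, sigma=1.0):
    """Smoothly thresholded product of per-atom Gaussians: near 0 close to any atom,
    near 1 in bulk solvent, differentiable everywhere."""
    return np.prod(1.0 - np.exp(-(dists / sigma) ** 2), axis=0)

d = np.linspace(0.0, 4.0, 9)            # distance of a grid point to one atom (Å)
print(binary_mask(d, 2.0))
print(polynomial_switch(d, 1.5, 2.5))
dists = np.vstack([d, d + 0.5])         # distances to two hypothetical atoms
print(gaussian_product_mask(dists))
```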


Differentiating Premenstrual Dysphoric Disorder From Premenstrual Exacerbations of Other Disorders: A Methods Dilemma

CLINICAL PSYCHOLOGY: SCIENCE AND PRACTICE, Issue 2 2001
Shirley Ann Hartlage
Premenstrual dysphoric disorder (PMDD) and premenstrual exacerbations of other disorders are difficult to distinguish. Previous methods, such as excluding women with other disorders from a PMDD diagnosis, do not enable a dual diagnosis. Our objective is to advance conceptual and clinical thinking and stimulate dialogue regarding this methods dilemma. The discussion sheds light on comorbidity in general, regardless of the disorders. Considering fundamental criteria for severe premenstrual disorders helps distinguish the phenomena of interest. A proposed method allows identification of PMDD co-occurring with other disorders. PMDD symptoms can be differentiated by their nature and timing (e.g., cyclic depressed mood could be a PMDD symptom, but cyclic binge eating or depressed mood all month long could not be). Impairment must increase premenstrually for a PMDD diagnosis. The proposed method is an advance, but specified unanswered questions remain. [source]


An improved study of real-time fluid simulation on GPU

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2004
Enhua Wu
Abstract Taking advantage of the parallelism and programmability of the GPU, we solve the fluid dynamics problem completely on the GPU. Different from previous methods, the whole computation is accelerated in our method by packing the scalar and vector variables into the four channels of texels. In order to adapt to arbitrary boundary conditions, we group the grid nodes into different types according to their positions relative to obstacles and search for the node that determines the value of the current node. We then compute texture coordinate offsets according to the type of boundary condition at each node to determine the corresponding variables, and achieve the interaction of flows with obstacles set freely by users. The test results prove the efficiency of our method and exhibit the potential of GPU for general-purpose computations. Copyright © 2004 John Wiley & Sons, Ltd. [source]


Sparsely Precomputing The Light Transport Matrix for Real-Time Rendering

COMPUTER GRAPHICS FORUM, Issue 4 2010
Fu-Chung Huang
Precomputation-based methods have enabled real-time rendering with natural illumination, all-frequency shadows, and global illumination. However, a major bottleneck is the precomputation time, which can take hours to days. While the final real-time data structures are typically heavily compressed with clustered principal component analysis and/or wavelets, a full light transport matrix still needs to be precomputed for a synthetic scene, often by exhaustive sampling and raytracing. This is expensive and makes rapid prototyping of new scenes prohibitive. In this paper, we show that the precomputation can be made much more efficient by adaptive and sparse sampling of light transport. We first select a small subset of "dense vertices", where we sample the angular dimensions more completely (but still adaptively). The remaining "sparse vertices" require only a few angular samples, isolating features of the light transport. They can then be interpolated from nearby dense vertices using locally low rank approximations. We demonstrate sparse sampling and precomputation 5× faster than previous methods. [source]
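As a hedged numerical illustration of interpolating "sparse vertices" from "dense vertices" with a locally low-rank model (not the paper's adaptive sampling or clustering machinery), the following numpy sketch completes one row of a synthetic low-rank transport matrix from a handful of angular samples.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vertices, n_dirs, rank = 200, 64, 4

# Synthetic low-rank light transport matrix (stand-in for a precomputed one).
T_true = rng.normal(size=(n_vertices, rank)) @ rng.normal(size=(rank, n_dirs))

dense_idx = rng.choice(n_vertices, size=20, replace=False)   # "dense vertices"
T_dense = T_true[dense_idx]                                   # fully sampled rows

# Row basis from the dense vertices (they span the low-rank row space).
_, _, Vt = np.linalg.svd(T_dense, full_matrices=False)
V = Vt[:rank]                                                 # rank x n_dirs

# A "sparse vertex": only a few angular samples are traced.
obs_cols = rng.choice(n_dirs, size=10, replace=False)
obs_vals = T_true[7, obs_cols]

# Fit coefficients in the dense-row basis from the few observed samples,
# then reconstruct the full transport row for this vertex.
coeffs, *_ = np.linalg.lstsq(V[:, obs_cols].T, obs_vals, rcond=None)
row_hat = coeffs @ V
print(np.max(np.abs(row_hat - T_true[7])))    # small reconstruction error
```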


Fast and Efficient Skinning of Animated Meshes

COMPUTER GRAPHICS FORUM, Issue 2 2010
L. Kavan
Abstract Skinning is a simple yet popular deformation technique combining compact storage with efficient hardware accelerated rendering. While skinned meshes (such as virtual characters) are traditionally created by artists, previous work proposes algorithms to construct skinning automatically from a given vertex animation. However, these methods typically perform well only for a certain class of input sequences and often require long pre-processing times. We present an algorithm based on iterative coordinate descent optimization which handles arbitrary animations and produces more accurate approximations than previous techniques, while using only standard linear skinning without any modifications or extensions. To overcome the computational complexity associated with the iterative optimization, we work in a suitable linear subspace (obtained by quick approximate dimensionality reduction) and take advantage of the typically very sparse vertex weights. As a result, our method requires about one or two orders of magnitude less pre-processing time than previous methods. [source]


Fast Inverse Reflector Design (FIRD)

COMPUTER GRAPHICS FORUM, Issue 8 2009
A. Mas
I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling - Physically based modeling; I.3.1 [Hardware architecture]: Graphics processors
Abstract This paper presents a new inverse reflector design method using a GPU-based computation of outgoing light distribution from reflectors. We propose a fast method to obtain the outgoing light distribution of a parametrized reflector, and then compare it with the desired illumination. The new method works completely in the GPU. We trace millions of rays using a hierarchical height-field representation of the reflector. Multiple reflections are taken into account. The parameters that define the reflector shape are optimized in an iterative procedure in order for the resulting light distribution to be as close as possible to the desired, user-provided one. We show that our method can calculate reflector lighting at least one order of magnitude faster than previous methods, even with millions of rays, complex geometries and light sources. [source]


Scalable, Versatile and Simple Constrained Graph Layout

COMPUTER GRAPHICS FORUM, Issue 3 2009
Tim Dwyer
Abstract We describe a new technique for graph layout subject to constraints. Compared to previous techniques the proposed method is much faster and scalable to much larger graphs. For a graph with n nodes, m edges and c constraints it computes incremental layout in time O(n log n + m + c) per iteration. Also, it supports a much more powerful class of constraint: inequalities or equalities over the Euclidean distance between nodes. We demonstrate the power of this technique by application to a number of diagramming conventions which previous constrained graph layout methods could not support. Further, the constraint-satisfaction method, inspired by recent work in position-based dynamics, is far simpler to implement than previous methods. [source]
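The abstract does not give the algorithm itself; the toy sketch below only illustrates the flavour of alternating a layout step with position-based projection of separation constraints. It is not the paper's O(n log n + m + c) method, and the four-node graph, spring model and constraint are all invented for illustration.

```python
import numpy as np

def layout_step(pos, edges, constraints, ideal=1.0, lr=0.05):
    """One iteration: spring-like stress descent on edges, then projection of
    separation constraints of the form y[u] + gap <= y[v] (position-based style)."""
    # Stress descent: pull/push each edge toward the ideal length.
    for u, v in edges:
        d = pos[v] - pos[u]
        dist = np.linalg.norm(d) + 1e-9
        corr = lr * (dist - ideal) * d / dist
        pos[u] += corr
        pos[v] -= corr
    # Constraint projection: enforce each vertical separation exactly.
    for u, v, gap in constraints:
        violation = (pos[u][1] + gap) - pos[v][1]
        if violation > 0:
            pos[u][1] -= violation / 2
            pos[v][1] += violation / 2
    return pos

pos = np.random.default_rng(2).random((4, 2))
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
constraints = [(0, 2, 1.0)]          # node 0 must stay at least 1.0 below node 2 in y
for _ in range(200):
    pos = layout_step(pos, edges, constraints)
print(pos)
```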


Streaming Surface Reconstruction Using Wavelets

COMPUTER GRAPHICS FORUM, Issue 5 2008
J. Manson
Abstract We present a streaming method for reconstructing surfaces from large data sets generated by a laser range scanner using wavelets. Wavelets provide a localized, multiresolution representation of functions and this makes them ideal candidates for streaming surface reconstruction algorithms. We show how wavelets can be used to reconstruct the indicator function of a shape from a cloud of points with associated normals. Our method proceeds in several steps. We first compute a low-resolution approximation of the indicator function using an octree followed by a second pass that incrementally adds fine resolution details. The indicator function is then smoothed using a modified octree convolution step and contoured to produce the final surface. Due to the local, multiresolution nature of wavelets, our approach results in an algorithm over 10 times faster than previous methods and can process extremely large data sets in the order of several hundred million points in only an hour. [source]


Integrating species life-history traits and patterns of deforestation in amphibian conservation planning

DIVERSITY AND DISTRIBUTIONS, Issue 1 2010
C. G. Becker
Abstract Aim: To identify priority areas for amphibian conservation in southeastern Brazil, by integrating species life-history traits and patterns of deforestation. Location: State of São Paulo, Brazil. Methods: We used the software Marxan to evaluate different scenarios of amphibian conservation planning. Our approach differs from previous methods by explicitly including two different landscape metrics: habitat split for species with aquatic larvae, and habitat loss for species with terrestrial development. We evaluated the effect of habitat requirements by classifying species breeding habitats in five categories (flowing water, still water permanent, still water temporary, bromeliad or bamboo, and terrestrial). We performed analyses at two scales, grid cells and watersheds, and also considered nature preserves as protected areas. Results: We found contrasting patterns of deforestation between coastal and inland regions. Seventy-six grid cells and 14 watersheds are capable of representing each species at least once. When accounting for grid cells already protected in state and national parks and considering species habitat requirements, we found 16 high-priority grid cells for species with one or two reproductive habitats, and only one cell representing species with four habitat requirements. Key areas for the conservation of species breeding in flowing and permanent still waters are concentrated in the southern part of the state, while those for amphibians breeding in temporary ponds are concentrated in central to eastern zones. Eastern highland zones are key areas for preserving species breeding terrestrially by direct or indirect development. Species breeding in bromeliads and bamboos are already well represented in protected areas. Main conclusions: Our results emphasize the need to integrate information on landscape configuration and species life-history traits to produce more ecologically relevant conservation strategies. [source]
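Marxan itself selects planning units by simulated annealing; as a loose, hedged stand-in for the goal of representing each species at least once, here is a greedy set-cover sketch over hypothetical occurrence data (the species names, cells and targets are made up).

```python
def greedy_reserve_selection(cell_species, targets):
    """Pick planning units (grid cells) until every species meets its representation
    target. A greedy stand-in for the simulated-annealing selection done by Marxan."""
    remaining = dict(cell_species)
    unmet = dict(targets)
    selected = []
    while any(v > 0 for v in unmet.values()) and remaining:
        # Choose the cell covering the most still-unmet species.
        best = max(remaining,
                   key=lambda c: sum(1 for s in remaining[c] if unmet.get(s, 0) > 0))
        selected.append(best)
        for s in remaining.pop(best):
            if unmet.get(s, 0) > 0:
                unmet[s] -= 1
    return selected

cells = {                      # hypothetical occurrence data: cell -> species present
    "A": {"frog1", "frog2"},
    "B": {"frog2", "frog3"},
    "C": {"frog4"},
    "D": {"frog1", "frog3", "frog4"},
}
print(greedy_reserve_selection(cells, {f"frog{i}": 1 for i in range(1, 5)}))
```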


Post-earthquake bridge repair cost and repair time estimation methodology

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 3 2010
Kevin R. Mackie
Abstract While structural engineers have traditionally focused on individual components (bridges, for example) of transportation networks for design, retrofit, and analysis, it has become increasingly apparent that the economic costs to society after extreme earthquake events are caused at least as much by indirect costs as by direct costs due to individual structures. This paper describes an improved methodology for developing probabilistic estimates of repair costs and repair times that can be used for evaluating the performance of new bridge design options and existing bridges in preparation for the next major earthquake. The proposed approach in this paper is an improvement on previous bridge loss modeling studies: it is based on the local linearization of the dependence between repair quantities and damage states, so that the resulting model follows a linear relationship between damage states and repair points. The methodology uses the concept of performance groups (PGs) that account for damage and repair of individual bridge components and subassemblies. The method is validated using two simple examples that compare the proposed method to simulation and to previous methods based on loss models using a power-law relationship between repair quantities and damage. In addition, an illustration of the method is provided for a complete study on the performance of a common five-span overpass bridge structure in California. Intensity-dependent repair cost ratios (RCRs) and repair times are calculated using the proposed approach, as well as plots that show the disaggregation of repair cost by repair quantity and by PG. This provides the decision maker with a higher fidelity of data when evaluating the contribution of different bridge components to the performance of the bridge system, where performance is evaluated in terms of repair costs and repair times rather than traditional engineering quantities such as displacements and stresses. Copyright © 2009 John Wiley & Sons, Ltd. [source]
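A minimal sketch of the accounting implied by the performance-group (PG) idea: an expected repair cost at one intensity level as damage-state probabilities times repair quantities and unit costs, disaggregated by PG. All numbers are invented for illustration and the paper's local-linearization details are not reproduced.

```python
import numpy as np

# Hypothetical data for one intensity level: per performance group (PG),
# probabilities of each damage state and the corresponding repair quantities/costs.
damage_state_probs = {            # P(damage state | intensity), per PG
    "columns":   [0.50, 0.30, 0.15, 0.05],
    "abutments": [0.70, 0.20, 0.08, 0.02],
}
repair_quantity = {               # repair quantity per damage state (e.g. m^3 of concrete)
    "columns":   [0.0, 2.0, 10.0, 40.0],
    "abutments": [0.0, 1.0,  5.0, 15.0],
}
unit_cost = {"columns": 800.0, "abutments": 500.0}   # cost per unit quantity
replacement_cost = 60000.0

expected_cost = sum(
    unit_cost[pg] * np.dot(damage_state_probs[pg], repair_quantity[pg])
    for pg in damage_state_probs
)
rcr = expected_cost / replacement_cost       # repair cost ratio at this intensity
print(f"expected repair cost = {expected_cost:.0f}, RCR = {rcr:.3f}")

# Disaggregation of repair cost by performance group, as in the paper's plots.
for pg in damage_state_probs:
    contrib = unit_cost[pg] * np.dot(damage_state_probs[pg], repair_quantity[pg])
    print(pg, round(contrib / expected_cost, 2))
```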


Sensitivity analysis of transient population dynamics

ECOLOGY LETTERS, Issue 1 2007
Hal Caswell
Abstract Short-term, transient population dynamics can differ in important ways from long-term asymptotic dynamics. Just as perturbation analysis (sensitivity and elasticity) of the asymptotic growth rate reveals the effects of the vital rates on long-term growth, the perturbation analysis of transient dynamics can reveal the determinants of short-term patterns. In this article, I present a completely new approach to transient sensitivity and elasticity analysis, using methods from matrix calculus. Unlike previous methods, this approach applies not only to linear time-invariant models but also to time-varying, subsidized, stochastic, nonlinear and spatial models. It is computationally simple, and does not require calculation of eigenvalues or eigenvectors. The method is presented along with applications to plant and animal populations. [source]
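One concrete way to compute transient sensitivities without eigenvalues or eigenvectors, for the simple linear time-invariant case, is the recursion s(t+1) = A s(t) + (dA/dθ) n(t) with s(0) = 0. The sketch below uses a hypothetical 2-stage matrix and a single scalar parameter; it is not Caswell's full matrix-calculus formulation, which also covers time-varying, subsidized, stochastic, nonlinear and spatial models.

```python
import numpy as np

def transient_sensitivity(A, dA_dtheta, n0, T):
    """Sensitivity of the transient population vector n(t) to one parameter theta,
    for the linear model n(t+1) = A n(t), via
        s(t+1) = A s(t) + (dA/dtheta) n(t),   s(0) = 0.
    No eigen-decomposition is needed."""
    n, s = n0.astype(float), np.zeros_like(n0, dtype=float)
    out = []
    for _ in range(T):
        s = A @ s + dA_dtheta @ n
        n = A @ n
        out.append((n.copy(), s.copy()))
    return out

# Toy 2-stage projection matrix; theta is the adult fertility entry A[0, 1].
A = np.array([[0.0, 1.5],
              [0.6, 0.8]])
dA = np.zeros_like(A); dA[0, 1] = 1.0       # derivative of A with respect to that entry
n0 = np.array([10.0, 5.0])
for t, (nt, st) in enumerate(transient_sensitivity(A, dA, n0, 5), 1):
    print(t, nt.round(2), st.round(2))       # population and its sensitivity over time
```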


The validation of some methods of notch fatigue analysis

FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 5 2000
Taylor
This paper is concerned with the testing and validation of certain methods of notch analysis which the authors have developed theoretically in earlier publications. These methods were developed for use with finite element (FE) analysis in order to predict the fatigue limits of components containing stress concentrations. In the present work we tested and compared these methods using data from standard notches taken from the literature, covering a range of notch geometries, loading types, R-ratios and materials: a total of 47 different data sets were analysed. The greatest predictive success was achieved with critical-distance methods known as the point, line and area methods: 94% of these predictions fell within 20% of the experimental fatigue limits. This was a significant improvement on previous methods of this kind, e.g. that of Klesnil and Lucas [(1980) Fatigue of Metallic Materials, Elsevier Science]. Methods based on the Smith and Miller [(1978) Int. J. Mech. Sci. 20, 201-206] concept of crack-like notches were successful in 42% of cases; they experienced difficulties dealing with very small notches, and could be improved by using an El Haddad-type correction factor, giving 87% success. An approach known as 'crack modelling' allowed the Smith and Miller method to be used with non-standard stress concentrations, where notch geometry is ill defined; this modification, with the same short-crack correction, had 68% success. It was concluded that the critical-distance approach is more accurate and can be more easily used to analyse components of complex shape; however, the crack modelling approach is sometimes preferable because it can be used with less mesh refinement. [source]
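As a hedged illustration of the critical-distance "point" and "line" methods mentioned in the abstract, the sketch below evaluates a stress-distance curve (as might be extracted from an FE model) at a distance L/2 from the notch root and averages it over 2L, then compares against the plain-specimen fatigue limit. The stress curve, critical distance L and fatigue limit are invented numbers.

```python
import numpy as np

def critical_distance_assessment(r, stress, L, fatigue_limit):
    """Point and line methods of the theory of critical distances, applied to a
    stress-distance curve sampled along a path from the notch root.
    r: distance from the notch root (uniformly spaced); stress: stress amplitude."""
    # Point method: stress at a distance L/2 from the notch root.
    sigma_point = np.interp(L / 2.0, r, stress)
    # Line method: stress averaged over a distance 2L from the notch root
    # (uniform grid, so a simple mean approximates the integral average).
    sigma_line = stress[r <= 2.0 * L].mean()
    return {
        "point": sigma_point, "point_predicts_failure": sigma_point >= fatigue_limit,
        "line": sigma_line, "line_predicts_failure": sigma_line >= fatigue_limit,
    }

# Hypothetical elastic stress decay ahead of a notch (MPa) and material data.
r = np.linspace(0.0, 2.0, 200)                  # mm
stress = 400.0 / (1.0 + 5.0 * r)                # illustrative stress-distance curve
print(critical_distance_assessment(r, stress, L=0.2, fatigue_limit=250.0))
```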


Using evidence for population stratification bias in combined individual- and family-level genetic association analyses of quantitative traits

GENETIC EPIDEMIOLOGY, Issue 5 2010
Lucia Mirea
Abstract Genetic association studies are generally performed either by examining differences in the genotype distribution between individuals or by testing for preferential allele transmission within families. In the absence of population stratification bias (PSB), integrated analyses of individual and family data can increase power to identify susceptibility loci [Abecasis et al., 2000. Am. J. Hum. Genet. 66:279-292; Chen and Lin, 2008. Genet. Epidemiol. 32:520-527; Epstein et al., 2005. Am. J. Hum. Genet. 76:592-608]. In existing methods, the presence of PSB is initially assessed by comparing results from between-individual and within-family analyses, and then combined analyses are performed only if no significant PSB is detected. However, this strategy requires specification of an arbitrary testing level αPSB, typically 5%, to declare PSB significance. As a novel alternative, we propose to directly use the PSB evidence in weights that combine results from between-individual and within-family analyses. The weighted approach generalizes previous methods by using a continuous weighting function that depends only on the observed P-value instead of a binary weight that depends on αPSB. Using simulations, we demonstrate that for quantitative trait analysis, the weighted approach provides a good compromise between type I error control and power to detect association in studies with few genotyped markers and limited information regarding population structure. Genet. Epidemiol. 34: 502-511, 2010. © 2010 Wiley-Liss, Inc. [source]
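The paper defines its own continuous weighting function of the PSB P-value; the sketch below is only a hedged illustration of the idea, using an assumed weight (the PSB P-value scaling the inverse-variance weight of the between-individual estimate) rather than the authors' function, and toy effect estimates.

```python
import numpy as np
from scipy import stats

def combined_association(beta_b, se_b, beta_w, se_w):
    """Combine a between-individual estimate (beta_b) and a within-family estimate
    (beta_w) of a genetic effect. The between-individual component is down-weighted
    by a continuous function of the PSB evidence rather than kept or dropped at a
    fixed alpha_PSB threshold (the particular weight used here is an assumption)."""
    # Population-stratification-bias evidence: do the two estimates differ?
    z_psb = (beta_b - beta_w) / np.sqrt(se_b**2 + se_w**2)
    p_psb = 2 * stats.norm.sf(abs(z_psb))
    # Illustrative weight: inverse-variance weight scaled by the PSB P-value.
    w_b = p_psb / se_b**2
    w_w = 1.0 / se_w**2
    beta = (w_b * beta_b + w_w * beta_w) / (w_b + w_w)
    var = (w_b**2 * se_b**2 + w_w**2 * se_w**2) / (w_b + w_w)**2
    return beta, np.sqrt(var), p_psb

print(combined_association(beta_b=0.30, se_b=0.05, beta_w=0.25, se_w=0.08))
```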


A Genetic Approach to Detecting Clusters in Point Data Sets

GEOGRAPHICAL ANALYSIS, Issue 3 2005
Jamison Conley
Spatial analysis techniques are widely used throughout geography. However, as the size of geographic data sets increases exponentially, limitations to the traditional methods of spatial analysis become apparent. To overcome some of these limitations, many algorithms for exploratory spatial analysis have been developed. This article presents both a new cluster detection method based on a genetic algorithm, and Programs for Cluster Detection, a toolkit application containing the new method as well as implementations of three established methods: Openshaw's Geographical Analysis Machine (GAM), case point-centered searching (proposed by Besag and Newell), and randomized GAM (proposed by Fotheringham and Zhan). We compare the effectiveness of cluster detection and the runtime performance of these four methods and Kulldorff's spatial scan statistic on a synthetic point data set simulating incidence of a rare disease among a spatially variable background population. The proposed method has faster average running times than the other methods and significantly reduces overreporting of the underlying clusters, thus reducing the user's postprocessing burden. Therefore, the proposed method improves upon previous methods for automated cluster detection. The results of our method are also compared with those of Map Explorer (MAPEX), a previous attempt to develop a genetic algorithm for cluster detection. The results of these comparisons indicate that our method overcomes many of the problems faced by MAPEX, thus, we believe, establishing that genetic algorithms can indeed offer a viable approach to cluster detection. [source]


Precise, Small Sample Size Determinations of Lithium Isotopic Compositions of Geological Reference Materials and Modern Seawater by MC-ICP-MS

GEOSTANDARDS & GEOANALYTICAL RESEARCH, Issue 1 2004
Alistair B. Jeffcoate
Li isotopic composition; silicate reference materials; seawater; MC-ICP-MS; Li standard
The Li isotope ratios of four international rock reference materials, USGS BHVO-2, GSJ JB-2, JG-2, JA-1 and modern seawater (Mediterranean, Pacific and North Atlantic) were determined using multi-collector inductively coupled plasma-mass spectrometry (MC-ICP-MS). These natural-sample reference materials were chosen to span a considerable range in Li isotope ratios and cover several different matrices in order to provide a useful benchmark for future studies. Our new analytical technique achieves significantly higher precision and reproducibility (< ±0.3‰, 2s) than previous methods, with the additional advantage of requiring very low sample masses of ca. 2 ng of Li. [source]


Hydrophobic Functional Group Initiated Helical Mesostructured Silica for Controlled Drug Release,

ADVANCED FUNCTIONAL MATERIALS, Issue 23 2008
Lei Zhang
Abstract In this paper a novel one-step synthetic pathway that controls both functionality and morphology of functionalized periodic helical mesostructured silicas by the co-condensation of tetraethoxysilane and hydrophobic organoalkoxysilane using achiral surfactants as templates is reported. In contrast to previous methods, the hydrophobic interaction between hydrophobic functional groups and the surfactant as well as the intercalation of hydrophobic groups into the micelles are proposed to lead to the formation of helical mesostructures. This study demonstrates that hydrophobic interaction and intercalation can promote the production of long cylindrical micelles, and that the formation of helical rod-like morphology is attributed to the spiral transformation from bundles of hexagonally-arrayed and straight rod-like composite micelles due to the reduction in surface free energy. It is also revealed that small amounts of mercaptopropyltrimethoxysilane, vinyltrimethoxysilane, and phenyltrimethoxysilane can cause the formation of helical mesostructures. Furthermore, the helical mesostructured silicas are employed as drug carriers for the release study of the model drug aspirin, and the results show that the drug release rate can be controlled by the morphology and helicity of the materials. [source]


Comparative study between two numerical methods for oxygen diffusion problem

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 8 2009
Vildan Gülkaç
Abstract Two approximate numerical solutions of the oxygen diffusion problem are defined using the three-time-level Crank-Nicolson equation and Gauss-Seidel iteration for the three-time-level implicit method. Oxygen diffusion in a sickle cell with simultaneous absorption is an important problem and has a wide range of medical applications. The problem is mathematically formulated through two different stages. At the first stage, the stable case having no oxygen transition in the isolated cell is examined, whereas at the second stage the moving boundary problem of oxygen absorbed by the tissues in the cell is examined. The results obtained by the three-time-level implicit method and by Gauss-Seidel iteration for the three-time-level implicit method show good agreement with previous methods (J. Inst. Appl. Math. 1972; 10:19-33; 1974; 13:385-398; 1978; 22:467-477). Copyright © 2008 John Wiley & Sons, Ltd. [source]
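A minimal sketch of one Crank-Nicolson step for a fixed-domain version of the oxygen diffusion equation (u_t = u_xx - 1 with a sealed surface at x = 0), omitting the moving-boundary tracking of the second stage; the grid size, time step and boundary treatment are illustrative choices, not the paper's scheme.

```python
import numpy as np

def crank_nicolson_step(u, dt, dx, absorption=1.0):
    """One Crank-Nicolson step for u_t = u_xx - absorption on a fixed grid,
    with a no-flux (sealed) boundary at x = 0 and u = 0 at the far end."""
    n = len(u)
    r = dt / (2.0 * dx**2)
    A = np.zeros((n, n)); B = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i-1:i+2] = [-r, 1 + 2*r, -r]
        B[i, i-1:i+2] = [ r, 1 - 2*r,  r]
    A[0, 0], A[0, 1] = 1 + 2*r, -2*r        # reflecting (no-flux) boundary at x = 0
    B[0, 0], B[0, 1] = 1 - 2*r,  2*r
    A[-1, -1] = 1.0                         # far boundary held at u = 0
    rhs = B @ u - dt * absorption
    rhs[-1] = 0.0
    return np.linalg.solve(A, rhs)

x = np.linspace(0.0, 1.0, 21)
u = 0.5 * (1.0 - x)**2                      # classical initial oxygen profile
for _ in range(10):
    u = crank_nicolson_step(u, dt=1e-3, dx=x[1] - x[0])
print(u.round(4))
```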


A study on the lumped preconditioner and memory requirements of FETI and related primal domain decomposition methods

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 13 2008
Yannis Fragakis
Abstract In recent years, domain decomposition methods (DDMs) have emerged as advanced solvers in several areas of computational mechanics. In particular, during the last decade, in the area of solid and structural mechanics, they reached a considerable level of advancement and were shown to be more efficient than popular solvers, like advanced sparse direct solvers. The present contribution follows the lines of a series of recent publications on the relationship between primal and dual formulations of DDMs. In some of these papers, the effort to unify primal and dual methods led to a family of DDMs that was shown to be more efficient than the previous methods. The present paper extends this work, presenting a new family of related DDMs, thus enriching the theory of the relations between primal and dual methods, with the primal methods, which correspond to the dual DDM that uses the lumped preconditioner. The paper also compares the numerical performance of the new methods with that of the previous ones and focuses particularly on memory requirement issues related to the use of the lumped preconditioner, suggesting a particularly memory-efficient formulation. Copyright © 2007 John Wiley & Sons, Ltd. [source]


An improved method of constructing a database of monthly climate observations and associated high-resolution grids

INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 6 2005
Timothy D. Mitchell
Abstract A database of monthly climate observations from meteorological stations is constructed. The database includes six climate elements and extends over the global land surface. The database is checked for inhomogeneities in the station records using an automated method that refines previous methods by using incomplete and partially overlapping records and by detecting inhomogeneities with opposite signs in different seasons. The method includes the development of reference series using neighbouring stations. Information from different sources about a single station may be combined, even without an overlapping period, using a reference series. Thus, a longer station record may be obtained and fragmentation of records reduced. The reference series also enables 1961-90 normals to be calculated for a larger proportion of stations. The station anomalies are interpolated onto a 0.5° grid covering the global land surface (excluding Antarctica) and combined with a published normal from 1961-90. Thus, climate grids are constructed for nine climate variables (temperature, diurnal temperature range, daily minimum and maximum temperatures, precipitation, wet-day frequency, frost-day frequency, vapour pressure, and cloud cover) for the period 1901-2002. This dataset is known as CRU TS 2.1 and is publicly available (http://www.cru.uea.ac.uk/). Copyright © 2005 Royal Meteorological Society [source]


A unifying co-operative web caching architecture

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 6 2002
Abdullah Abonamah
Abstract Network caching of objects has become a standard way of reducing network traffic and latency on the web. However, web caches exhibit poor performance, with a hit rate of about 30%. A solution to improve this hit rate is to have a group of proxies form a co-operative in which objects can be cached for later retrieval. A co-operative cache system includes protocols for hierarchical and transversal caching. The drawback of such a system lies in the resulting network load due to the number of messages that need to be exchanged to locate an object. This paper proposes a new co-operative web caching architecture, which unifies previous methods of web caching. Performance results show that the architecture achieves up to a 70% co-operative hit rate and accesses the cached object in at most two hops. Moreover, the architecture is scalable with low traffic and database overhead. Copyright © 2002 John Wiley & Sons, Ltd. [source]
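A toy sketch of the hierarchical-plus-transversal co-operation described in the abstract (not the paper's actual protocol or message format): a proxy checks its own cache, then its siblings, then its parent, so in this two-level example a cached object is located in at most two hops.

```python
class CoopProxy:
    """Toy co-operative web cache: local lookup, then siblings (transversal),
    then the parent (hierarchical)."""
    def __init__(self, name, parent=None):
        self.name, self.parent, self.siblings, self.store = name, parent, [], {}

    def lookup(self, url):
        if url in self.store:                       # local hit, 0 hops
            return self.store[url], self.name
        for sib in self.siblings:                   # transversal hit, 1 hop
            if url in sib.store:
                return sib.store[url], sib.name
        if self.parent is not None:                 # hierarchical hit, at most 2 hops here
            return self.parent.lookup(url)
        obj = f"<fetched {url} from origin>"        # miss: fetch and cache
        self.store[url] = obj
        return obj, "origin"

root = CoopProxy("root")
a, b = CoopProxy("proxy-a", parent=root), CoopProxy("proxy-b", parent=root)
a.siblings, b.siblings = [b], [a]
b.store["http://example.org/page"] = "<cached page>"
print(a.lookup("http://example.org/page"))          # served by sibling proxy-b
```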


Fourier analysis methodology of trabecular orientation measurement in the human tibial epiphysis

JOURNAL OF ANATOMY, Issue 2 2001
M. HERRERA
Methods to quantify trabecular orientation are crucial in order to assess the exact trajectory of trabeculae in anatomical and histological sections. Specific methods for evaluating trabecular orientation include the 'point counting' technique (Whitehouse, 1974), manual tracing of trabecular outlines on a digitising board (Whitehouse, 1980), textural analysis (Veenland et al. 1998), graphic representation of vectors (Shimizu et al. 1993; Kamibayashi et al. 1995) and both mathematical (Geraets, 1998) and fractal analysis (Millard et al. 1998). Optical and computer-assisted methods to detect trabecular orientation of bone using the Fourier transform were introduced by Oxnard (1982) and later refined by Kuo & Carter (1991) (see also Oxnard, 1993, for a review), in the analysis of planar sections of vertebral bodies as well as in planar radiographs of cancellous bone in the distal radius (Wigderowitz et al. 1997). At present no studies have applied this technique to 2-D images or to the study of dried bones. We report a universal computer-automated technique for assessing the preferential orientation of the tibial subarticular trabeculae based on Fourier analysis, emphasis being placed on the search for improvements in accuracy over previous methods and applied to large stereoscopic (2-D) fields of anatomical sections of dried human tibiae. Previous studies on the trajectorial architecture of the tibial epiphysis (Takechi, 1977; Maquet, 1984) and research data about trabecular orientation (Kamibayashi et al. 1995) have not employed Fourier analysis. [source]
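As a hedged sketch of the Fourier-based idea (not the authors' exact processing pipeline), the function below estimates a dominant trabecular orientation from the angular distribution of 2-D FFT power of a binary image, and checks itself on a synthetic stripe pattern.

```python
import numpy as np

def dominant_orientation(image, n_bins=36):
    """Estimate the preferred orientation (degrees, 0-180) of a binary trabecular
    image from the anisotropy of its 2-D Fourier power spectrum."""
    F = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    power = np.abs(F) ** 2
    h, w = image.shape
    y, x = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    angles = np.degrees(np.arctan2(y, x)) % 180.0          # spectrum is symmetric
    bins = (angles / (180.0 / n_bins)).astype(int) % n_bins
    energy = np.bincount(bins.ravel(), weights=power.ravel(), minlength=n_bins)
    peak_bin = int(np.argmax(energy))
    # Structures oriented at angle theta concentrate spectral energy at theta + 90 deg.
    return (peak_bin * 180.0 / n_bins + 90.0) % 180.0

# Synthetic check: stripes whose direction vector is (1, 1) in (row, col) coordinates,
# i.e. roughly 45 degrees under the convention used above.
yy, xx = np.mgrid[0:128, 0:128]
stripes = (np.sin(2 * np.pi * (xx - yy) / 16.0) > 0).astype(float)
print(dominant_orientation(stripes))
```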


On the impact of uncorrelated variation in regression mathematics

JOURNAL OF CHEMOMETRICS, Issue 11-12 2008
Johan Gottfries
Abstract The objective of the present study is to investigate if, and if so, how uncorrelated variation relates to regression mathematics as exemplified by partial least squares (PLS) methodology. In contrast to previous methods, the orthogonal partial least squares (OPLS) method requires a dual focus, in the sense that in parallel to the calculation of correlation it requires a comprehensive analysis of the orthogonal variation, i.e. the uncorrelated structure. Subsequent to the estimation of the correlation, the remaining orthogonal variation, i.e. the uncorrelated data, is divided into uncorrelated structure and stochastic noise by the 'OPLS component'. Thus, it appears obvious that it is of interest to understand how the uncorrelated variation can influence the interpretation of the regression model. We have scrutinized three examples that pinpoint the additional value of OPLS regarding the modelling of the orthogonal, i.e. uncorrelated, variation in regression mathematics. In agreement with the results, we conclude that uncorrelated variation does impact the interpretation of regression analysis output, and that OPLS provides not only opportunities but also an obligation for the user to maximize the benefit from OPLS. Copyright © 2008 John Wiley & Sons, Ltd. [source]
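A minimal single-component sketch of the orthogonal filtering step behind OPLS (removal of one Y-orthogonal 'OPLS component' from X), assuming mean-centred data; it follows the standard orthogonal-projection construction rather than anything specific to this paper's three examples, and the data are synthetic.

```python
import numpy as np

def opls_filter(X, y):
    """Remove one Y-orthogonal component from X. The filtered matrix keeps the
    variation correlated with y and strips one source of structured,
    uncorrelated (orthogonal) variation."""
    w = X.T @ y
    w /= np.linalg.norm(w)                  # predictive weight direction
    t = X @ w
    p = X.T @ t / (t @ t)                   # loading of that score
    w_orth = p - (w @ p) * w                # part of the loading orthogonal to w
    w_orth /= np.linalg.norm(w_orth)
    t_orth = X @ w_orth                     # orthogonal (uncorrelated) score
    p_orth = X.T @ t_orth / (t_orth @ t_orth)
    return X - np.outer(t_orth, p_orth), t_orth, p_orth

rng = np.random.default_rng(3)
n, p_vars = 30, 8
y = rng.normal(size=n)
ortho = rng.normal(size=n); ortho -= ortho @ y / (y @ y) * y   # variation uncorrelated with y
X = np.outer(y, rng.normal(size=p_vars)) + np.outer(ortho, rng.normal(size=p_vars))
X += 0.05 * rng.normal(size=(n, p_vars))
X -= X.mean(axis=0)                         # mean-centre, as usual before PLS/OPLS

Xf, t_o, _ = opls_filter(X, y - y.mean())
print("corr(t_orth, y):", round(np.corrcoef(t_o, y)[0, 1], 3))          # near zero by construction
print("||X|| before/after filtering:", round(np.linalg.norm(X), 2), round(np.linalg.norm(Xf), 2))
```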


A tetrahedron approach for a unique closed-form solution of the forward kinematics of six-dof parallel mechanisms with multiconnected joints

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 6 2002
Se-Kyong Song
This article presents a new formulation approach that uses tetrahedral geometry to determine a unique closed-form solution of the forward kinematics of six-dof parallel mechanisms with multiconnected joints. For six-dof parallel mechanisms that have been known to have eight solutions, the proposed formulation, called the Tetrahedron Approach, can find a unique closed-form solution of the forward kinematics using the three proposed Tetrahedron properties. While previous methods to solve the forward kinematics involve complicated algebraic manipulation of the matrix elements of the orientation of the moving platform, or closed-loop constraint equations between the moving and the base platforms, the Tetrahedron Approach piles up tetrahedrons sequentially to directly solve the forward kinematics. Hence, it allows significant abbreviation in the formulation and provides an easier systematic way of obtaining a unique closed-form solution. © 2002 Wiley Periodicals, Inc. [source]
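The closed-form building block of a tetrahedron-based formulation is finding an apex point from three known vertices and three distances (classical trilateration); the sketch below shows that step only, on invented coordinates, not the full forward-kinematics solution for a six-dof mechanism with multiconnected joints.

```python
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3, upper=True):
    """Closed-form apex of a tetrahedron: the point at distances r1, r2, r3 from
    the three known base vertices p1, p2, p3 (standard sphere-intersection formula)."""
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)
    i = ex @ (p3 - p1)
    ey = p3 - p1 - i * ex; ey /= np.linalg.norm(ey)
    ez = np.cross(ex, ey)
    d = np.linalg.norm(p2 - p1)
    j = ey @ (p3 - p1)
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2) / (2 * j) - (i / j) * x
    z2 = r1**2 - x**2 - y**2
    z = np.sqrt(max(z2, 0.0)) * (1.0 if upper else -1.0)   # two mirror-image solutions
    return p1 + x * ex + y * ey + z * ez

# Verify on a known apex.
base = [np.array(v, float) for v in [(0, 0, 0), (2, 0, 0), (0, 2, 0)]]
apex = np.array([0.5, 0.7, 1.3])
r = [np.linalg.norm(apex - b) for b in base]
print(trilaterate(*base, *r))    # recovers the apex (upper solution)
```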


Finding a serial burglar's home using distance decay and conditional origin-destination patterns: a test of empirical Bayes journey-to-crime estimation in The Hague

JOURNAL OF INVESTIGATIVE PSYCHOLOGY AND OFFENDER PROFILING, Issue 3 2009
Richard Block
Abstract Can we tell where an offender lives from where he or she commits crimes? Journey-to-crime estimation is a tool that uses crime locations to tell us where to search for a serial offender's home. In this paper, we test a new method: empirical Bayes journey-to-crime estimation. It differs from previous methods because it utilises an 'origin-destination' rule in addition to the 'distance decay' rule that prior methods have used. In the new method, the profiler not only asks 'what distances did previous offenders travel between their home and the crime scenes?' but also 'where did previous offenders live who offended at the locations included in the crime series I investigate right now?'. The new method could not only improve predictive accuracy, it could also reduce the traditional distinction between marauding and commuting offenders. Utilising the CrimeStat software, we apply the new method to 62 serial burglars in The Hague, The Netherlands, and show that the new method has higher predictive accuracy than methods that only exploit a distance decay rule. The new method not only improves the accuracy of predicting the homes of commuters (offenders who live outside their offending area), it also improves the search for marauders (offenders who live inside their offending area). After presenting an example of the application of the technique for prediction of a specific burglar, we discuss the limitations of the method and offer some suggestions for its future development. Copyright © 2009 John Wiley & Sons, Ltd. [source]
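A hedged sketch of the idea of combining a distance-decay likelihood with an origin-destination prior over candidate home cells; the actual empirical Bayes method in CrimeStat estimates both components from previously solved cases, whereas the grid, decay kernel, flat prior and crime locations below are invented.

```python
import numpy as np

def journey_to_crime_surface(grid_xy, crime_xy, od_prior, decay_scale=2.0):
    """Score candidate home cells for a crime series by combining
    (a) a distance-decay likelihood of each crime given a home cell and
    (b) an origin-destination prior: where offenders hitting these areas lived before."""
    scores = np.log(od_prior + 1e-12)                    # prior term per candidate cell
    for cx, cy in crime_xy:
        d = np.hypot(grid_xy[:, 0] - cx, grid_xy[:, 1] - cy)
        scores = scores - d / decay_scale                # log of an exponential decay kernel
    post = np.exp(scores - scores.max())
    return post / post.sum()                             # normalised search surface

# Toy 10 x 10 km grid of candidate home cells.
xs, ys = np.meshgrid(np.arange(10), np.arange(10))
grid = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
od_prior = np.full(len(grid), 1.0 / len(grid))           # flat prior as a placeholder
crimes = [(3.0, 4.0), (4.5, 4.0), (3.5, 5.5)]            # hypothetical burglary locations
surface = journey_to_crime_surface(grid, crimes, od_prior)
print(grid[np.argmax(surface)])                          # highest-priority cell to search
```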


Simple and high radiochemical yield synthesis of 2'-Deoxy-2'-[18F]fluorouridine via a new nosylate precursor

JOURNAL OF LABELLED COMPOUNDS AND RADIOPHARMACEUTICALS, Issue 14 2006
Se Hun Kang
Abstract We synthesized 2'-deoxy-2'-[18F]fluorouridine (7) as a radiotracer for positron emission tomography from a new nosylate precursor (6). This new precursor was synthesized from uridine in four steps. The overall synthetic yield was 9.4%, and the precursor showed high stability (>98% purity) for up to 6 months at 4°C. The optimal manual [18F]fluorination conditions were 30 mg of the precursor 6 in 500 µl of acetonitrile at 145°C for 15 min with 370 MBq of [18F]fluoride. The [18F]fluorination yield was 76.5±2.7% (n = 3). After hydrolysis of protecting groups with 1 N HCl and purification by HPLC, the overall radiochemical yield and purity were 26.5±1.4% and 98.2±2.5%, respectively. The preparation time was 70.0±10.5 min (n = 3 for each result). We also developed an automated method with a radiochemical yield and purity of 24.0±2.8% and 98.0±1.5% (n = 10) using a GE TracerLab MX chemistry module. This new nosylate precursor for 2'-deoxy-2'-[18F]fluorouridine synthesis showed higher radiochemical yields and reproducibility than previous methods. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Wavelet analysis for detecting anisotropy in point patterns

JOURNAL OF VEGETATION SCIENCE, Issue 2 2004
Michael S. Rosenberg
Although many methods have been proposed for analysing point locations for spatial pattern, these methods have concentrated on clumping and spacing. The study of anisotropy (changes in spatial pattern with direction) in point patterns has been limited by a lack of methods explicitly designed for these data and this purpose; researchers have been constrained to choosing arbitrary test directions or converting their data into quadrat counts and using methods designed for continuously distributed data. Wavelet analysis, a booming approach to studying spatial pattern that is widely used in mathematics and physics for signal analysis, has started to make its way into the ecological literature. A simple adaptation of wavelet analysis is proposed for the detection of anisotropy in point patterns. The method is illustrated with both simulated and field data. This approach can easily be used for both global and local spatial analysis. [source]