Very High Resolution
Selected Abstracts

Very high resolution interpolated climate surfaces for global land areas
INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 15 2005. Robert J. Hijmans
Abstract: We developed interpolated climate surfaces for global land areas (excluding Antarctica) at a spatial resolution of 30 arc s (often referred to as 1-km spatial resolution). The climate elements considered were monthly precipitation and mean, minimum, and maximum temperature. Input data were gathered from a variety of sources and, where possible, were restricted to records from the 1950-2000 period. We used the thin-plate smoothing spline algorithm implemented in the ANUSPLIN package for interpolation, using latitude, longitude, and elevation as independent variables. We quantified uncertainty arising from the input data and the interpolation by mapping weather station density, elevation bias in the weather stations, and elevation variation within grid cells, and through data partitioning and cross-validation. Elevation bias tended to be negative (stations lower than expected) at high latitudes but positive in the tropics. Uncertainty is highest in mountainous and in poorly sampled areas. Data partitioning showed high uncertainty of the surfaces on isolated islands, e.g. in the Pacific. Aggregating the elevation and climate data to 10 arc min resolution showed an enormous variation within grid cells, illustrating the value of high-resolution surfaces. A comparison with an existing data set at 10 arc min resolution showed overall agreement, but with significant variation in some regions. A comparison with two high-resolution data sets for the United States also identified areas with large local differences, particularly in mountainous areas. Compared to previous global climatologies, ours has the following advantages: the data are at a higher spatial resolution (400 times greater or more); more weather station records were used; improved elevation data were used; and more information about spatial patterns of uncertainty in the data is available. Owing to the overall low density of available climate stations, our surfaces do not capture all of the variation that may occur at a resolution of 1 km, particularly for precipitation in mountainous areas. In future work, such variation might be captured through knowledge-based methods and the inclusion of additional covariates, particularly layers obtained through remote sensing. Copyright © 2005 Royal Meteorological Society. [source]
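The interpolation approach described above (a trivariate thin-plate smoothing spline with latitude, longitude, and elevation as independent variables, validated by data partitioning) can be illustrated with a small sketch. The published surfaces were produced with ANUSPLIN; the code below is only a minimal stand-in using SciPy's thin-plate-spline interpolator, and the station coordinates, elevations, temperatures, and smoothing value are all hypothetical.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical station records: (latitude, longitude, elevation in m) -> mean July temperature (degC).
stations = np.array([
    [46.2, 7.3, 1500.0],
    [46.5, 7.8,  550.0],
    [45.9, 6.9, 2100.0],
    [46.8, 7.1,  400.0],
    [46.1, 8.0, 1200.0],
    [46.6, 7.5,  900.0],
])
temps = np.array([11.5, 18.2, 7.9, 19.4, 13.8, 16.0])

# Thin-plate smoothing spline with lat, lon, and elevation as independent variables
# (a stand-in for the ANUSPLIN trivariate spline used in the paper).
spline = RBFInterpolator(stations, temps, kernel="thin_plate_spline", smoothing=1.0)

# Predict at a hypothetical 30 arc-second grid-cell centre with known elevation.
grid_point = np.array([[46.3, 7.4, 1750.0]])
print("interpolated temperature:", spline(grid_point)[0])

# Leave-one-out cross-validation, a simple analogue of the paper's data partitioning.
errors = []
for i in range(len(temps)):
    keep = np.arange(len(temps)) != i
    fit = RBFInterpolator(stations[keep], temps[keep],
                          kernel="thin_plate_spline", smoothing=1.0)
    errors.append(fit(stations[i:i + 1])[0] - temps[i])
print("leave-one-out RMSE:", np.sqrt(np.mean(np.square(errors))))
```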
Sensitivity of multi-coil frequency domain electromagnetic induction sensors to map soil magnetic susceptibility
EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 4 2010. D. Simpson
Abstract: Magnetic susceptibility is an important indicator of anthropogenic disturbance in the natural soil. This property is often mapped with magnetic gradiometers in archaeological prospection studies. It is also detected with frequency domain electromagnetic induction (FDEM) sensors, which have the advantage that they can simultaneously measure the electrical conductivity. The detection level of FDEM sensors for magnetic structures is strongly dependent on the coil configuration. Apart from theoretical modelling studies, a thorough investigation with field models has not been conducted until now. Therefore, the goal of this study was to test multiple coil configurations on a test field with naturally enhanced magnetic susceptibility in the topsoil and with different types of structures mimicking real archaeological features. Two FDEM sensors were used, with coil separations between 0.5 and 2 m and with three coil orientations. First, a vertical sounding was conducted over the undisturbed soil to test the validity of a theoretical layered model, which can be used to infer the depth sensitivity of the coil configurations. The modelled sounding values corresponded well with the measured data, which means that the theoretical models are applicable to layered soils. Second, magnetic structures were buried in the site and the resulting anomalies were measured at very high resolution. The results showed remarkable differences in amplitude and complexity between the responses of the coil configurations. The 2-m horizontal coplanar and 1.1-m perpendicular coil configurations produced the clearest anomalies and best resembled a gradiometer measurement. [source]

TICL - a web tool for network-based interpretation of compound lists inferred by high-throughput metabolomics
FEBS JOURNAL, Issue 7 2009. Alexey V. Antonov
Abstract: High-throughput metabolomics is a dynamically developing technology that enables the mass separation of complex mixtures at very high resolution. Metabolic profiling has begun to be widely used in clinical research to study the molecular mechanisms of complex cell disorders. Similar to transcriptomics, which is capable of detecting genes at differential states, metabolomics is able to deliver a list of compounds differentially present between the explored cell physiological conditions. The bioinformatics challenge lies in a statistically valid interpretation of the functional context for identified sets of metabolites. Here, we present TICL, a web tool for the automatic interpretation of lists of compounds. The major advance of TICL is that it not only provides a model of possible compound transformations related to the input list, but also implements a robust statistical framework to estimate the significance of the inferred model. The TICL web tool is freely accessible at http://mips.helmholtz-muenchen.de/proj/cmp. [source]

Hypothesis testing in distributed source models for EEG and MEG data
HUMAN BRAIN MAPPING, Issue 2 2006. Lourens J. Waldorp
Abstract: Hypothesis testing in distributed source models for the electro- or magnetoencephalogram is generally performed for each voxel separately. Derived from the analysis of functional magnetic resonance imaging data, such a statistical parametric map (SPM) ignores the spatial smoothing involved in hypothesis testing with distributed source models. For example, when intending to test a single voxel, an entire region of voxels is actually tested simultaneously. Because there are more parameters than observations, constraints are typically employed to arrive at a solution, and these spatially smooth the solution. If this is ignored, it can be concluded from the hypothesis test that there is activity at some location where there is none. In addition, an SPM on distributed source models gives the illusion of very high resolution. As an alternative, a multivariate approach is suggested in which a spatially smooth region of interest is tested. In simulations with MEG and EEG it is shown that clear hypothesis testing in distributed source models is possible, provided that there is high correspondence between what is intended to be tested and what is actually tested. The approach is also illustrated by an application to data from an experiment measuring visual evoked fields during the presentation of checkerboard patterns. Hum Brain Mapp, 2005. © 2005 Wiley-Liss, Inc. [source]
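To make the contrast between voxel-wise SPM-style testing and the multivariate region-of-interest test concrete, here is a minimal sketch using a one-sample Hotelling T-squared statistic over a small set of source locations. This illustrates the general multivariate idea only, not the authors' estimator; the simulated amplitudes, region size, and noise level are invented for the example.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(0)

# Simulated source amplitudes: n trials x p voxels in one spatially smooth region of interest.
n_trials, p_voxels = 40, 5
true_activity = np.array([0.0, 0.2, 0.4, 0.2, 0.0])   # smooth activation profile (hypothetical)
data = true_activity + rng.normal(scale=0.5, size=(n_trials, p_voxels))

# Voxel-wise tests (SPM-style): one t statistic per voxel, ignoring the spatial smoothing.
t_per_voxel = data.mean(axis=0) / (data.std(axis=0, ddof=1) / np.sqrt(n_trials))
print("voxel-wise t statistics:", np.round(t_per_voxel, 2))

# Multivariate test over the whole region: one-sample Hotelling T^2 against zero activity.
xbar = data.mean(axis=0)
S = np.cov(data, rowvar=False)                     # p x p sample covariance
t2 = n_trials * xbar @ np.linalg.solve(S, xbar)
f_stat = (n_trials - p_voxels) / (p_voxels * (n_trials - 1)) * t2
p_value = f_dist.sf(f_stat, p_voxels, n_trials - p_voxels)
print(f"Hotelling T^2 = {t2:.2f}, F = {f_stat:.2f}, p = {p_value:.4f}")
```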
A box scheme for transcritical flow
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 8 2002. T. C. Johnson
Abstract: The accurate computer simulation of river and pipe flow is of great importance in the design of urban drainage networks. The use of implicit numerical schemes allows the time step to be chosen on the basis of accuracy rather than stability, offering a potential computational saving over explicit methods. The highly successful box scheme is an implicit method which can be used to model a wide range of subcritical and supercritical flows. However, care must be taken over the modelling of transcritical flows since, unless the correct internal boundary conditions are imposed, the scheme becomes unstable. The necessity of accurately tracking all the critical interfaces and treating them accordingly can be algorithmically complex, and in practice the underlying mathematical model is often modified to ensure that the flow remains essentially subcritical. Such a modification, however, inevitably leads to additional errors, and incorrect qualitative behaviour can be observed. In this paper we show how the technique of 'residual distribution' can be successfully implemented in order to accurately model unsteady transcritical flow without the need to know a priori which regions of the computational domain correspond to subcritical and supercritical flow. When used in conjunction with a form of artificial smoothing, the resulting method generates very high resolution results even for transcritical problems involving shocks, as can be seen in the numerical results. Copyright © 2002 John Wiley & Sons, Ltd. [source]
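The box scheme referred to here is the implicit Preissmann/Keller-type stencil that averages the unknowns over a space-time box. The sketch below applies that stencil to the much simpler scalar advection equation u_t + a u_x = 0, purely to show how the implicit update can be swept from the upstream boundary and why the time step is not limited by an explicit CFL condition. It is not the shallow-water residual-distribution method of the paper; the grid, wave speed, and initial profile are invented.

```python
import numpy as np

def box_scheme_advection(u0, a, dx, dt, n_steps, inflow):
    """Implicit Preissmann/Keller box scheme for u_t + a*u_x = 0 with a > 0.

    Averaging over the space-time box gives a bidiagonal implicit system,
    solved here by a single left-to-right sweep from the inflow boundary.
    """
    r = a * dt / dx                      # Courant number; no stability restriction on r
    u = u0.copy()
    for _ in range(n_steps):
        new = np.empty_like(u)
        new[0] = inflow                  # upstream boundary condition
        for j in range(len(u) - 1):
            new[j + 1] = ((1.0 + r) * u[j] + (1.0 - r) * u[j + 1]
                          - (1.0 - r) * new[j]) / (1.0 + r)
        u = new
    return u

# Hypothetical setup: a smooth bump advected to the right at speed a = 1.
x = np.linspace(0.0, 10.0, 201)
u_init = np.exp(-((x - 2.0) ** 2))
# dt chosen so that the Courant number is 2, i.e. above the explicit CFL limit.
u_final = box_scheme_advection(u_init, a=1.0, dx=x[1] - x[0], dt=0.1, n_steps=30, inflow=0.0)
print("peak moved from x =", x[np.argmax(u_init)], "to x =", x[np.argmax(u_final)])
```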
Evaluation of double-crystal SANS data influenced by multiple scattering
JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 3-1 2000. Šaroun
Evaluation of small-angle neutron scattering (SANS) data is often complicated by multiple scattering effects if large particles of relatively high volume fraction have to be studied and dilution or contrast reduction is impossible. The use of pin-hole SANS instruments is often limited by the contradictory requirements of high resolution and the short wavelength needed to keep the scattering contrast as low as possible. Double-crystal (DC) SANS diffractometers of Bonse-Hart and bent-crystal type are useful alternatives in such cases, as they permit reaching very high resolution with thermal neutrons. A method for SANS data evaluation suited to DC instruments is presented. It includes the common scheme of the indirect Fourier transformation method, but takes multiple scattering into account. The scattering medium is described by the frequency function g(x), defined as the cosine Fourier transform of slit-smeared data. Although a simplistic model of polydisperse spheres is used to represent g(x), the resulting g(x) function and some integral parameters are independent of this model. Tests on simulated data show that the method reproduces the true values of microstructural parameters well, though systematic errors are observed in cases where the unscattered part of the incident beam completely disappears. If the scattering power

Lipid bilayers: an essential environment for the understanding of membrane proteins
MAGNETIC RESONANCE IN CHEMISTRY, Issue S1 2007. Richard C. Page
Abstract: Membrane protein structure and function are critically dependent on the surrounding environment. Consequently, utilizing a membrane mimetic that adequately models the native membrane environment is essential. A range of membrane mimetics is available, but none generates a better model of the native aqueous, interfacial, and hydrocarbon core environments than synthetic lipid bilayers. Transmembrane α-helices are very stable in lipid bilayers because of the low water content and low dielectric environment within the bilayer hydrocarbon core, which strengthens intrahelical hydrogen bonds and hinders structural rearrangements within the transmembrane helices. Recent evidence from solid-state NMR spectroscopy illustrates that transmembrane α-helices, both in peptides and in full-length proteins, appear to be highly uniform, based on the observation of resonance patterns in PISEMA spectra. Here, we quantitate for the first time through simulations what we mean by highly uniform structures. Indeed, helices in transmembrane peptides appear to have backbone torsion angles that are uniform within ±4°. While individual helices can be structurally stable due to intrahelical hydrogen bonds, interhelical interactions within helical bundles can be weak and nonspecific, resulting in multiple packing arrangements. Some helical bundles have the capacity, through their amino acid composition, for hydrogen bonding and electrostatic interactions that stabilize the interhelical conformations, and solid-state NMR data are shown here for both of these situations. Solid-state NMR spectroscopy is unique among the techniques capable of determining three-dimensional structures of proteins in that it provides the ability to characterize the structure of membrane proteins at very high resolution in liquid crystalline lipid bilayers. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Numerical prediction of severe convection: comparison with operational forecasts
METEOROLOGICAL APPLICATIONS, Issue 1 2003. Milton S. Speer
The prediction of severe convection is a major forecasting problem in Australia during the summer months. In particular, severe convection in the Sydney basin frequently produces heavy rain or hail, flash flooding, and destructive winds. Convective activity is a forecasting challenge for the Sydney basin mainly from October to April. Currently, there is a need for improved numerical model guidance to supplement the official probabilistic convective outlooks issued by the operational forecasters. In this study we assess the performance of a very high resolution (2 km) numerical weather prediction (NWP) model in terms of how well it provided guidance on heavy rainfall and hail, as well as on other mesoscale features such as low-level convergence lines. Two cases are described in which the operational forecasts were incorrect. Non-severe thunderstorms were predicted on 1 December 2000, but severe convection occurred. Severe convection was predicted on 8 December 2000, but no convection was reported. In contrast, the numerical model performed well, accurately predicting severe convection on 1 December and no convection on 8 December. These results have encouraged a program aimed at providing an enhanced numerical modelling capability to the operational forecasters for the Sydney basin. Copyright © 2003 Royal Meteorological Society [source]
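The two cases above map directly onto the standard categorical verification scores used for yes/no severe-weather guidance (probability of detection, false alarm ratio, critical success index). The sketch below is not a method from the paper, and a two-day sample is far too small for meaningful scores; it is included only to show the bookkeeping behind such forecast-versus-observation comparisons.

```python
from collections import Counter

def categorical_scores(pairs):
    """POD, FAR, and CSI from (forecast severe?, observed severe?) pairs."""
    counts = Counter(pairs)
    hits = counts[(True, True)]
    misses = counts[(False, True)]
    false_alarms = counts[(True, False)]
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else float("nan")
    return pod, far, csi

# The two reported days, encoded as (forecast severe?, observed severe?) pairs.
operational = [(False, True),   # 1 December 2000: non-severe forecast, severe convection occurred
               (True, False)]   # 8 December 2000: severe forecast, no convection reported
model = [(True, True),          # 1 December 2000: model predicted severe convection
         (False, False)]        # 8 December 2000: model predicted no convection

print("operational POD/FAR/CSI:", categorical_scores(operational))
print("2-km model POD/FAR/CSI:", categorical_scores(model))
```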
Processing of soft pupae and uneclosed pharate adults of Drosophila for scanning electron microscopy
MICROSCOPY RESEARCH AND TECHNIQUE, Issue 12 2007. Milan Be
Abstract: For over four decades, scanning electron microscopy (SEM) has been used in research involving Drosophila genetics and developmental biology. It allows for observation and documentation of the gross morphology of exoskeletal structures as well as their characterization at very high resolution. In most cases, SEM in Drosophila has been limited to imaging adult heads, thoraces, appendages, and embryos, as these structures are relatively hard and/or easy to process for SEM. In contrast, the structures of the pharate adult stages are difficult to prepare for SEM because their integument is quite soft, they are extremely dirty, and they are resistant to the usual processing methods. Here, we present an innovative method to prepare these types of structures. Our protocol efficiently removes extraneous material originating from the exuvial fluid of pharate adults and uses a hydrophobic expansion step to keep the soft exoskeleton of the body inflated. In addition to using immersion fixation, it utilizes fixation within the body that occurs via a reaction between osmium tetroxide and alcohols that are infiltrated into the body during the hydrophobic expansion step. This novel approach results in a properly inflated integument that retains its shape in subsequent procedures. Our method provides a useful, general alternative for processing difficult samples, including soft, biological "whole-mount" specimens and samples that are extremely dirty or resistant to fixative penetration. Microsc. Res. Tech., 2007. © 2007 Wiley-Liss, Inc. [source]

Crystallographically oriented high resolution lithography of graphene nanoribbons by STM lithography
PHYSICA STATUS SOLIDI (B) BASIC SOLID STATE PHYSICS, Issue 4 2010. G. Dobrik
Abstract: Due to its exciting physical properties and sheet-like geometry, graphene is in the focus of attention both from the point of view of basic science and of potential applications. In order to fully exploit the advantage of the sheet-like geometry, very high resolution, crystallographically controlled lithography has to be used. Graphene is a zero-gap semiconductor, so that a field effect transistor (FET) will not have an "off" state unless a forbidden gap is created. Such a gap can be produced by confining the electronic wave functions, i.e. by etching narrow graphene nanoribbons (GNRs), typically a few nanometers in width and with well-defined crystallographic orientation. We developed the first lithographic method able to achieve GNRs that have both nanometer widths and well-defined crystallographic orientation. The lithographic process is carried out by the local oxidation of the sample surface under the tip of a scanning tunneling microscope (STM). The crystallographic orientation is defined by acquiring atomic resolution images of the surface to be patterned. The cutting of trenches of controlled depth and a few nanometers in width, as well as the folding and manipulation of single graphene layers, is demonstrated. The narrowest GNR cut by our method is 2.5 nm wide; scanning tunneling spectroscopy (STS) showed that it has a gap of 0.5 eV, comparable to that of germanium, which allows room-temperature operation of graphene nanodevices. [source]
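A rough sense of how the confinement gap scales with ribbon width can be obtained from the commonly used inverse-width approximation E_g ≈ a/w. Calibrating the constant to the single data point reported above (0.5 eV at 2.5 nm) gives a ≈ 1.25 eV nm. The snippet below is only a back-of-the-envelope estimate under that assumed scaling, not a result from the paper.

```python
# Back-of-the-envelope estimate of the confinement gap of a graphene nanoribbon,
# assuming an inverse-width scaling E_g ~ a / w with the constant calibrated to
# the single data point reported above (0.5 eV gap at 2.5 nm width).
A_EV_NM = 0.5 * 2.5   # ~1.25 eV*nm (assumed calibration)

def estimated_gap_ev(width_nm: float) -> float:
    """Estimated band gap (eV) for a ribbon of the given width (nm)."""
    return A_EV_NM / width_nm

def width_for_gap_nm(target_gap_ev: float) -> float:
    """Ribbon width (nm) needed for a target gap under the same 1/w assumption."""
    return A_EV_NM / target_gap_ev

if __name__ == "__main__":
    for w in (2.5, 5.0, 10.0):
        print(f"width {w:4.1f} nm -> estimated gap {estimated_gap_ev(w):.2f} eV")
    # A gap comparable to silicon (~1.1 eV) would need a ribbon roughly this narrow:
    print(f"~1.1 eV gap -> width ~{width_for_gap_nm(1.1):.1f} nm")
```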
Atomic resolution structure of Escherichia coli dUTPase determined ab initio
ACTA CRYSTALLOGRAPHICA SECTION D, Issue 6 2001. A. González
Cryocooled crystals of a mercury complex of Escherichia coli dUTPase diffract to atomic resolution. Data to 1.05 Å resolution were collected from a derivative crystal, and the structure model was derived from a Fourier map with phases calculated from the coordinates of the Hg atom (one site per subunit of the trimeric enzyme) using the program ARP/wARP. After refinement with anisotropic temperature factors, a highly accurate model of the bacterial dUTPase was obtained. Data to 1.45 Å from a native crystal were also collected and the 100 K structures were compared. Inspection of the refined models reveals that a large part of the dUTPase remains rather mobile upon freezing, with 14% of the main chain being totally disordered and with numerous side chains containing disordered atoms in multiple discrete conformations. A large number of these residues surround the active-site cavity. Two glycerol molecules (the cryosolvent) occupy the deoxyribose-binding site. Comparison between the native enzyme and the mercury complex shows that the active site is not adversely affected by the binding of mercury. An unexpected effect seems to be a stabilization of the crystal lattice by means of long-range interactions, making derivatization a potentially useful tool for further studies of inhibitor and substrate-analogue complexes of this protein at very high resolution. [source]

Statistical Metrics for Quality Assessment of High-Density Tiling Array Data
BIOMETRICS, Issue 2 2010. Hui Tang
Summary: High-density tiling arrays are designed to blanket an entire genomic region of interest using tiled oligonucleotides at very high resolution and are widely used in various biological applications. Experiments are usually conducted in multiple stages, in which unwanted technical variations may be introduced. As tiling arrays become more popular and are adopted by many research labs, it is pressing to develop quality control tools, as was done for expression microarrays. We propose a set of statistical quality metrics analogous to those for expression microarrays, with application to tiling array data. We also develop a method to estimate the significance level of an observed quality measurement using randomization tests. These methods have been applied to multiple real data sets, including three independent ChIP-chip experiments and one transcriptome mapping study, and they have successfully identified good-quality chips as well as outliers in each study. [source]
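The randomization-test idea mentioned above, estimating how extreme an observed quality measurement is by recomputing it on permuted data, can be sketched generically as follows. The quality metric, the simulated intensities, and the permutation scheme below are placeholders chosen for illustration, not the authors' definitions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical tiling-array intensities: probes x replicate arrays, with one noisier array.
intensities = rng.normal(loc=8.0, scale=1.0, size=(5000, 4))
intensities[:, 3] += rng.normal(scale=1.5, size=5000)   # array 3 is the suspect chip

def quality_metric(data, array_index):
    """Placeholder quality metric: mean absolute deviation of one array
    from the probe-wise median of all arrays (larger = worse agreement)."""
    reference = np.median(data, axis=1)
    return np.mean(np.abs(data[:, array_index] - reference))

observed = quality_metric(intensities, array_index=3)

# Randomization test: shuffle the array labels within each probe, recompute the
# metric, and use the tail proportion as the significance level.
n_perm = 1000
null_values = np.empty(n_perm)
for b in range(n_perm):
    permuted = rng.permuted(intensities, axis=1)   # independent shuffle per probe row
    null_values[b] = quality_metric(permuted, array_index=3)

p_value = (1 + np.sum(null_values >= observed)) / (1 + n_perm)
print(f"observed metric = {observed:.3f}, randomization p-value = {p_value:.4f}")
```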