Kernel


Kinds of Kernel

  • corn kernel
  • dispersal kernel
  • heat kernel
  • maize kernel
  • pricing kernel
  • rice kernel
  • seed dispersal kernel
  • singular kernel
  • whole kernel

Terms modified by Kernel

  • kernel density estimator
  • kernel estimate
  • kernel estimation
  • kernel estimator
  • kernel function
  • kernel meal
  • kernel particle method
  • kernel size
  • kernel smoothing
  • kernel weight

Selected Abstracts


    SEASONAL VARIATIONS IN FATTY ACID COMPOSITION OF OIL IN DEVELOPING COCONUT

    JOURNAL OF FOOD QUALITY, Issue 2 2009
    S. NARESH KUMAR
    ABSTRACT Studies on seasonal variation in the oil and fatty acid profile of the developing solid endosperm of two cultivars, West Coast Tall (WCT) and Chowghat Orange Dwarf (COD), and their hybrids indicated that oil percentage increased from 30% in 6-month-old nuts to 63% in mature nuts (12 months old). Nuts sampled during July at different stages of maturity had the highest oil percentage, followed by those sampled during April, October and January. During nut development to maturity, the percentages and contents of medium- and long-chain saturated fatty acids increased, except those of palmitic and myristic acids. Concentrations of long-chain unsaturated fatty acids (LCUFAs) in developing coconut kernel were high at 5 and 6 months after fertilization and then decreased toward maturity. The LCUFAs were high in nuts developing during October; consequently, saturated-to-unsaturated fatty acid ratios were low during October. The results indicated that nuts maturing during October had better nutritional quality for human consumption, whereas those maturing during January are more suitable for industrial purposes owing to higher medium-chain fatty acid concentrations. PRACTICAL APPLICATIONS Coconut is consumed either as the tender nut (5–6 months after fertilization) or as the kernel from the mature nut (12 months after fertilization). Recent technologies for making snowball tender nut use nuts aged 7–8 months; the kernel is also consumed in this product. In addition, coconut is increasingly used for making various kernel-based value-added products. This information is useful because value-added products are being developed from coconuts of different maturities. Hence, it is of paramount importance that the fatty acid profile of coconut kernel is known in detail for assessing its safety for human consumption. In addition, information on the seasonal variation in the fatty acid profile of the developing endosperm provides integrated knowledge for optimizing the use of coconut kernel for both human consumption and industrial exploitation. [source]


    Three-Dimensional Lipid Distribution of a Brown Rice Kernel

    JOURNAL OF FOOD SCIENCE, Issue 7 2002
    Y. Ogawa
    ABSTRACT: Lipid distribution was successfully observed in a brown rice kernel (Oryza sativa L.) 3-dimensionally (3D) by means of a virtual 3D visualizing model. Sections of an untreated rice kernel were collected on an adhesive tape with preservation of its shape. The actual distribution of lipid was visualized by staining. A virtual 3D visualizing model of the lipid distribution was produced from the stained sequential sections of the rice kernel. Lipid is not only located at the outer layer of the rice kernel but also in lower tissues beneath the seed coat and around the embryo. Lipid distribution at dorsal and ventral sides could also be visualized. [source]


    Antifungal Activity of a Bowman–Birk-type Trypsin Inhibitor from Wheat Kernel

    JOURNAL OF PHYTOPATHOLOGY, Issue 7-8 2000
    G. Chilosi
    A trypsin inhibitor from wheat kernel (WTI) was found to have a strong antifungal activity against a number of pathogenic fungi and to inhibit fungal trypsin-like activity. WTI inhibited in vitro spore germination and hyphal growth of pathogens, with the protein concentration required for 50% growth inhibition (IC50) ranging from 111.7 to above 500 µg/ml. As observed by electron microscopy, WTI induced morphological alterations consisting of hyphal growth inhibition and altered branching. One of the fungal species tested, Botrytis cinerea, produced a trypsin-like protease, which was inhibited by the trypsin inhibitor. WTI, like other seed defence proteins, appears to be an important resistance factor in wheat kernels during dormancy and early germination, when plants are particularly exposed to attack by potential soil-borne pathogens. Zusammenfassung: A trypsin inhibitor from wheat kernels (WTI) showed strong antifungal activity against various pathogenic fungi and inhibited their trypsin-like activity. WTI inhibited spore germination and hyphal growth of the pathogens in vitro, with IC50 values between 111.7 and more than 500 µg/ml. Electron microscopy showed that WTI caused morphological changes consisting of inhibited hyphal growth and altered branching. One of the fungal species examined, Botrytis cinerea, produced a trypsin-like protease that was inhibited by the trypsin inhibitor. Like other seed defence proteins, WTI appears to be an important resistance factor in wheat kernels during dormancy and the early stages of germination, when the plants are particularly exposed to potential soil-borne pathogens. [source]


    Opalescence in Australian-grown Pecan Kernels: Occurrence and Causes

    JOURNAL OF FOOD SCIENCE, Issue 8 2002
    L.T. Wakeling
    ABSTRACT: Opalescence is an unattractive browning of the interior of the pecan kernel compared to the white interior of normal kernels. The discoloration is due to the presence of free oil, resulting from decompartmentalization in the endosperm of opalescent pecans. Using a subjective scoring system, approximately 70% of Australian-grown pecan kernels tested were found to exhibit opalescence to some degree. Evaluation of kernels for opalescence during the harvesting-processing chain showed that opalescence first becomes evident in kernels after mechanical cracking. Opalescent kernels were found to have lower levels of calcium and higher amounts of oil compared to nonopalescent kernels. Differential scanning calorimetry showed that kernels do not freeze at -18 °C. [source]


    Dropping macadamia nuts-in-shell reduces kernel roasting quality

    JOURNAL OF THE SCIENCE OF FOOD AND AGRICULTURE, Issue 13 2010
    David A Walton
    Abstract BACKGROUND: Macadamia nuts ('nuts-in-shell') are subjected to many impacts from dropping during postharvest handling, resulting in damage to the raw kernel. The effect of dropping on roasted kernel quality is unknown. Macadamia nuts-in-shell were dropped in various combinations of moisture content, number of drops and receiving surface in three experiments. After dropping, samples from each treatment and undropped controls were dry oven-roasted for 20 min at 130 °C, and kernels were assessed for colour, mottled colour and surface damage. RESULTS: Dropping nuts-in-shell onto a bed of nuts-in-shell at 3% moisture content or 20% moisture content increased the percentage of dark roasted kernels. Kernels from nuts dropped first at 20%, then 10% moisture content, onto a metal plate had increased mottled colour. Dropping nuts-in-shell at 3% moisture content onto nuts-in-shell significantly increased surface damage. Similarly, surface damage increased for kernels dropped onto a metal plate at 20%, then at 10% moisture content. CONCLUSION: Postharvest dropping of macadamia nuts-in-shell causes concealed cellular damage to kernels, the effects of which are not evident until roasting. This damage provides the reagents needed for non-enzymatic browning reactions. Improvements in handling, such as reducing the number of drops and improving handling equipment, will reduce cellular damage and after-roast darkening. Copyright © 2010 Society of Chemical Industry [source]


    Descriptive sensory analysis of light, medium, and dark colored kernels of black walnut cultivars

    JOURNAL OF THE SCIENCE OF FOOD AND AGRICULTURE, Issue 11 2009
    Michele R Warmund
    Abstract BACKGROUND: Kernels of black walnut trees (Juglans nigra L.) are characterized by their unique fruity, woody, musty and sweet flavors. While most of the crop is produced from native trees, an increasing volume of kernels is harvested from grafted trees of improved cultivars. Cultivars produce nuts with desirable cracking qualities and have larger kernel size than those of native trees. However, kernel color of black walnuts can be variable due to time of harvest and hulling. The objective of this study was to evaluate flavor attributes of light, medium and dark colored kernels of Emma K, Kwik Krop, Sparks 127 and wild black walnut trees. RESULTS: Eighteen flavor terms were used for descriptive analysis of walnut kernels. Floral/fruity and sweet flavors varied among wild and Sparks 127 kernels. Various flavor characteristics were affected by kernel color. Dark colored kernels had more intense burnt, musty/dusty, oily, woody, astringent, and sour flavors than light colored kernels. CONCLUSION: This study demonstrated that eliminating dark colored Emma K and Sparks 127 kernels by color sorting will likely limit acrid, rancid and bitter flavors in the marketplace, which may be perceived as unappealing by consumers. Light colored kernels are produced by shaking trees early in the harvest season and hulling fruits immediately after harvest. Copyright © 2009 Society of Chemical Industry [source]


    Implicit Surface Modelling with a Globally Regularised Basis of Compact Support

    COMPUTER GRAPHICS FORUM, Issue 3 2006
    C. Walder
    We consider the problem of constructing a globally smooth analytic function that represents a surface implicitly by way of its zero set, given sample points with surface normal vectors. The contributions of the paper include a novel means of regularising multi-scale compactly supported basis functions that leads to the desirable interpolation properties previously only associated with fully supported bases. We also provide a regularisation framework for simpler and more direct treatment of surface normals, along with a corresponding generalisation of the representer theorem lying at the core of kernel-based machine learning methods. We demonstrate the techniques on 3D problems of up to 14 million data points, as well as 4D time series data and four-dimensional interpolation between three-dimensional shapes. Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Curve, surface, solid, and object representations [source]
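
    For readers who want to see the kernel machinery in miniature, the sketch below fits an implicit function f(x) ≈ 0 to oriented sample points using a compactly supported Wendland C2 basis with simple off-surface offset constraints and a dense linear solve. The support radius, offset distance and ridge regularisation are arbitrary illustrative values, and the sketch deliberately omits the paper's globally regularised multi-scale basis and its direct treatment of surface normals.

        import numpy as np

        def wendland_c2(r, support):
            # Wendland C2 compactly supported RBF: (1 - r/s)^4 (4 r/s + 1) for r < s, else 0
            q = r / support
            return np.where(q < 1.0, (1.0 - q) ** 4 * (4.0 * q + 1.0), 0.0)

        def fit_implicit(points, normals, support=0.3, eps=0.01, reg=1e-8):
            # Off-surface constraint points offset along the normals: +eps outside, -eps inside
            centers = np.vstack([points, points + eps * normals, points - eps * normals])
            values = np.concatenate([np.zeros(len(points)),
                                     np.full(len(points), eps),
                                     np.full(len(points), -eps)])
            d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
            K = wendland_c2(d, support) + reg * np.eye(len(centers))
            weights = np.linalg.solve(K, values)

            def f(x):  # signed scalar field whose zero set approximates the surface
                return wendland_c2(np.linalg.norm(x[None, :] - centers, axis=-1), support) @ weights

            return f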


    From a Product Model to Visualization: Simulation of Indoor Flows with Lattice-Boltzmann Methods

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 6 2004
    Siegfried Kühner
    All models are derived from a product data model based on Industry Foundation Classes. Concepts of the Lattice-Boltzmann method, which serves as the numerical kernel of our simulation system, are described. We take advantage of spacetrees as a central data structure for all geometry-related objects. Finally, we describe some advanced postprocessing and visualization techniques that allow huge amounts of simulation data to be analyzed efficiently. [source]
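
    As background on the numerical kernel named here, the following is a minimal periodic D2Q9 BGK lattice-Boltzmann stream-and-collide step. Grid size, relaxation time and the at-rest initial state are arbitrary illustrative choices; the paper's spacetree data structures, IFC-derived geometry and boundary handling are not reproduced.

        import numpy as np

        # D2Q9 lattice: discrete velocities and weights
        c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                      [1, 1], [-1, 1], [-1, -1], [1, -1]])
        w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

        def equilibrium(rho, ux, uy):
            cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
            usq = 1.5 * (ux ** 2 + uy ** 2)
            return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu ** 2 - usq)

        def lbm_step(f, tau=0.6):
            rho = f.sum(axis=0)
            ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
            uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
            f += (equilibrium(rho, ux, uy) - f) / tau            # BGK collision
            for i, (cx, cy) in enumerate(c):                     # streaming on a periodic grid
                f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
            return f

        # initialise a 64 x 64 periodic domain at rest and advance a few steps
        f = equilibrium(np.ones((64, 64)), np.zeros((64, 64)), np.zeros((64, 64)))
        for _ in range(10):
            f = lbm_step(f)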


    Heterogeneous Plasma-Producing Structures at Current Implosion of a Wire Array

    CONTRIBUTIONS TO PLASMA PHYSICS, Issue 8 2005
    E. V. Grabovsky
    Abstract Characteristic properties of the plasma production process have been considered for the case of megampere currents flowing through hollow cylindrical wire arrays at the Angara-5-1 facility. Within 3–4 nanoseconds of voltage being applied to the wire surfaces, a plasma layer appears. The system becomes heterogeneous, i.e. it consists of a kernel of metal wires and a plasma layer. Within several nanoseconds the current transfers from the metal to the plasma, which reduces the electric field strength along the wire. The Joule heat delivered to the metal before the moment of complete current trapping by the plasma is insufficient for the whole mass to pass into a hot plasma state. X-ray radiography made it possible to detect and study dense clusters of material of ~1 g/cm³ at a developed discharge stage. The radial expansion velocity of ~10⁴ cm/s, measured at the 70th nanosecond after the current start, allows the dense core at this late stage to be treated as a submicron heterogeneous structure of liquid and slightly ionized gas phases. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    Crystallization kinetics of ZnS precipitation; an experimental study using the mixed-suspension-mixed-product-removal (MSMPR) method

    CRYSTAL RESEARCH AND TECHNOLOGY, Issue 8 2004
    Mousa Al-Tarazi
    Abstract The precipitation kinetics of zinc sulfide were studied using a lab-scale mixed-suspension-mixed-product-removal (MSMPR) precipitation reactor. The vessel was operated at different feed concentrations, molar ratios, stirrer speeds, pH values, feed injection positions and residence times. Primary nucleation and volume-average crystal growth rates, as well as the agglomeration kernel, were determined. Relationships were found between the rates of the different crystallization steps on the one hand and supersaturation, stirrer speed, pH value, Zn²⁺ to S²⁻ ratio and feed position on the other. These show that larger crystals are obtained at high supersaturation, moderate stirrer speeds, small residence times, a pH value of around 5 and high Zn²⁺ to S²⁻ ratios. One should realize, though, that the applied MSMPR method is not the optimal technique for examining fast precipitation reactions. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    Stability and identification for rational approximation of frequency response function of unbounded soil

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 2 2010
    Xiuli Du
    Abstract The exact representation of unbounded soil contains the single-output, single-input relationship between force and displacement in the physical or transformed space. This relationship is a global convolution integral in the time domain. Rational approximation to its frequency response function (the frequency-domain convolution kernel), which is then realized in the time domain as a lumped-parameter model or recursive formula, is an effective way to obtain a temporally local representation of unbounded soil. Stability and identification of the rational approximation are studied in this paper. A necessary and sufficient stability condition is presented based on the stability theory of linear systems. A parameter identification method is further developed by directly solving a nonlinear least-squares fitting problem using a hybrid genetic-simplex optimization algorithm, in which the proposed stability condition is enforced as a constraint by the penalty function method. Stability is thus guaranteed a priori. The infrequent and undesirable resonance phenomenon in stable systems is also discussed. The proposed stability condition and identification method are verified by several dynamic soil–structure interaction examples. Copyright © 2009 John Wiley & Sons, Ltd. [source]
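
    The stability condition discussed in this abstract can be illustrated with a far simpler scheme than the paper's hybrid genetic-simplex identification with penalty-enforced constraints: fit a rational function to sampled frequency-response data by a linearised (Levy-type) least-squares step, then check that every pole of the denominator lies in the open left half-plane. Orders, names and the fitting method below are assumptions for illustration only.

        import numpy as np

        def fit_rational(omega, H, n_num=2, n_den=3):
            # Linearised LS fit of H(jw) ~ N(jw) / D(jw) with a monic denominator
            s = 1j * omega
            A = np.hstack([np.vander(s, n_num + 1, increasing=True),
                           -H[:, None] * np.vander(s, n_den, increasing=True)])
            rhs = H * s ** n_den
            coef, *_ = np.linalg.lstsq(np.vstack([A.real, A.imag]),
                                       np.concatenate([rhs.real, rhs.imag]), rcond=None)
            b = coef[:n_num + 1]                           # numerator coefficients, ascending powers
            a = np.concatenate([coef[n_num + 1:], [1.0]])  # denominator coefficients, ascending powers
            return b, a

        def is_stable(a):
            # Stable iff all poles (roots of the denominator) lie strictly in the left half-plane
            poles = np.roots(a[::-1])                      # np.roots expects the highest power first
            return bool(np.all(poles.real < 0.0))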


    Linking dispersal, immigration and scale in the neutral theory of biodiversity

    ECOLOGY LETTERS, Issue 12 2009
    Ryan A. Chisholm
    Abstract In the classic spatially implicit formulation of Hubbell's neutral theory of biodiversity a local community receives immigrants from a metacommunity operating on a relatively slow timescale, and dispersal into the local community is governed by an immigration parameter m. A current problem with neutral theory is that m lacks a clear biological interpretation. Here, we derive analytical expressions that relate the immigration parameter m to the geometry of the plot defining the local community and the parameters of a dispersal kernel. Our results facilitate more rigorous and extensive tests of the neutral theory: we conduct a test of neutral theory by comparing estimates of m derived from fits to empirical species abundance distributions to those derived from dispersal kernels and find acceptable correspondence; and we generate a new prediction of neutral theory by investigating how the shapes of species abundance distributions change theoretically as the spatial scale of observation changes. We also discuss how our main analytical results can be used to assess the error in the mean-field approximations associated with spatially implicit formulations of neutral theory. Ecology Letters (2009) 12: 1385–1393 [source]
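
    One way to picture how an immigration parameter can follow from plot geometry plus a dispersal kernel (the paper's closed-form expressions are not reproduced here) is the Monte Carlo sketch below, which reads m as the chance that the parent of a recruit inside a rectangular plot, displaced by an isotropic Gaussian kernel, lies outside the plot. The plot dimensions and kernel scale are arbitrary illustrative values.

        import numpy as np

        def immigration_m(plot_w, plot_h, sigma, n=200_000, seed=0):
            # Fraction of recruits inside the plot whose parent falls outside it
            rng = np.random.default_rng(seed)
            x = rng.uniform(0.0, plot_w, n)            # recruit locations inside the plot
            y = rng.uniform(0.0, plot_h, n)
            dx, dy = rng.normal(0.0, sigma, (2, n))    # parent displacement drawn from the kernel
            px, py = x + dx, y + dy
            outside = (px < 0) | (px > plot_w) | (py < 0) | (py > plot_h)
            return outside.mean()

        print(immigration_m(plot_w=1000.0, plot_h=500.0, sigma=30.0))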


    Efficiency, Equilibrium, and Asset Pricing with Risk of Default

    ECONOMETRICA, Issue 4 2000
    Fernando Alvarez
    We introduce a new equilibrium concept and study its efficiency and asset pricing implications for the environment analyzed by Kehoe and Levine (1993) and Kocherlakota (1996). Our equilibrium concept has complete markets and endogenous solvency constraints. These solvency constraints prevent default at the cost of reducing risk sharing. We show versions of the welfare theorems. We characterize the preferences and endowments that lead to equilibria with incomplete risk sharing. We compare the resulting pricing kernel with the one for economies without participation constraints: interest rates are lower and risk premia depend on the covariance of the idiosyncratic and aggregate shocks. Additionally, we show that asset prices depend only on the valuation of agents with substantial idiosyncratic risk. [source]


    Optimal Nonparametric Estimation of First-price Auctions

    ECONOMETRICA, Issue 3 2000
    Emmanuel Guerre
    This paper proposes a general approach and a computationally convenient estimation procedure for the structural analysis of auction data. Considering first-price sealed-bid auction models within the independent private value paradigm, we show that the underlying distribution of bidders' private values is identified from observed bids and the number of actual bidders without any parametric assumptions. Using the theory of minimax, we establish the best rate of uniform convergence at which the latent density of private values can be estimated nonparametrically from available data. We then propose a two-step kernel-based estimator that converges at the optimal rate. [source]
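
    The two-step procedure can be sketched compactly: first recover "pseudo private values" from each bid using the empirical bid distribution and a kernel estimate of the bid density, then apply a kernel density estimator to those pseudo values. The toy below assumes symmetric bidders, a Gaussian kernel, an arbitrary bandwidth and no trimming or boundary correction, all of which the actual estimator treats with care.

        import numpy as np

        def gaussian_kde(x, grid, h):
            # Simple Gaussian kernel density estimate of the sample x evaluated on grid
            u = (grid[:, None] - x[None, :]) / h
            return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(x) * h * np.sqrt(2.0 * np.pi))

        def pseudo_values(bids, n_bidders, h):
            # Step 1: v = b + G(b) / ((I - 1) g(b)) with empirical CDF G and kernel density g
            G = np.searchsorted(np.sort(bids), bids, side='right') / len(bids)
            g = gaussian_kde(bids, bids, h)
            return bids + G / ((n_bidders - 1) * g)

        # Step 2: kernel density estimate of the latent private values
        bids = np.random.default_rng(1).uniform(0.5, 1.0, 500)   # toy bid sample
        v = pseudo_values(bids, n_bidders=3, h=0.05)
        grid = np.linspace(v.min(), v.max(), 200)
        f_hat = gaussian_kde(v, grid, h=0.05)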


    Kernel estimates of hazard functions for carcinoma data sets

    ENVIRONMETRICS, Issue 3 2006
    Ivana Horová
    Abstract The present article focuses on kernel estimates of hazard functions and their derivatives. Our approach is based on the model introduced by Müller and Wang (1990). In order to estimate the hazard function effectively, the automatic procedure proposed by Horová et al. (2002) is applied. The procedure chooses a bandwidth, a kernel and an order of the kernel. As a by-product we propose a special procedure for the estimation of the optimal bandwidth. This is applied to the carcinoma data sets kindly provided by the Masaryk Memorial Cancer Institute in Brno. Attention is also paid to the points of the most rapid change of the hazard function. Copyright © 2006 John Wiley & Sons, Ltd. [source]
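
    For orientation, a fixed-bandwidth kernel hazard estimate — smoothing the Nelson-Aalen increments with an Epanechnikov kernel — is sketched below. It omits the boundary kernels, derivative estimates and the automatic bandwidth/kernel-order selection that the article relies on; the bandwidth and kernel choice here are assumptions.

        import numpy as np

        def kernel_hazard(times, events, grid, b):
            # times, events: NumPy arrays (event = 1, censored = 0); grid: evaluation points
            order = np.argsort(times)
            t, d = times[order], events[order]
            n_at_risk = len(t) - np.arange(len(t))        # subjects still at risk at each ordered time
            increments = d / n_at_risk                    # Nelson-Aalen jump sizes
            u = (grid[:, None] - t[None, :]) / b
            K = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)   # Epanechnikov kernel
            return (K * increments[None, :]).sum(axis=1) / b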


    Origin of the earliest correlated neuronal activity in the chick embryo revealed by optical imaging with voltage-sensitive dyes

    EUROPEAN JOURNAL OF NEUROSCIENCE, Issue 1 2009
    Yoko Momose-Sato
    Abstract Spontaneous correlated neuronal activity during early development spreads like a wave by recruiting a large number of neurons, and is considered to play a fundamental role in neural development. One important and as yet unresolved question is where the activity originates, especially at the earliest stage of wave expression. In other words, which part of the brain differentiates first as a source of the correlated activity, and how does it change as development proceeds? We assessed this issue by examining the spatiotemporal patterns of the depolarization wave, the optically identified primordial correlated activity, using the optical imaging technique with voltage-sensitive dyes. We surveyed the region responsible for the induction of the evoked and spontaneous depolarization waves in chick embryos, and traced its developmental changes. The results showed that the wave initially originated in a restricted area near the obex and was generated by multiple regions at later stages. We suggest that the upper cervical cord/lower medulla near the obex is the kernel that differentiates first as the source of the correlated activity, and that regional and temporal differences in neuronal excitability might underlie the developmental profile of wave generation in early chick embryos. [source]


    To breathe or not to breathe? That is the question

    EXPERIMENTAL PHYSIOLOGY, Issue 1 2009
    Our understanding of the role of the brain in respiratory rhythm generation and regulation began in the early nineteenth century. Over the next 150 years the neuronal groups in the medulla oblongata and pons that were involved in eupnoea and in gasping were identified by techniques involving the lesioning of areas of the lower brainstem, several transections across the brainstem and focal electrical stimulation. An incomplete picture emerged that stressed the importance of the ventral medulla. Subsequent electrophysiological studies in in vivo, in situ and in vitro preparations have revealed the importance of restricted groups of neurones in this area, within the Bötzinger and pre-Bötzinger nuclei, that are the essential kernel for rhythm generation. The outputs to the spinal motoneurones responsible for the patterning of inspiratory and expiratory discharge are shaped by inputs from these neurones and others within the respiratory complex that determine the activity of respiratory bulbospinal neurones. It is clear that the developmental stage of the preparation is often critical for the pattern of respiratory activity that is generated and that these patterns have important physiological consequences. The models that are currently considered to explain rhythmogenesis are critically evaluated. The respiratory network is subject to regulation from peripheral and central chemoreceptors, amongst other afferent inputs, which act to ensure respiratory homeostasis. The roles of peripheral chemoreceptors as primarily O2 sensors are considered, and the evolution of ideas surrounding their roles is described. New insights into the transduction mechanisms of chemoreception in the carotid body and chemosensitive areas of the ventral medullary surface, specifically in monitoring CO2 levels, are reviewed. As new experimental tools, both genetic and cellular, are emerging, it can be expected that the detailed network architecture and synaptic interactions that pattern respiratory activity in relation to behavioural activity will be revealed over the next years. [source]


    Automatic tuning of L2-SVM parameters employing the extended Kalman filter

    EXPERT SYSTEMS, Issue 2 2009
    Tingting Mu
    Abstract: We show that tuning of multiple parameters for a 2-norm support vector machine (L2-SVM) could be viewed as an identification problem for a nonlinear dynamic system. Benefiting from the reachable smooth nonlinearity of an L2-SVM, we propose to employ the extended Kalman filter to tune the kernel and regularization parameters automatically for the L2-SVM. The proposed method is validated using three public benchmark data sets and compared with the gradient descent approach as well as the genetic algorithm in measures of classification accuracy and computing time. Experimental results demonstrate the effectiveness of the proposed method in terms of higher classification accuracy, faster training speed and less sensitivity to the initial settings. [source]
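
    The core operation such a scheme iterates is the standard extended Kalman filter measurement update with the tunable parameters treated as a random-walk state; a generic sketch follows. The SVM-specific output map, its Jacobian and the noise covariances used in the paper are not reproduced — h, jac_h and R below are placeholders the caller must supply.

        import numpy as np

        def ekf_param_update(theta, P, y_obs, h, jac_h, R):
            # One EKF correction step: theta = parameters, P = their covariance,
            # h(theta) = predicted model output, jac_h(theta) = Jacobian of h w.r.t. theta
            H = jac_h(theta)                               # local linearisation of the output map
            S = H @ P @ H.T + R                            # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
            theta = theta + K @ (y_obs - h(theta))         # parameter correction
            P = (np.eye(len(theta)) - K @ H) @ P           # covariance update
            return theta, P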


    Ricci flows and infinite dimensional algebras

    FORTSCHRITTE DER PHYSIK/PROGRESS OF PHYSICS, Issue 6-7 2004
    I. Bakas
    The renormalization group equations of two-dimensional sigma models describe geometric deformations of their target space when the world-sheet length changes scale from the ultra-violet to the infra-red. These equations, which are also known in the mathematics literature as Ricci flows, are analyzed for the particular case of two-dimensional target spaces, where they are found to admit a systematic description as a Toda system. Their zero-curvature formulation is made possible with the aid of a novel infinite-dimensional Lie algebra, which has an anti-symmetric Cartan kernel and exhibits exponential growth. The general solution is obtained in closed form using Bäcklund transformations, and special examples include the sausage model and the decay process of conical singularities to the plane. Thus, Ricci flows provide a non-linear generalization of the heat equation in two dimensions with the same dissipative properties. Various applications to dynamical problems of string theory are also briefly discussed. Finally, we outline generalizations to higher-dimensional target spaces that exhibit a sufficient number of Killing symmetries. [source]
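
    For orientation, in the conformal gauge for a two-dimensional target space the flow referred to here takes the following form (normalisation conventions vary between references, so the factor in front of the Ricci tensor is an assumed one):

        \frac{\partial g_{\mu\nu}}{\partial t} = -2\,R_{\mu\nu}, \qquad
        g_{\mu\nu} = e^{\Phi(x,y,t)}\,\delta_{\mu\nu}
        \quad\Longrightarrow\quad
        \frac{\partial}{\partial t}\, e^{\Phi} = \nabla^{2}\Phi ,

    which makes explicit the sense in which the Ricci flow acts as a non-linear cousin of the heat equation.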


    Association tests using kernel-based measures of multi-locus genotype similarity between individuals

    GENETIC EPIDEMIOLOGY, Issue 3 2010
    Indranil Mukhopadhyay
    Abstract In a genetic association study, it is often desirable to perform an overall test of whether any or all single-nucleotide polymorphisms (SNPs) in a gene are associated with a phenotype. Several such tests exist, but most of them are powerful only under very specific assumptions about the genetic effects of the individual SNPs. In addition, some of the existing tests assume that the direction of the effect of each SNP is known, which is a highly unlikely scenario. Here, we propose a new kernel-based association test of joint association of several SNPs. Our test is non-parametric and robust, and does not make any assumption about the directions of individual SNP effects. It can be used to test multiple correlated SNPs within a gene and can also be used to test independent SNPs or genes in a biological pathway. Our test uses an analysis of variance paradigm to compare variation between cases and controls to the variation within the groups. The variation is measured using kernel functions for each marker, and then a composite statistic is constructed to combine the markers into a single test. We present simulation results comparing our statistic to the U-statistic-based method by Schaid et al. ([2005] Am. J. Hum. Genet. 76:780–793) and another statistic by Wessel and Schork ([2006] Am. J. Hum. Genet. 79:792–806). We consider a variety of different disease models and assumptions about how many SNPs within the gene are actually associated with disease. Our results indicate that our statistic has higher power than other statistics under most realistic conditions. Genet. Epidemiol. 34:213–221, 2010. © 2009 Wiley-Liss, Inc. [source]
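
    In the same spirit — though not the authors' composite statistic or its null distribution — a toy kernel-similarity association test can be written as a permutation contrast of within-group versus between-group similarity, here with a simple identity-by-state kernel on 0/1/2 genotype codes; every name and choice below is illustrative only.

        import numpy as np

        def ibs_kernel(G):
            # Identity-by-state similarity between individuals from an (n x m) genotype matrix
            diff = np.abs(G[:, None, :] - G[None, :, :])
            return 1.0 - diff.mean(axis=2) / 2.0

        def kernel_assoc_test(G, y, n_perm=1000, seed=0):
            # Contrast mean within-group similarity against between-group similarity,
            # with a permutation p-value over the case/control labels y (0/1)
            rng = np.random.default_rng(seed)
            K = ibs_kernel(G)
            off_diag = ~np.eye(len(y), dtype=bool)

            def stat(labels):
                same = labels[:, None] == labels[None, :]
                return K[same & off_diag].mean() - K[~same & off_diag].mean()

            obs = stat(y)
            perms = np.array([stat(rng.permutation(y)) for _ in range(n_perm)])
            return obs, (perms >= obs).mean()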


    A correlation-based misfit criterion for wave-equation traveltime tomography

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2010
    T. Van Leeuwen
    SUMMARY Wave-equation traveltime tomography tries to obtain a subsurface velocity model from seismic data, either passive or active, that explains their traveltimes. A key step is the extraction of traveltime differences, or relative phase shifts, between observed and modelled finite-frequency waveforms. A standard approach involves a correlation of the observed and modelled waveforms. When the amplitude spectra of the waveforms are identical, the maximum of the correlation is indicative of the relative phase shift. When the amplitude spectra are not identical, however, this argument is no longer valid. We propose an alternative criterion to measure the relative phase shift. This misfit criterion is a weighted norm of the correlation and is less sensitive to differences in the amplitude spectra. For practical application it is important to use a sensitivity kernel that is consistent with the way the misfit is measured. We derive this sensitivity kernel and show how it differs from the standard banana–doughnut sensitivity kernel. We illustrate the approach on a cross-well data set. [source]
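
    The standard correlation pick that the proposed criterion generalises can be written in a few lines; the weighted-norm misfit itself is not reproduced here. The sign convention and sampling interval are noted in the comments.

        import numpy as np

        def cc_time_shift(obs, syn, dt):
            # Relative time shift from the maximum of the cross-correlation of two waveforms
            # (a positive lag means obs is delayed relative to syn); valid when the amplitude
            # spectra are similar, which is exactly the case the weighted-norm criterion relaxes
            cc = np.correlate(obs, syn, mode='full')
            lag = np.argmax(cc) - (len(syn) - 1)
            return lag * dt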


    On establishing the accuracy of noise tomography travel-time measurements in a realistic medium

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2009
    Victor C. Tsai
    SUMMARY It has previously been shown that the Green's function between two receivers can be retrieved by cross-correlating time series of noise recorded at the two receivers. This property has been derived assuming that the energy in normal modes is uncorrelated and perfectly equipartitioned, or that the distribution of noise sources is uniform in space and the waves measured satisfy a high frequency approximation. Although a number of authors have successfully extracted travel-time information from seismic surface-wave noise, the reason for this success of noise tomography remains unclear since the assumptions inherent in previous derivations do not hold for dispersive surface waves on the Earth. Here, we present a simple ray-theory derivation that facilitates an understanding of how cross correlations of seismic noise can be used to make direct travel-time measurements, even if the conditions assumed by previous derivations do not hold. Our new framework allows us to verify that cross-correlation measurements of isotropic surface-wave noise give results in accord with ray-theory expectations, but that if noise sources have an anisotropic distribution or if the velocity structure is non-uniform then significant differences can sometimes exist. We quantify the degree to which the sensitivity kernel is different from the geometric ray and find, for example, that the kernel width is period-dependent and that the kernel generally has non-zero sensitivity away from the geometric ray, even within our ray theoretical framework. These differences lead to usually small (but sometimes large) biases in models of seismic-wave speed and we show how our theoretical framework can be used to calculate the appropriate corrections. Even when these corrections are small, calculating the errors within a theoretical framework would alleviate fears traditional seismologists may have regarding the robustness of seismic noise tomography. [source]


    Surface deformation due to loading of a layered elastic half-space: a rapid numerical kernel based on a circular loading element

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2007
    E. Pan
    SUMMARY This study is motivated by a desire to develop a fast numerical algorithm for computing the surface deformation field induced by surface pressure loading on a layered, isotropic, elastic half-space. The approach that we pursue here is based on a circular loading element. That is, an arbitrary surface pressure field applied within a finite surface domain will be represented by a large number of circular loading elements, all with the same radius, in which the applied downwards pressure (normal stress) is piecewise uniform: that is, the load within each individual circle is laterally uniform. The key practical requirement associated with this approach is that we need to be able to solve for the displacement field due to a single circular load, at very large numbers of points (or 'stations'), at very low computational cost. This elemental problem is axisymmetric, and so the displacement vector field consists of radial and vertical components both of which are functions only of the radial coordinate r. We achieve high computational speeds using a novel two-stage approach that we call the sparse evaluation and massive interpolation (SEMI) method. First, we use a high accuracy but computationally expensive method to compute the displacement vectors at a limited number of r values (called control points or knots), and then we use a variety of fast interpolation methods to determine the displacements at much larger numbers of intervening points. The accurate solutions achieved at the control points are framed in terms of cylindrical vector functions, Hankel transforms and propagator matrices. Adaptive Gauss quadrature is used to handle the oscillatory nature of the integrands in an optimal manner. To extend these exact solutions via interpolation we divide the r-axis into three zones, and employ a different interpolation algorithm in each zone. The magnitude of the errors associated with the interpolation is controlled by the number, M, of control points. For M = 54, the maximum RMS relative error associated with the SEMI method is less than 0.2 per cent, and it is possible to evaluate the displacement field at 100 000 stations about 1200 times faster than if the direct (exact) solution was evaluated at each station; for M = 99, which corresponds to a maximum RMS relative error less than 0.03 per cent, the SEMI method is about 700 times faster than the direct solution. [source]
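
    The two-stage idea is easy to prototype: evaluate an expensive kernel at a handful of knots, then interpolate densely. The sketch below uses a stand-in integrand and a single cubic spline over the whole r-axis, whereas the paper computes exact Hankel-transform/propagator-matrix solutions at the knots and applies different interpolants in three zones; the knot and station counts echo those quoted above, everything else is assumed.

        import numpy as np
        from scipy.interpolate import CubicSpline

        def expensive_displacement(r):
            # Stand-in for the costly exact evaluation (oscillatory Hankel-transform integrals)
            return np.exp(-r) * np.cos(5.0 * r) / (1.0 + r)

        # Stage 1: sparse evaluation at a small number of control points (knots)
        knots = np.linspace(0.0, 10.0, 54)
        u_knots = expensive_displacement(knots)

        # Stage 2: massive interpolation at many stations
        spline = CubicSpline(knots, u_knots)
        stations = np.linspace(0.0, 10.0, 100_000)
        u_fast = spline(stations)

        # error of the interpolated field relative to the (here cheap) exact solution
        err = np.max(np.abs(u_fast - expensive_displacement(stations)))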


    On strike-slip faulting in layered media

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2002
    Maurizio Bonafede
    Summary We study the effects of structural inhomogeneities on the stress and displacement fields induced by strike-slip faults in layered media. An elastic medium is considered, made up of an upper layer bounded by a free surface and welded to a lower half-space characterized by different elastic parameters. Shear cracks with assigned stress drop are employed as mathematical models of strike-slip faults, which are assumed to be vertical and planar. If the crack is entirely embedded within the lower medium (case A), a Cauchy-kernel integral equation is obtained, which is solved by employing an expansion of the dislocation density in Chebyshev polynomials. If the crack is within the lower medium but it terminates at the interface (case B), a generalized Cauchy singularity appears in the integral kernel. This singularity affects the singular behaviour of the dislocation density at the crack tip touching the interface. Finally, the case of a crack crossing the interface is considered (case C). The crack is split into two interacting sections, each placed in a homogeneous medium and both open at the interface. Two coupled generalized Cauchy equations are obtained and solved for the dislocation density distribution of each crack section. An asymptotic study near the intersection between the crack and the interface shows that the dislocation densities for each crack section are bounded at the interface, where a jump discontinuity is present. As a corollary, the stress drop must be discontinuous at the interface, with a jump proportional to the rigidity contrast between the adjoining media. This finding is shown to have important implications for the development of geometrical complexities within transform fault zones: planar strike-slip faults cutting across layer discontinuities with arbitrary stress drop values are shown to be admissible only if the interface between different layers becomes unwelded during the earthquake at the crack/interface junction. Planar strike-slip faulting may take place only in mature transform zones, where a repetitive earthquake cycle has already developed, if the rheology is perfectly elastic. Otherwise, the fault cannot be planar: we infer that strike-slip faulting at depth is plausibly accompanied by en-echelon surface breaks in a shallow sedimentary layer (where the stress drop is lower than prescribed by the discontinuity condition), while ductile deformation (or steady sliding) at depth may be accommodated by multiple fault branching or by antithetic faulting in the upper brittle layer (endowed with lower rigidity but higher stress). [source]


    Wavefront healing: a banana–doughnut perspective

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2001
    S.-H. Hung
    SUMMARY Wavefront healing is a ubiquitous diffraction phenomenon that affects cross-correlation traveltime measurements, whenever the scale of the 3-D variations in wave speed is comparable to the characteristic wavelength of the waves. We conduct a theoretical and numerical analysis of this finite-frequency phenomenon, using a 3-D pseudospectral code to compute and measure synthetic pressure-response waveforms and 'ground truth' cross-correlation traveltimes at various distances behind a smooth, spherical anomaly in an otherwise homogeneous acoustic medium. Wavefront healing is ignored in traveltime tomographic inversions based upon linearized geometrical ray theory, in as much as it is strictly an infinite-frequency approximation. In contrast, a 3-D banana–doughnut Fréchet kernel does account for wavefront healing because it is cored by a tubular region of negligible traveltime sensitivity along the source–receiver geometrical ray. The cross-path width of the 3-D kernel varies as the square root of the wavelength λ times the source–receiver distance L, so that as a wave propagates, an anomaly at a fixed location finds itself increasingly able to 'hide' within the growing doughnut 'hole'. The results of our numerical investigations indicate that banana–doughnut traveltime predictions are generally in excellent agreement with measured ground truth traveltimes over a wide range of propagation distances and anomaly dimensions and magnitudes. Linearized ray theory is, on the other hand, only valid for large 3-D anomalies that are smooth on the kernel width scale √(λL). In detail, there is an asymmetry in the wavefront healing behaviour behind a fast and slow anomaly that cannot be adequately modelled by any theory that posits a linear relationship between the measured traveltime shift and the wave-speed perturbation. [source]
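
    As a rough worked example (with values assumed here, not taken from the paper): for a wavelength of about 100 km and a source–receiver distance of about 4000 km, the cross-path kernel width scales as √(λL) = √(100 × 4000) ≈ 630 km, so an anomaly a few hundred kilometres across can indeed sit largely within the low-sensitivity doughnut hole.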


    Diffraction imaging in depth

    GEOPHYSICAL PROSPECTING, Issue 5 2008
    T.J. Moser
    ABSTRACT High resolution imaging is of great value to an interpreter, for instance to enable identification of small scale faults, and to locate formation pinch-out positions. Standard approaches to obtain high-resolution information, such as coherency analysis and structure-oriented filters, derive attributes from stacked, migrated images. Since they are image-driven, these techniques are sensitive to artifacts due to an inadequate migration velocity; in fact the attribute derivation is not based on the physics of wave propagation. Diffracted waves on the other hand have been recognized as physically reliable carriers of high- or even super-resolution structural information. However, high-resolution information, encoded in diffractions, is generally lost during the conventional processing sequence, indeed migration kernels in current migration algorithms are biased against diffractions. We propose here methods for a diffraction-based, data-oriented approach to image resolution. We also demonstrate the different behaviour of diffractions compared to specular reflections and how this can be leveraged to assess characteristics of subsurface features. In this way a rough surface such as a fault plane or unconformity may be distinguishable on a diffraction image and not on a traditional reflection image. We outline some characteristic properties of diffractions and diffraction imaging, and present two novel approaches to diffraction imaging in the depth domain. The first technique is based on reflection focusing in the depth domain and subsequent filtering of reflections from prestack data. The second technique modifies the migration kernel and consists of a reverse application of stationary-phase migration to suppress contributions from specular reflections to the diffraction image. Both techniques are proposed as a complement to conventional full-wave pre-stack depth migration, and both assume the existence of an accurate migration velocity. [source]


    Wavefield Migration plus Monte Carlo Imaging of 3D Prestack Seismic Data

    GEOPHYSICAL PROSPECTING, Issue 5 2006
    Ernesto Bonomi
    ABSTRACT Prestack wave-equation migration has proved to be a very accurate shot-by-shot imaging tool. However, 3D imaging with this technique of a large field acquisition, especially one with hundreds of thousands of shots, is prohibitively costly. Simply adapting the technique to migrate many superposed shot-gathers simultaneously would render 3D wavefield prestack migration cost-effective but it introduces uncontrolled non-physical interference among the shot-gathers, making the final image useless. However, it has been observed that multishot signal interference can be kept under some control by averaging over many such images, if each multishot migration is modified by a random phase encoding of the frequency spectra of the seismic traces. In this article, we analyse this technique, giving a theoretical basis for its observed behaviour: that the error of the image produced by averaging over M phase encoded migrations decreases as M⁻¹. Furthermore, we expand the technique and define a general class of Monte-Carlo encoding methods for which the noise variance of the average imaging condition decreases as M⁻¹; these methods thus all converge asymptotically to the correct reflectivity map, without generating prohibitive costs. The theoretical asymptotic behaviour is illustrated for three such methods on a 2D test case. Numerical verification in 3D is then presented for one such method implemented with a 3D PSPI extrapolation kernel for two test cases: the SEG/EAGE salt model and a real test constructed from field data. [source]
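
    The M⁻¹ behaviour can be seen in a scalar toy model: superpose many unit "shot" contributions with independent random phases, take the squared modulus as a stand-in for the multishot imaging condition, and average over M independent encodings. The cross terms have zero mean, so the average converges to the interference-free value (here 50) with variance decaying as 1/M. This only illustrates the statistics, not an actual migration.

        import numpy as np

        rng = np.random.default_rng(0)

        def phase_encoded_image(signals, n_avg):
            # Average of n_avg random-phase-encoded "images" of the blended shots
            acc = 0.0
            for _ in range(n_avg):
                phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, len(signals)))
                stack = np.sum(phases * signals)       # encoded multishot superposition
                acc += np.abs(stack) ** 2              # squared modulus = toy imaging condition
            return acc / n_avg

        signals = np.ones(50)                          # 50 shots, unit contribution each
        for M in (1, 10, 100, 1000):                   # fluctuations about 50 shrink as 1/sqrt(M)
            print(M, phase_encoded_image(signals, M))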


    The European Commission: The Limits of Centralization and the Perils of Parliamentarization

    GOVERNANCE, Issue 3 2002
    Giandomenico Majone
    The idea of an inevitable process of centralization in the European Community (EC)/European Union (EU) is a myth. Also, the metaphor of "creeping competences," with its suggestion of a surreptitious but continuous growth of the powers of the Commission, can be misleading. It is true that the functional scope of EC/EU competences has steadily increased, but the nature of new competences has changed dramatically, as may be seen from the evolution of the methods of harmonization. The original emphasis on total harmonization, which gives the Community exclusive competence over a given policy area, has been largely replaced by more flexible but less "communitarian" methods such as optional and minimum harmonization, reference to nonbinding technical standards, and mutual recognition. Finally, the treaties of Maastricht and Amsterdam explicitly excluded harmonization for most new competences. Thus, the expansion of the jurisdiction of the EC/EU has not automatically increased the powers of the Commission, but has actually weakened them in several respects. In addition, the progressive parliamentarization of the Commission risks compromising its credibility as an independent regulator, without necessarily enhancing its democratic legitimacy. Since the member states continue to oppose any centralization of regulatory powers, even in areas essential to the functioning of the internal market, the task of implementing Community policies should be entrusted to networks of independent national and European regulators, roughly modeled on the European System of Central Banks. The Commission would coordinate and monitor the activities of these networks in order to ensure the coherence of EC regulatory policies. More generally, it should bring its distinctive competence more clearly into focus by concentrating on the core business of ensuring the development and proper functioning of the single European market. This is a more modest role than that of the kernel of a future government of Europe, but it is essential to the credibility of the integration process and does not overstrain the limited financial and legitimacy resources available to the Commission. [source]


    Experimental evidence that deer browsing reduces habitat suitability for breeding Common Nightingales Luscinia megarhynchos

    IBIS, Issue 2 2010
    CHAS A. HOLT
    The ecological impacts of increasing populations of deer (Cervidae) in Europe and North America are becoming more widespread and pronounced. Within Britain, it has been suggested that declines in several woodland bird species, particularly those dependent on dense understorey vegetation, may be at least partly due to these effects. Here we present experimental evidence of the effects of deer browsing on the fine-scale habitat selection and habitat use by a bird species in Europe. The study was conducted in a wood in eastern England where a decrease in Common Nightingale Luscinia megarhynchos numbers has coincided with a large increase in deer numbers. Eight woodland plots were cut to produce young coppice regrowth (a favoured habitat for Nightingales). Deer were excluded from half of each plot using steel fences, thus creating eight experimental pairs of exclosures (unbrowsed) and controls (browsed). Radiotelemetry and territory mapping of male Nightingales showed strong selection of exclosures. The density of territories was 15 times greater in the exclosures than in grazed controls. Selection for exclosures was significant for the minimum convex polygon, 95% kernel and 50% core home-ranges used by seven radiotracked males. Tracked birds spent 69% of their time in the 6% of the study area protected from deer. Intensified browsing by deer influenced local settlement patterns of Nightingales, supporting the conclusion that increased deer populations are likely to have contributed to declines of Nightingales in Britain, and potentially those of other bird species dependent on dense understorey. [source]


    Vibration analysis of conical panels using the method of discrete singular convolution

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 3 2008
    Ömer Civalek
    Abstract A discrete singular convolution (DSC) free vibration analysis of conical panels is presented. The regularized Shannon delta kernel (RSK) is selected as the singular convolution kernel to illustrate the present algorithm. In the proposed approach, the derivatives in both the governing equations and the boundary conditions are discretized by the method of DSC. Effects of boundary conditions, vertex and subtended angles on the frequencies of the conical panel are investigated. The effect of the circumferential node number on the vibrational behaviour of the panel is also analysed. The obtained results are compared with those of other numerical methods. Numerical results indicate that the DSC is a simple and reliable method for vibration analysis of conical panels. Copyright © 2006 John Wiley & Sons, Ltd. [source]
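
    For reference, the regularized Shannon delta kernel used as the DSC kernel is commonly written, with grid spacing \Delta and regularisation width \sigma, as

        \delta_{\Delta,\sigma}(x - x_k) = \frac{\sin\!\big(\pi(x - x_k)/\Delta\big)}{\pi(x - x_k)/\Delta}\,
        \exp\!\left(-\frac{(x - x_k)^{2}}{2\sigma^{2}}\right),

    and an nth derivative at a grid point is approximated by f^{(n)}(x) ≈ Σ_k δ^{(n)}_{\Delta,\sigma}(x − x_k) f(x_k) over a finite computational bandwidth; the particular σ/Δ ratio and bandwidth behind the conical-panel results are choices made in the article and are not reproduced here.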