Smoothing


Kinds of Smoothing

  • exponential smoothing
  • kernel smoothing
  • spatial smoothing
  • surface smoothing
  • tax smoothing

Terms modified by Smoothing

  • smoothing approach
  • smoothing constant
  • smoothing method
  • smoothing methods
  • smoothing parameter
  • smoothing spline
  • smoothing technique

Selected Abstracts


    SMOOTHING WITH AN UNKNOWN INITIAL CONDITION

    JOURNAL OF TIME SERIES ANALYSIS, Issue 2 2003
    Piet De Jong
    Abstract. The smoothing filter is appropriately modified for state space models with an unknown initial condition. Modifications are confined to an initial stretch of the data. An application illustrates procedures. [source]


    EXPONENTIAL SMOOTHING AND NON-NEGATIVE DATA

    AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 4 2009
    Muhammad Akram
    Summary The most common forecasting methods in business are based on exponential smoothing, and the most common time series in business are inherently non-negative. Therefore it is of interest to consider the properties of the potential stochastic models underlying exponential smoothing when applied to non-negative data. We explore exponential smoothing state space models for non-negative data under various assumptions about the innovations, or error, process. We first demonstrate that prediction distributions from some commonly used state space models may have an infinite variance beyond a certain forecasting horizon. For multiplicative error models that do not have this flaw, we show that sample paths will converge almost surely to zero even when the error distribution is non-Gaussian. We propose a new model with similar properties to exponential smoothing, but which does not have these problems, and we develop some distributional properties for our new model. We then explore the implications of our results for inference, and compare the short-term forecasting performance of the various models using data on the weekly sales of over 300 items of costume jewelry. The main findings of the research are that the Gaussian approximation is adequate for estimation and one-step-ahead forecasting. However, as the forecasting horizon increases, the approximate prediction intervals become increasingly problematic. When the model is to be used for simulation purposes, a suitably specified scheme must be employed. [source]
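
    As a point of reference for the model class discussed above, here is a minimal sketch (not the authors' specification) of a local-level exponential smoothing model written in innovations state space form with multiplicative errors, together with the familiar level-update recursion. The parameter names and values (alpha, sigma, the horizon) are illustrative assumptions.

```python
import numpy as np

def simulate_ets_multiplicative(l0, alpha, sigma, horizon, seed=None):
    """Simulate a local-level exponential smoothing model with multiplicative
    errors:  y_t = l_{t-1} * (1 + e_t),   l_t = l_{t-1} * (1 + alpha * e_t),
    with e_t ~ N(0, sigma^2).  Purely illustrative of the model class."""
    rng = np.random.default_rng(seed)
    level = l0
    y = np.empty(horizon)
    for t in range(horizon):
        e = rng.normal(0.0, sigma)
        y[t] = level * (1.0 + e)
        level *= 1.0 + alpha * e
    return y

def ses_levels(y, alpha):
    """Simple exponential smoothing: the level after observing y[t] is the
    one-step-ahead point forecast of y[t+1]."""
    level = y[0]
    levels = [level]
    for obs in y[1:]:
        level = alpha * obs + (1.0 - alpha) * level
        levels.append(level)
    return np.array(levels)

if __name__ == "__main__":
    y = simulate_ets_multiplicative(l0=100.0, alpha=0.2, sigma=0.05, horizon=52, seed=1)
    print(ses_levels(y, alpha=0.2)[-5:])
```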


    PATTERN RECOGNITION VIA ROBUST SMOOTHING WITH APPLICATION TO LASER DATA

    AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 2 2007
    Carlo Grillenzoni
    Summary Nowadays airborne laser scanning is used in many territorial studies, providing point data which may contain strong discontinuities. Motivated by the need to interpolate such data and preserve their edges, this paper considers robust nonparametric smoothers. These estimators, when implemented with bounded loss functions, have suitable jump-preserving properties. Iterative algorithms are developed here, and are equivalent to nonlinear M-smoothers, but have the advantage of resembling the linear Kernel regression. The selection of their coefficients is carried out by combining cross-validation and robust-tuning techniques. Two real case studies and a simulation experiment confirm the validity of the method; in particular, the performance in building recognition is excellent. [source]
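
    The jump-preserving idea behind such M-smoothers can be sketched generically as an iteratively reweighted Nadaraya-Watson regression with a bounded Tukey-biweight loss, which downweights (and eventually ignores) observations on the far side of a discontinuity. The bandwidth, tuning constant and iteration count below are illustrative choices, not the cross-validated and robust-tuned values described in the paper.

```python
import numpy as np

def robust_kernel_smoother(x, y, x_eval, h=1.0, c=4.685, iters=10):
    """Iteratively reweighted Nadaraya-Watson M-smoother with a bounded
    Tukey-biweight loss: observations with large residuals receive zero
    weight, which helps preserve jumps.  x and x_eval are assumed sorted
    in ascending order.  A generic sketch, not the paper's exact algorithm."""
    mhat = np.interp(x_eval, x, y)                      # crude initial fit
    for _ in range(iters):
        new = np.empty_like(mhat)
        for j, x0 in enumerate(x_eval):
            k = np.exp(-0.5 * ((x - x0) / h) ** 2)      # Gaussian kernel weights
            r = y - np.interp(x0, x_eval, mhat)         # residuals from current fit at x0
            s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
            u = r / (c * s)
            w = k * np.where(np.abs(u) < 1.0, (1.0 - u ** 2) ** 2, 0.0)
            new[j] = np.sum(w * y) / (np.sum(w) + 1e-12)
        mhat = new
    return mhat
```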


    CEO Risk-Related Incentives and Income Smoothing

    CONTEMPORARY ACCOUNTING RESEARCH, Issue 4 2009
    Julia Grant
    First page of article [source]


    Income Smoothing and Discretionary R&D Expenditures of Japanese Firms

    CONTEMPORARY ACCOUNTING RESEARCH, Issue 2 2000
    VIVEK MANDE
    Abstract During the recent recession (1991 to present), Japanese firms decreased their spending on R&D for the first time since World War II. The decreases have raised concerns that Japanese managers may be making suboptimal allocations to R&D. We test whether Japanese managers adjust R&D based on short-term performance. Our results show that Japanese firms in several industries adjust their R&D budgets to smooth profits. Interestingly, adjustments to R&D are larger in expansion years. These results, similar to those documented with U.S. managers, point to myopic decision making by Japanese managers. [source]


    Population Ageing, Fiscal Pressure and Tax Smoothing: A CGE Application to Australia

    FISCAL STUDIES, Issue 2 2006
    Ross Guest
    Abstract This paper analyses the fiscal pressure from population ageing using an intertemporal CGE model, applied to Australia, and compares the results with those of a recent government-commissioned study. The latter study uses an alternative modelling approach based on extrapolation rather than optimising behaviour of consumers and firms. The deadweight losses from the fiscal pressure caused by population ageing are equivalent to an annual loss of consumption of $260 per person per year in 2003 dollars in the balanced-budget scenario. A feasible degree of tax smoothing would reduce this welfare loss by an equivalent of $70 per person per year. Unlike the extrapolation-based model, the CGE approach takes account of feedback effects of ageing-induced tax increases on consumption and labour supply, which in turn impact on the ultimate magnitude of fiscal pressure and therefore tax increases. However, a counterfactual simulation suggests that the difference in terms of deadweight losses between the two modelling approaches is modest, at about $30 per person per year. [source]


    Computation of an unsteady complex geometry flow using novel non-linear turbulence models

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 9 2003
    Paul G. Tucker
    Abstract Non-linear zonal turbulence models are applied to an unsteady complex geometry flow. These are generally found to marginally improve predicted turbulence intensities. However, relative to linear models, convergence is mostly difficult to achieve. Clipping of some non-linear Reynolds stress components is required along with velocity field smoothing or alternative measures. Smoothing is naturally achieved through multilevel convergence restriction operators. As a result of convergence difficulties, generally, non-linear model computational costs detract from accuracy gains. For standard Reynolds stress model results, again computational costs are prohibitive. Also, mean velocity profile data accuracies are found worse than for a simple mixing length model. Of the non-linear models considered, the explicit algebraic stress showed greatest promise with respect to accuracy and stability. However, even this shows around a 30% error in total (the sum of turbulence and unsteadiness) intensity. In strong contradiction to measurements the non-linear and Reynolds models predict quasi-steady flows. This is probably a key reason for the total intensity under-predictions. Use of LES in a non-linear model context might help remedy this modelling aspect. Copyright © 2003 John Wiley & Sons, Ltd. [source]


    Smoothing and transporting video in QoS IP networks

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 7 2006
    Khaled Shuaib
    Abstract Real-time traffic such as voice and video, when transported over the Internet, demand stringent quality of service (QoS) requirements. The current Internet as of today is still used as a best effort environment with no quality guarantees. An IP-based Internet that supports different QoS requirements for different applications has been evolving for the past few years. Video streams are bursty in nature due to the instant variability of the video content being encoded. To help mitigate the transport of bursty video streams with minimal loss of information, rate-adaptive shapers (RASs) are usually being used to reduce the burstiness and therefore help preserve the desired quality. When transporting video over a QoS IP network, each stream is classified under a specific traffic profile to which it must conform, to avoid packet loss and picture quality degradation. In this paper we study, evaluate and propose RASs for the transport of video over a QoS IP network. We utilize the encoding video parameters for choosing the appropriate configuration needed to support the real-time transport of Variable Bit Rate (VBR) encoded video streams. The performance evaluation of the different RASs is based on the transport of MPEG-4 video streams encoded as VBR. The performance is studied based on looking at the effect of various parameters associated with the RASs on the effectiveness of smoothing out the burstiness of video and minimizing the probability of packet loss. Copyright © 2005 John Wiley & Sons, Ltd. [source]
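
    The conceptual building block behind such shapers is the token bucket, which releases traffic at a configured rate while tolerating limited bursts. The sketch below is a generic token-bucket release schedule under simplifying assumptions (FIFO queue, each packet no larger than the bucket); it is not the rate-adaptive shaper evaluated in the paper.

```python
def token_bucket_release_times(arrivals, rate_bytes_per_s, bucket_bytes):
    """Release schedule of a simple token-bucket shaper.

    arrivals: list of (arrival_time_s, packet_size_bytes), sorted by time.
    Tokens accrue continuously at rate_bytes_per_s up to bucket_bytes; a packet
    leaves once it reaches the head of the FIFO queue and enough tokens exist.
    Assumes every packet fits in the bucket (size <= bucket_bytes)."""
    tokens = float(bucket_bytes)
    last_update = 0.0     # time at which `tokens` was last refreshed
    prev_release = 0.0    # FIFO: packets leave in arrival order
    releases = []
    for arrival, size in arrivals:
        start = max(arrival, prev_release)           # earliest possible release
        tokens = min(bucket_bytes, tokens + (start - last_update) * rate_bytes_per_s)
        last_update = start
        if tokens < size:                            # wait for the deficit to refill
            wait = (size - tokens) / rate_bytes_per_s
            start += wait
            tokens = min(bucket_bytes, tokens + wait * rate_bytes_per_s)
            last_update = start
        tokens -= size
        releases.append(start)
        prev_release = start
    return releases

# e.g. a burst of three 1500-byte packets arriving at t = 0 s is spread out:
# token_bucket_release_times([(0, 1500)] * 3, rate_bytes_per_s=1500, bucket_bytes=1500)
# -> [0.0, 1.0, 2.0]
```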


    Forecasting with Exponential Smoothing: The State Space Approach by Rob J. Hyndman, Anne B. Koehler, J. Keith Ord, Ralph D. Snyder

    INTERNATIONAL STATISTICAL REVIEW, Issue 2 2009
    David J. Hand
    No abstract is available for this article. [source]


    Smoothing that does not blur: Effects of the anisotropic approach for evaluating diffusion tensor imaging data in the clinic

    JOURNAL OF MAGNETIC RESONANCE IMAGING, Issue 3 2010
    Marta Moraschi MS
    Abstract Purpose: To compare the effects of anisotropic and Gaussian smoothing on the outcomes of diffusion tensor imaging (DTI) voxel-based (VB) analyses in the clinic, in terms of signal-to-noise ratio (SNR) enhancement and directional information and boundary structures preservation. Materials and Methods: DTI data of 30 Alzheimer's disease (AD) patients and 30 matched control subjects were obtained at 3T. Fractional anisotropy (FA) maps with variable degrees and quality (Gaussian and anisotropic) of smoothing were created and compared with an unsmoothed dataset. The two smoothing approaches were evaluated in terms of SNR improvements, capability to separate differential effects between patients and controls by a standard VB analysis, and level of artifacts introduced by the preprocessing. Results: Gaussian smoothing regionally biased the FA values and introduced a high variability of results in clinical analysis, greatly dependent on the kernel size. On the contrary, anisotropic smoothing proved itself capable of enhancing the SNR of images and maintaining boundary structures, with only moderate dependence of results on smoothing parameters. Conclusion: Our study suggests that anisotropic smoothing is more suitable in DTI studies; however, regardless of technique, a moderate level of smoothing seems to be preferable considering the artifacts introduced by this manipulation. J. Magn. Reson. Imaging 2010;31:690–697. © 2010 Wiley-Liss, Inc. [source]
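
    Anisotropic smoothing of this kind is commonly implemented as edge-preserving diffusion in the style of Perona and Malik; the sketch below contrasts such a filter with isotropic Gaussian blurring on a synthetic 2D fractional-anisotropy slice. The conductance function, kappa, step size and iteration count are illustrative assumptions, not the study's processing protocol.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def perona_malik(img, niter=20, kappa=0.1, gamma=0.2):
    """Edge-preserving (Perona-Malik) diffusion of a 2D image.  The conductance
    g = exp(-(|grad|/kappa)**2) shuts smoothing down across strong edges.
    Boundaries are handled periodically via np.roll, a simplification."""
    u = img.astype(float).copy()
    for _ in range(niter):
        # finite differences to the four nearest neighbours
        diffs = [np.roll(u, shift, axis) - u
                 for shift, axis in ((-1, 0), (1, 0), (-1, 1), (1, 1))]
        u = u + gamma * sum(np.exp(-(d / kappa) ** 2) * d for d in diffs)
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fa = np.clip(rng.normal(0.4, 0.05, size=(64, 64)), 0.0, 1.0)  # synthetic FA slice
    iso = gaussian_filter(fa, sigma=1.5)   # isotropic: blurs edges and noise alike
    aniso = perona_malik(fa)               # anisotropic: smooths within regions only
```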


    3D Surface Smoothing for Arbitrary FE-Meshes by Means of an Enhanced Surface Description

    PROCEEDINGS IN APPLIED MATHEMATICS & MECHANICS, Issue 1 2003
    P. Helnwein Dipl.-Ing.
    This paper describes a novel approach to the problem of 3D surface smoothing by introducing an enhanced surface description of the structural model. The idea of the enhanced surface model is the introduction of an independent vector field for the description of the surface normal field. It is connected to the discretized surface by means of a weak form of the orthogonality conditions. Surface smoothing then is applied element-by-element using the interpolation of the surface patch and the nodal normal vectors. [source]


    Low-Rank Smoothing Splines on Complicated Domains

    BIOMETRICS, Issue 1 2007
    Haonan Wang
    Summary Smoothing over a domain with irregular boundaries or interior gaps and holes is addressed. Consider the problem of estimating mercury in sediment concentrations in the estuarine waters in New Hampshire. A modified version of low-rank thin plate splines (LTPS) is introduced where the geodesic distance is applied to evaluate dissimilarity of any two data observations: loosely speaking, distances between locations are not measured as the crow flies, but as the fish swims. The method is compared with competing smoothing techniques, LTPS, and finite element L-splines. [source]


    Freeform Shape Representations for Efficient Geometry Processing

    COMPUTER GRAPHICS FORUM, Issue 3 2003
    Leif Kobbelt
    The most important concepts for the handling and storage of freeform shapes in geometry processing applications are parametric representations and volumetric representations. Both have their specific advantages and drawbacks. While the algebraic complexity of volumetric representations is independent from the shape complexity, the domain of a parametric representation usually has to have the same structure as the surface itself (which sometimes makes it necessary to update the domain when the surface is modified). On the other hand, the topology of a parametrically defined surface can be controlled explicitly while in a volumetric representation, the surface topology can change accidentally during deformation. A volumetric representation reduces distance queries or inside/outside tests to mere function evaluations but the geodesic neighborhood relation between surface points is difficult to resolve. As a consequence, it seems promising to combine parametric and volumetric representations to effectively exploit both advantages. In this talk, a number of projects are presented and discussed in which such a combination leads to efficient and numerically stable algorithms for the solution of various geometry processing tasks. Applications include global error control for mesh decimation and smoothing, topology control for level-set surfaces, and shape modeling with unstructured point clouds. [source]


    Gradient Estimation in Volume Data using 4D Linear Regression

    COMPUTER GRAPHICS FORUM, Issue 3 2000
    László Neumann
    In this paper a new gradient estimation method is presented which is based on linear regression. Previous contextual shading techniques try to fit an approximate function to a set of surface points in the neighborhood of a given voxel. Therefore a system of linear equations has to be solved using the computationally expensive Gaussian elimination. In contrast, our method approximates the density function itself in a local neighborhood with a 3D regression hyperplane. This approach also leads to a system of linear equations but we will show that it can be solved with an efficient convolution. Our method provides at each voxel location the normal vector and the translation of the regression hyperplane which are considered as a gradient and a filtered density value respectively. Therefore this technique can be used for surface smoothing and gradient estimation at the same time. [source]
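
    The reduction of the hyperplane fit to a convolution can be spelled out in a few lines: with symmetric, unit-spaced offsets the least-squares slope along each axis is sum(offset * density) / sum(offset**2), and the intercept is the neighbourhood mean. The sketch below assumes a plain 3x3x3 unweighted neighbourhood, which is a simplification of the weighted 4D regression described in the paper.

```python
import numpy as np
from scipy.ndimage import correlate

def regression_gradient(volume):
    """Gradient estimation by fitting a least-squares hyperplane
    f ~ a + b*x + c*y + d*z to each local neighbourhood of a 3D volume.
    With symmetric unit-spaced offsets the slopes decouple, e.g.
    b = sum(x_i * f_i) / sum(x_i**2), so each slope is a single filtering
    pass; `correlate` is used so the stencil is applied without mirroring.
    A 3x3x3 unweighted neighbourhood is assumed here for simplicity."""
    offsets = np.array([-1.0, 0.0, 1.0])
    ox, oy, oz = np.meshgrid(offsets, offsets, offsets, indexing="ij")
    norm = np.sum(ox ** 2)                    # = 18 for the 3x3x3 stencil
    gx = correlate(volume, ox / norm, mode="nearest")
    gy = correlate(volume, oy / norm, mode="nearest")
    gz = correlate(volume, oz / norm, mode="nearest")
    smoothed = correlate(volume, np.ones((3, 3, 3)) / 27.0, mode="nearest")
    return gx, gy, gz, smoothed   # gradient components and the filtered density
```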


    Novel Pretrichial Browlift Technique and Review of Methods and Complications

    DERMATOLOGIC SURGERY, Issue 9 2009
    COURTNEY S. McGUIRE BS
    BACKGROUND The upper third of the face is integral to our perception of youth and beauty. While the eyelids anchor this facial cosmetic unit, the eyebrows and forehead are intrinsically linked to the upper eyelids, and their position and texture play an important role in creating pleasing eyes as well as conveying mood and youth. The most common browlifts are performed with endoscopic visualization. Yet, this technique requires special equipment and a prolonged learning curve. OBJECTIVE To demonstrate a novel pretrichial technique and to review different browlift methods and their potential adverse effects. METHODS Case series and review of the literature. RESULTS The pretrichial browlift results in a mild to moderate browlift with secondary smoothing of the forehead topography. Aside from bruising and swelling, it results in minimal adverse effects. Other techniques are also effective but may create a larger scar such as a direct browlift, may be more difficult in terms of approach such as the browpexy, or require endoscopes. CONCLUSION Browlifts are an important procedure in rejuvenating the upper third of the face and improving the overall facial aesthetic appearance. The pretrichial browlift is a less invasive open technique that is safe and effective for the appropriate patient. [source]


    Differential Long-Term Stimulation of Type I versus Type III Collagen After Infrared Irradiation

    DERMATOLOGIC SURGERY, Issue 7 2009
    YOHEI TANAKA MD
    BACKGROUND The dermis is composed primarily of type I (soft) and type III (rigid scar-like) collagen. Collagen degradation is considered the primary cause of skin aging. Studies have proved the efficacy of infrared irradiation on collagen stimulation but have not investigated the differential long-term effects of infrared irradiation on type I and type III collagen. OBJECTIVE To determine differential long-term stimulation of type I and type III collagen after infrared (1,100–1,800 nm) irradiation. METHODS AND MATERIALS In vivo rat tissue was irradiated using the infrared device. Histology samples were analyzed for type I and III collagen stimulation, visual changes from baseline, and treatment safety up to 90 days post-treatment. RESULTS Infrared irradiation provided long-term stimulation of type I collagen and temporary stimulation of type III collagen. Treatment also created long-term smoothing of the epidermis, with no observed complications. CONCLUSIONS Infrared irradiation provides safe, consistent, long-term stimulation of type I collagen but only short-term stimulation in the more rigid type III collagen. This is preferential for cosmetic patients looking for improvement in laxity and wrinkles while seeking smoother, more youthful skin. [source]


    Effects of terrain smoothing on topographic shielding correction factors for cosmogenic nuclide-derived estimates of basin-averaged denudation rates

    EARTH SURFACE PROCESSES AND LANDFORMS, Issue 1 2009
    Kevin P. Norton
    Abstract Estimation of spatially averaged denudation rates from cosmogenic nuclide concentrations in sediments depends on the surface production rates, the scaling methods of cosmic ray intensities, and the correction algorithms for skyline, snow and vegetation shielding used to calculate terrestrial cosmogenic nuclide production. While the calculation of surface nuclide production and application of latitude, altitude and palaeointensity scaling algorithms are subjects of active research, the importance of additional correction for shielding by topographic obstructions, snow and vegetation is the subject of ongoing debate. The derivation of an additional correction factor for skyline shielding for large areas is still problematic. One important issue that has yet to be addressed is the effect of the accuracy and resolution of terrain representation by a digital elevation model (DEM) on topographic shielding correction factors. Topographic metrics scale with the resolution of the elevation data, and terrain smoothing has a potentially large effect on the correction of terrestrial cosmogenic nuclide production rates for skyline shielding. For rough, high-relief landscapes, the effect of terrain smoothing can easily exceed analytical errors, and should be taken into account. Here we demonstrate the effect of terrain smoothing on topographic shielding correction factors for various topographic settings, and introduce an empirical model for the estimation of topographic shielding factors based on landscape metrics. Copyright © 2008 John Wiley and Sons, Ltd. [source]
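
    For reference, the skyline shielding factor itself is usually computed with a horizon-integration formula of the kind given by Dunne et al. (1999); the sketch below assumes that commonly used form (angular flux exponent m ≈ 2.3) and is offered only as a reference point, not as the authors' parameterisation.

```python
import numpy as np

def skyline_shielding(azimuth_width_deg, horizon_angle_deg, m=2.3):
    """Topographic (skyline) shielding factor for cosmogenic nuclide production,
    using the commonly applied horizon-integration form
        S = 1 - (1 / (2*pi)) * sum(dphi_i * sin(theta_i)**(m + 1)),
    where each obstruction segment i spans an azimuthal width dphi_i and rises
    theta_i above the horizontal, and m ~ 2.3 describes the angular distribution
    of the cosmic-ray flux.  Offered as a reference form only."""
    dphi = np.radians(np.asarray(azimuth_width_deg, dtype=float))
    theta = np.radians(np.asarray(horizon_angle_deg, dtype=float))
    return 1.0 - np.sum(dphi * np.sin(theta) ** (m + 1.0)) / (2.0 * np.pi)

# e.g. a ridge spanning 90 degrees of azimuth at 20 degrees inclination:
# skyline_shielding([90.0], [20.0])  ->  about 0.993
```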


    A method of new filter design based on the co-occurrence histogram

    ELECTRICAL ENGINEERING IN JAPAN, Issue 1 2009
    Takayuki Fujiwara
    Abstract We have proposed that the co-occurrence frequency image (CFI) based on the co-occurrence frequency histogram of the gray value of an image can be used in a new scheme for image feature extraction. This paper proposes new enhancement filters to achieve sharpening and smoothing of images. These filters are very similar in result but quite different in process from those which have been used previously. Thus, we show the possibility of a new paradigm for basic image enhancement filters making use of the CFI. © 2008 Wiley Periodicals, Inc. Electr Eng Jpn, 166(1): 36–42, 2009; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/eej.20699 [source]


    Signal denoising and baseline correction by discrete wavelet transform for microchip capillary electrophoresis

    ELECTROPHORESIS, Issue 18 2003
    Bi-Feng Liu
    Abstract Signal denoising and baseline correction using discrete wavelet transform (DWT) are described for microchip capillary electrophoresis (MCE). DWT was performed on an electropherogram describing a separation of nine tetramethylrhodamine-5-isothiocyanate labeled amino acids, following MCE with laser-induced fluorescence detection, using a Daubechies 5 wavelet at a decomposition level of 6. The denoising efficiency was compared with, and proved to be superior to, other commonly used denoising techniques such as Fourier transform, Savitzky-Golay smoothing and moving average, in terms of noise removal and peak preservation by direct visual inspection. Novel strategies for baseline correction were proposed, with a special interest in baseline drift that frequently occurred in chromatographic and electrophoretic separations. [source]
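
    A minimal sketch of this style of wavelet processing, assuming the PyWavelets package and using a Daubechies 5 wavelet at decomposition level 6 as in the abstract; the soft universal threshold and the approximation-only baseline estimate are generic choices rather than the authors' exact strategy.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_denoise(signal, wavelet="db5", level=6):
    """Denoise a 1-D electropherogram with the discrete wavelet transform:
    decompose, soft-threshold the detail coefficients, reconstruct.
    The universal threshold sigma * sqrt(2 * log(n)) is a generic choice."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise scale from finest details
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)[: len(signal)]

def baseline_estimate(signal, wavelet="db5", level=6):
    """Crude baseline: keep only the coarse approximation coefficients, zero
    every detail band, then reconstruct.  Subtracting this from the signal
    gives a simple drift correction."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]
```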


    Variable smoothing in Bayesian intrinsic autoregressions

    ENVIRONMETRICS, Issue 8 2007
    Mark J. Brewer
    Abstract We introduce an adapted form of the Markov random field (MRF) for Bayesian spatial smoothing with small-area data. This new scheme allows the amount of smoothing to vary in different parts of a map by employing area-specific smoothing parameters, related to the variance of the MRF. We take an empirical Bayes approach, using variance information from a standard MRF analysis to provide prior information for the smoothing parameters of the adapted MRF. The scheme is shown to produce proper posterior distributions for a broad class of models. We test our method on both simulated and real data sets, and for the simulated data sets, the new scheme is found to improve modelling of both slowly-varying levels of smoothness and discontinuities in the response surface. Copyright © 2007 John Wiley & Sons, Ltd. [source]
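
    For orientation, the sketch below writes down the full conditional of a textbook intrinsic Gaussian Markov random field with an area-specific variance substituted in, which conveys the 'variable smoothing' intuition: a larger variance for an area lets it depart further from the mean of its neighbours. The parameterisation here is the standard ICAR one and may differ in detail from the adapted MRF of the paper.

```python
import numpy as np

def icar_full_conditional(values, neighbors, tau2):
    """Full conditional of area i in an intrinsic CAR / Gaussian Markov random
    field:  x_i | x_(-i) ~ Normal(mean of neighbouring values, tau2_i / n_i).
    Making tau2 area-specific is the 'variable smoothing' idea: a larger tau2_i
    lets area i sit further from its neighbours.  Each area is assumed to have
    at least one neighbour; this is the textbook ICAR parameterisation."""
    values = np.asarray(values, dtype=float)
    means = np.array([values[list(nb)].mean() for nb in neighbors])
    variances = np.array([tau2[i] / len(nb) for i, nb in enumerate(neighbors)])
    return means, variances
```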


    Comparison of missing value imputation methods for crop yield data

    ENVIRONMETRICS, Issue 4 2006
    Ravindra S. Lokupitiya
    Abstract Most ecological data sets contain missing values, a fact which can cause problems in the analysis and limit the utility of resulting inference. However, ecological data also tend to be spatially correlated, which can aid in estimating and imputing missing values. We compared four existing methods of estimating missing values: regression, kernel smoothing, universal kriging, and multiple imputation. Data on crop yields from the National Agricultural Statistical Survey (NASS) and the Census of Agriculture (Ag Census) were the basis for our analysis. Our goal was to find the best method to impute missing values in the NASS datasets. For this comparison, we selected the NASS data for barley crop yield in 1997 as our reference dataset. We found in this case that multiple imputation and regression were superior to methods based on spatial correlation. Universal kriging was found to be the third best method. Kernel smoothing seemed to perform very poorly. Copyright © 2005 John Wiley & Sons, Ltd. [source]
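
    Of the four methods compared, kernel smoothing is the simplest to sketch: a missing yield is imputed as a Gaussian-kernel weighted mean of the observed yields at nearby locations. The bandwidth and the treatment of coordinates as planar are simplifying assumptions.

```python
import numpy as np

def kernel_impute(coords, values, bandwidth=1.0):
    """Impute NaN entries in `values` (one per location in `coords`) with a
    Nadaraya-Watson estimate: a Gaussian-kernel weighted mean of the observed
    values at nearby locations.  A generic sketch of spatial kernel smoothing."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    filled = values.copy()
    observed = ~np.isnan(values)
    for i in np.flatnonzero(np.isnan(values)):
        d2 = np.sum((coords[observed] - coords[i]) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / bandwidth ** 2)
        filled[i] = np.sum(w * values[observed]) / np.sum(w)
    return filled
```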


    Methods for the analysis of trends in streamflow response due to changes in catchment condition

    ENVIRONMETRICS, Issue 7 2001
    R. A. Letcher
    Abstract Two algorithms for analysing changes in streamflow response due to changes in land use and farm dam development, based on the Estimated Generalized Least Squares (EGLS) and the Generalized Additive Model (GAM) methods, were compared on three catchments in the Macquarie River Basin in NSW, Australia. In order to account for the influence of climatic conditions on streamflow response, the IHACRES conceptual rainfall-runoff model was calibrated on a daily time step over two-year periods then simulated over the entire period of concurrent rainfall, streamflow and temperature data. Residuals or differences between observed and simulated flows were calculated. The EGLS method was applied to a smoothing of the residual (daily) time series. Such residuals represent the difference between the simulated streamflow response to a fixed catchment condition (in the calibration period) and that due to the actual varying conditions throughout the record period. The GAM method was applied to quarterly aggregated residuals. The methods provided similar qualitative results for trends in residual streamflow response in each catchment for models with a good fitting performance on the calibration period in terms of a number of statistics, i.e. the coefficient of efficiency R2, bias and average relative parameter error (ARPE). It was found that the fit of the IHACRES model to the calibration period is critically important in determining trend values and significance. Models with well identified parameters and less correlation between rainfall and model residuals are likely to give the best results for trend analysis. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    The Kalman filter for the pedologist's tool kit

    EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 6 2006
    R. Webster
    Summary The Kalman filter is a tool designed primarily to estimate the values of the 'state' of a dynamic system in time. There are two main equations. These are the state equation, which describes the behaviour of the state over time, and the measurement equation, which describes at what times and in what manner the state is observed. For the discrete Kalman filter, discussed in this paper, the state equation is a stochastic difference equation that incorporates a random component for noise in the system and that may include external forcing. The measurement equation is defined such that it can handle indirect measurements, gaps in the sequence of measurements and measurement errors. The Kalman filter operates recursively to predict forwards one step at a time the state of the system from the previously predicted state and the next measurement. Its predictions are optimal in the sense that they have minimum variance among all unbiased predictors, and in this respect the filter behaves like kriging. The equations can also be applied in reverse order to estimate the state variable at all time points from a complete series of measurements, including past, present and future measurements. This process is known as smoothing. This paper describes the 'predictor-corrector' algorithm for the Kalman filter and smoother with all the equations in full, and it illustrates the method with examples on the dynamics of groundwater level in the soil. The height of the water table at any one time depends partly on the height at previous times and partly on the precipitation excess. Measurements of the height of water table and their errors are incorporated into the measurement equation to improve prediction. Results show how diminishing the measurement error increases the accuracy of the predictions, and estimates achieved with the Kalman smoother are even more accurate. [source]
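
    The predictor-corrector recursion is compact enough to state directly. Below is a minimal sketch of the standard discrete Kalman filter for a linear-Gaussian state space model, with missing observations handled by skipping the correction step; the matrix names are generic and external forcing is omitted, so this is not the paper's exact notation.

```python
import numpy as np

def kalman_filter(y, A, H, Q, R, x0, P0):
    """Discrete Kalman filter in 'predictor-corrector' form.
    State equation:        x_t = A x_{t-1} + w_t,   w_t ~ N(0, Q)
    Measurement equation:  y_t = H x_t     + v_t,   v_t ~ N(0, R)
    Missing observations (np.nan) are handled by skipping the correction step."""
    x, P = np.asarray(x0, dtype=float), np.asarray(P0, dtype=float)
    states, covs = [], []
    for yt in y:
        # predictor: propagate the state and its uncertainty one step forward
        x = A @ x
        P = A @ P @ A.T + Q
        # corrector: blend in the measurement, if one is available
        if not np.any(np.isnan(yt)):
            S = H @ P @ H.T + R                    # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
            x = x + K @ (np.atleast_1d(yt) - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
        states.append(x.copy())
        covs.append(P.copy())
    return np.array(states), np.array(covs)
```

    External forcing such as the precipitation excess would enter the prediction step as an additional term in the state equation, and the Kalman smoother is obtained by running a backward recursion over these filtered estimates.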


    Fatigue behaviour of duplex stainless steel reinforcing bars subjected to shot peening

    FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 7 2009
    E. REAL
    ABSTRACT The influence of shot peening on the fatigue properties of duplex stainless steel reinforcing bars manufactured using both hot- and cold-rolled processes was studied. The S-N curves of the bars before and after the shot-peening process were determined, showing that shot peening improves the fatigue behaviour of the rebars. This improvement is essentially due to the introduction of a compressive residual stress field in the surface of the reinforcing bars, but also to the smoothing of the surface flaws and cold working generated during the manufacturing process. This improvement is much greater in the case of the hot-rolled bars, mainly as a result of their much higher ability for plastic deformation, whereas cold-rolled bars had a much higher hardness. A more severe peening action capable of promoting greater plastic deformation on the bar surface is judged necessary to improve the fatigue resistance of cold-rolled rebars. [source]


    AN EVALUATION OF SURFACE HARDNESS OF NATURAL AND MODIFIED ROCKS USING SCHMIDT HAMMER: STUDY FROM NORTHWESTERN HIMALAYA, INDIA

    GEOGRAFISKA ANNALER SERIES A: PHYSICAL GEOGRAPHY, Issue 3 2009
    VIKRAM GUPTA
    ABSTRACT. Four rock types (quartz mica gneiss, schist, quartzite and calc-silicate) located in the Satluj and Alaknanda valleys were used to test whether a Schmidt hammer can be used to distinguish rock surfaces affected by various natural and man-induced processes like manual smoothing of rock surfaces by grindstone, surface weathering, deep weathering, fluvial polishing and blasting during road construction. Surfaces polished by fluvial process yielded the highest Schmidt hammer rebound (R-) values and the blast-affected surfaces yielded the lowest R-values for the same rock type. Variations in R-value also reflect the degree of weathering of the rock surfaces. It has been further observed that, for all the rock types, the strength of relationship between R-values for the treated surfaces (manual smoothing of rock surface by grindstone) and the unconfined compressive strength (UCS) is higher than for the fresh natural surfaces. [source]


    Sediment Distribution Around Glacially Abraded Bedrock Landforms (Whalebacks) at Lago Tranquilo, Chile

    GEOGRAFISKA ANNALER SERIES A: PHYSICAL GEOGRAPHY, Issue 3 2005
    Neil F. Glasser
    Whalebacks are convex landforms created by the smoothing of bedrock by glacial processes. Their formation is attributed to glacial abrasion either by bodies of subglacial sediment sliding over bedrock or by individual clasts contained within ice. This paper reports field measurements of sediment depth around two whaleback landforms in order to investigate the relationship between glacigenic deposits and whaleback formation. The study site, at Lago Tranquilo in Chilean Patagonia, is situated within the Last Glacial Maximum (LGM) ice limits. The two whalebacks are separated by intervening depressions in which sediment depths are generally 0.2 to 0.3 m. Two facies occur on and around the whalebacks. These facies are: (1) angular gravel found only on the surface of the whalebacks, interpreted as bedrock fracturing in response to unloading of the rock following pressure release after ice recession, and (2) sandy boulder-gravel in the sediment-filled depressions between the two whalebacks, interpreted as an ice-marginal deposit, with a mixture of sediment types including basal glacial and glaciofluvial sediment. Since the whalebacks have heavily abraded and striated surfaces but are surrounded by only a patchy and discontinuous layer of sediment, the implication is that surface abrasion of the whalebacks was achieved primarily by clasts entrained in basal ice, not by subglacial till sliding. [source]


    Two-dimensional inversion of magnetotelluric data with consecutive use of conjugate gradient and least-squares solution with singular value decomposition algorithms

    GEOPHYSICAL PROSPECTING, Issue 1 2008
    M. Emin Candansayar
    ABSTRACT I investigated two-dimensional magnetotelluric data inversion algorithms with respect to two significant aspects of a linearized inversion approach. The first is the method of minimization and the second is the type of stabilizing functional used in the parametric functional. The results of two well-known inversion algorithms, namely conjugate gradient and the least-squares solution with singular value decomposition, were compared in terms of accuracy and CPU time. In addition, magnetotelluric data inversion with various stabilizers, such as the L2-norm, smoothing, minimum support, minimum gradient support and first-order minimum entropy, was examined. A new inversion algorithm, named least-squares solution with singular value decomposition and conjugate gradient, is suggested on the basis of the comparisons carried out on the least-squares solution with singular value decomposition and conjugate gradient algorithms under a variety of stabilizers. Inversion results for synthetic data showed that the newly suggested algorithm yields better results than the individual implementations of the conjugate gradient and least-squares solution with singular value decomposition algorithms. Inversion results from the suggested algorithm and the above-mentioned algorithms for field data collected along a line crossing the North Anatolian Fault zone were also compared with each other, and the results are discussed. [source]
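
    As a reference point for the least-squares-with-singular-value-decomposition ingredient, the sketch below computes one damped (Marquardt-style) linearised inversion step through the SVD of the Jacobian; the damping factor and truncation tolerance are illustrative, and this is not the author's combined algorithm.

```python
import numpy as np

def damped_svd_step(J, residual, damping=0.01, rcond=1e-8):
    """One linearised inversion step dm = V diag(s / (s**2 + damping**2)) U^T r,
    i.e. a damped least-squares model update computed through the singular
    value decomposition of the Jacobian J.  Singular values below
    rcond * s_max are truncated for numerical stability."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > rcond * s.max()
    U, s, Vt = U[:, keep], s[keep], Vt[keep]
    filt = s / (s ** 2 + damping ** 2)
    return Vt.T @ (filt * (U.T @ residual))
```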


    Interannual to decadal changes in area burned in Canada from 1781 to 1982 and the relationship to Northern Hemisphere land temperatures

    GLOBAL ECOLOGY, Issue 5 2007
    Martin P. Girardin
    ABSTRACT Aim: Temporal variability of annual area burned in Canada (AAB-Can) from AD 1781 to 1982 is inferred from tree-ring width data. Next, correlation analysis is applied between the AAB-Can estimates and Northern Hemisphere (NH) warm season land temperatures to link recent interannual to decadal changes in area burned with large-scale climate variations. The rationale in this use of tree rings is that annual radial increments produced by trees can approximate area burned through sensing climate variations that promote fire activity. Location: The statistical reconstruction of area burned is at the scale of Canada. Methods: The data base of total area burned per year in Canada is used as the predictand. A set of 53 multicentury tree-ring width chronologies distributed across Canada is used as predictors. A linear model relating the predictand to the tree-ring predictors is fitted over the period 1920–82. The regression coefficients estimated for the calibration period are applied to the tree-ring predictors for as far back as 1781 to produce a series of AAB-Can estimates. Results: The AAB-Can estimates account for 44.1% of the variance in the observed data recorded from 1920 to 1982 (92.2% after decadal smoothing) and were verified using a split sample calibration-verification scheme. The statistical reconstruction indicates that the positive trend in AAB-Can from c. 1970–82 was preceded by three decades during which area burned was at its lowest during the past 180 years. Correlation analysis with NH warm season land temperatures from the late 18th century to the present revealed a positive statistical association with these estimates. Main conclusions: As with previous studies, it is demonstrated that the upward trend in AAB-Can is unlikely to be an artefact from changing fire reporting practices and may have been driven by large-scale climate variations. [source]
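
    The calibrate-then-backcast logic can be summarised in a few lines: fit the area-burned series on the tree-ring predictors over the instrumental calibration period, then apply the fitted coefficients to the full proxy matrix. The sketch below uses ordinary least squares and a generic reduction-of-error verification statistic; the variable names are illustrative and the study's exact regression procedure may differ.

```python
import numpy as np

def calibrate_and_reconstruct(proxies, target, calib_idx):
    """Fit target ~ proxies by least squares over the calibration rows
    (calib_idx), then apply the coefficients to every year of the proxy
    matrix to extend the estimate back to the start of the proxy record."""
    X = np.column_stack([np.ones(len(proxies)), proxies])   # add intercept
    coef, *_ = np.linalg.lstsq(X[calib_idx], target[calib_idx], rcond=None)
    return X @ coef

def reduction_of_error(obs, est, calib_mean):
    """Split-sample verification skill: RE > 0 beats the calibration-period mean."""
    return 1.0 - np.sum((obs - est) ** 2) / np.sum((obs - calib_mean) ** 2)
```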


    The role of the staff MFF in distributing NHS funding: taking account of differences in local labour market conditions

    HEALTH ECONOMICS, Issue 5 2010
    Robert Elliott
    Abstract The National Health Service (NHS) in England distributes substantial funds to health-care providers in different geographical areas to pay for the health care required by the populations they serve. The formulae that determine this distribution reflect populations' health needs and local differences in the prices of inputs. Labour is the most important input and area differences in the price of labour are measured by the Staff Market Forces Factor (MFF). This Staff MFF has been the subject of much debate. Though the Staff MFF has operated for almost 30 years this is the first academic paper to evaluate and test the theory and method that underpin the MFF. The theory underpinning the Staff MFF is the General Labour Market method. The analysis reported here reveals empirical support for this theory in the case of nursing staff employed by NHS hospitals, but fails to identify similar support for its application to medical staff. The paper demonstrates the extent of spatial variation in private sector and NHS wages, considers the choice of comparators and spatial geography, incorporates vacancy modelling and illustrates the effect of spatial smoothing. Copyright © 2009 John Wiley & Sons, Ltd. [source]