Error Analysis
Selected Abstracts

Sediment budget for an eroding peat-moorland catchment in northern England
EARTH SURFACE PROCESSES AND LANDFORMS, Issue 5 2005
Martin Evans
Abstract This paper describes a detailed contemporary sediment budget for a small peat-covered upland catchment in Upper Teesdale, northern England. The sediment budget was constructed by measuring: (1) sediment transfers on slopes, (2) sediment flux on the floodplain and through the main stream channel and (3) sediment yield at the catchment outlet. Measurements were taken over a four-year monitoring period between July 1997 and October 2001, when interannual variations in runoff were relatively small. Three sites were selected to represent the major erosion subsystems within the catchment: an area of bare peat flats, a pair of peat gullies, and a 300 m channel reach. Collectively the sites allow detailed characterization of the main patterns of sediment flux within the catchment and can be scaled up to provide an estimate of the sediment budget for the catchment as a whole. This constitutes the first attempt to provide a complete description of the functioning of the sediment system in eroding blanket peatlands. Results demonstrate that fluvial suspended sediment flux is controlled to a large degree by channel processes. Gully erosion rates are high, but coupling between the slopes and channels is poor, so the contribution of hillslope sediment supply to catchment output is reduced. Consequently, contemporary sediment export from the catchment is controlled primarily by in-channel processes. Error analysis of the sediment budgets is used to discuss the limitations of this approach for assessing upland sediment dynamics. A 60 per cent reduction in fluvial suspended sediment yield from Rough Sike over the last 40 years correlates with photographic evidence of significant re-vegetation of gullies over a similar period. This strongly suggests that the reduced sediment yields are a function of increased sediment storage at the slope–channel interface, associated with re-vegetation. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Error analysis for the evaluation of model performance: rainfall–runoff event time series data
HYDROLOGICAL PROCESSES, Issue 8 2005
Edzer J. Pebesma
Abstract This paper provides a procedure for evaluating model performance where model predictions and observations are given as time series data. The procedure focuses on the analysis of error time series by graphing them, summarizing them, and predicting their variability through available information (recalibration). We analysed two rainfall–runoff events from the R-5 data set, and evaluated 12 distinct model simulation scenarios for these events, of which 10 were conducted with the quasi-physically-based rainfall–runoff model (QPBRRM) and two with the integrated hydrology model (InHM). The QPBRRM simulation scenarios differ in their representation of saturated hydraulic conductivity. The two InHM simulation scenarios differ with respect to the inclusion of the roads at R-5. The two models, QPBRRM and InHM, differ strongly in the complexity and number of processes included. For all model simulations we found that errors could be predicted fairly well to very well, based on model output, or based on smooth functions of lagged rainfall data. The errors remaining after recalibration are much more alike in terms of variability than those without recalibration. In this paper, recalibration is not meant to fix models, but merely serves as a diagnostic tool that exhibits the magnitude and direction of model errors and indicates whether these model errors are related to model inputs such as rainfall. Copyright © 2004 John Wiley & Sons, Ltd. [source]
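The recalibration diagnostic in the abstract above, predicting the model-error series from smooth functions of lagged rainfall, can be sketched as an ordinary least-squares regression. The sketch below is a minimal illustration under assumed names and a simple lagged-rainfall design matrix; it is not the QPBRRM/InHM setup used in the paper.

```python
import numpy as np

def recalibrate_errors(error, rainfall, max_lag=5):
    """Diagnostic 'recalibration': regress a model-error time series on
    lagged rainfall and report how much error variance is explained.
    Illustrative sketch; the lag basis is an assumption."""
    n = len(error)
    # Design matrix: intercept plus rainfall at lags 0..max_lag
    X = np.ones((n - max_lag, max_lag + 2))
    for lag in range(max_lag + 1):
        X[:, lag + 1] = rainfall[max_lag - lag : n - lag]
    y = error[max_lag:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ beta
    return beta, 1.0 - residual.var() / y.var()  # coefficients, R^2

# Synthetic example: errors correlated with recent rainfall
rng = np.random.default_rng(1)
rain = rng.gamma(2.0, 1.0, size=500)
err = 0.4 * rain + 0.2 * np.roll(rain, 1) + rng.normal(0, 0.3, 500)
beta, r2 = recalibrate_errors(err, rain)
print(f"error variance explained by lagged rainfall: {r2:.2f}")
```

A high explained-variance fraction here plays the role of the paper's finding that errors "could be predicted fairly well to very well" from lagged rainfall.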
Error analysis and Hertz vector approach for an electromagnetic interaction between a line current and a conducting plate
INTERNATIONAL JOURNAL OF NUMERICAL MODELLING: ELECTRONIC NETWORKS, DEVICES AND FIELDS, Issue 3 2003
M.T. Attaf
Abstract In the present paper we first introduce the Hertz vector potential and examine how the specific case of electromagnetic field diffusion problems can be formulated in terms of this potential. Its connection to other commonly used potentials is presented and a basic approach in the form of a suitable set of equations is introduced. The suggested method is then successfully applied to solve the case of an electromagnetic interaction between a straight conductor carrying sinusoidal current and a fixed plate of finite thickness. Owing to the oscillatory character of the integral solution obtained, an appropriate numerical treatment is investigated and various curves are shown to illustrate the convergence behaviour. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Error analysis on single sideband modulator
MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 10 2008
Lei Chen
Abstract This paper analyzes the influence of the amplitude and phase errors of the different paths on a single sideband (SSB) modulator, which can be used as an upconverter. An expression for the unwanted-sideband suppression, and the error-variable equations for maximum output sideband suppression, are also derived. Simulation curves show how the sideband suppression varies with the amplitude and phase errors of the SSB components. The results show that components whose amplitudes and phases are highly consistent, or whose amplitude and phase imbalances satisfy the derived relationship, can achieve higher sideband suppression for the SSB modulator. The completed modulator, used as an upper-sideband upconverter, shows −11 dB typical conversion gain with 22 dB sideband suppression. © 2008 Wiley Periodicals, Inc. Microwave Opt Technol Lett 50: 2590–2594, 2008; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.23721 [source]
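The abstract above does not reproduce the paper's derived expressions, but the standard image-rejection relation for a two-path SSB modulator conveys the idea: suppression is set jointly by the amplitude imbalance and the phase error between paths. A small sketch, assuming that textbook relation stands in for the paper's formula:

```python
import numpy as np

def sideband_suppression_db(amp_error_db, phase_error_deg):
    """Unwanted-sideband suppression of a two-path SSB (image-reject)
    modulator, given the amplitude imbalance (dB) and phase error (deg)
    between the paths. Standard textbook relation, used here as a
    stand-in for the paper's derived expression."""
    g = 10.0 ** (amp_error_db / 20.0)          # linear amplitude ratio
    phi = np.radians(phase_error_deg)
    wanted = 1.0 + 2.0 * g * np.cos(phi) + g ** 2
    unwanted = 1.0 - 2.0 * g * np.cos(phi) + g ** 2
    return 10.0 * np.log10(wanted / unwanted)

# Perfectly matched paths give infinite suppression; small errors dominate:
for a_db, p_deg in [(0.1, 1.0), (0.5, 2.0), (1.0, 5.0)]:
    print(f"{a_db:.1f} dB, {p_deg:.1f} deg -> "
          f"{sideband_suppression_db(a_db, p_deg):.1f} dB suppression")
```

For example, a 0.5 dB amplitude imbalance with a 2 degree phase error already limits suppression to roughly 30 dB, which is consistent in magnitude with the 22 dB reported for the fabricated modulator.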
Error analysis in cross-correlation of sky maps: application to the Integrated Sachs–Wolfe detection
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2007
Anna Cabré
ABSTRACT Constraining cosmological parameters from measurements of the Integrated Sachs–Wolfe effect requires developing robust and accurate methods for computing statistical errors in the cross-correlation between maps. This paper presents a detailed comparison of such error estimation applied to the case of cross-correlation of cosmic microwave background (CMB) and large-scale structure data. We compare theoretical models for error estimation with Monte Carlo simulations where both the galaxy and the CMB maps vary around a fiducial autocorrelation and cross-correlation model which agrees well with the current concordance Λ cold dark matter cosmology. Our analysis compares estimators both in harmonic and configuration (or real) space, quantifies the accuracy of the error analysis and discusses the impact of partial sky survey area and the choice of input fiducial model on dark energy constraints. We show that purely analytic approaches yield accurate errors even in surveys that cover only 10 per cent of the sky and that parameter constraints strongly depend on the fiducial model employed. Alternatively, we discuss the advantages and limitations of error estimators that can be directly applied to data. In particular, we show that errors and covariances from the jackknife method agree well with the theoretical approaches and simulations. We also introduce a novel method in real space that is computationally efficient and can be applied to real data and realistic survey geometries. Finally, we present a number of new findings and prescriptions that can be useful for analysis of real data and forecasts, and present a critical summary of the analyses done to date. [source]
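The jackknife estimator endorsed in the abstract above is simple to sketch: split the sky into K regions, recompute the cross-correlation with one region deleted at a time, and scale the scatter of the delete-one estimates. The sketch below works on a flattened pixel-level cross-product for brevity; the paper's estimators operate on the full correlation function or power spectrum, so the statistic and region labelling here are illustrative assumptions.

```python
import numpy as np

def jackknife_cross_corr(map_a, map_b, region_labels):
    """Delete-one jackknife error on the cross-correlation amplitude of
    two sky maps, using K contiguous sky regions. Minimal sketch assuming
    the maps are matching pixel arrays with per-pixel region labels."""
    regions = np.unique(region_labels)
    k = len(regions)
    estimates = []
    for r in regions:
        keep = region_labels != r            # drop one region at a time
        estimates.append(np.mean(map_a[keep] * map_b[keep]))
    estimates = np.array(estimates)
    mean = estimates.mean()
    # Jackknife variance carries the (K-1)/K inflation factor
    var = (k - 1) / k * np.sum((estimates - mean) ** 2)
    return mean, np.sqrt(var)

# Toy example: two correlated mock 'maps' split into 20 regions
rng = np.random.default_rng(0)
common = rng.normal(size=20000)
a = common + rng.normal(scale=2.0, size=20000)
b = common + rng.normal(scale=2.0, size=20000)
labels = np.repeat(np.arange(20), 1000)
print(jackknife_cross_corr(a, b, labels))
```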
Error analysis of the L2 least-squares finite element method for incompressible inviscid rotational flows
NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 6 2004
Chiung-Chiou Tsai
Abstract In this article we analyze the L2 least-squares finite element approximations to the incompressible inviscid rotational flow problem, which is recast into the velocity–vorticity–pressure formulation. The least-squares functional is defined in terms of the sum of the squared L2 norms of the residual equations over a suitable product function space. We first derive a coercivity-type a priori estimate for the first-order system problem that plays the crucial role in the error analysis. We then show that the method exhibits an optimal rate of convergence in the H1 norm for velocity and pressure and a suboptimal rate of convergence in the L2 norm for vorticity. A numerical example in two dimensions is presented, which confirms the theoretical error estimates. © 2004 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2004 [source]

Error analysis of proper motions in declination obtained for 807 Hipparcos stars from PZT observations over many decades
ASTRONOMISCHE NACHRICHTEN, Issue 8 2010
G. Damljanović
Abstract After publication of the Hipparcos catalogue (in 1997), a few new astrometric catalogues have appeared (TYCHO-2, ARIHIP, etc.) that combine the Hipparcos satellite data with ground-based data to obtain more accurate coordinates and proper motions of stars than the Hipparcos catalogue alone. Over the last few years there have also been investigations into improving the Hipparcos coordinates and proper motions by using astrometric observations of latitude and universal time variations (via observed stars referred to the Hipparcos catalogue) together with the Hipparcos data. Ground-based data of this kind were collected at the end of the last century by J. Vondrák: about 4.4 million optical observations made worldwide at 33 observatories with 47 instruments during 1899.7–1992.0; our Belgrade visual zenith telescope data (for the period 1949.0–1986.0) were included. These data were primarily used to determine the Earth Orientation Parameters (EOP), but they are also useful for the opposite task, checking the accuracy of coordinates and proper motions of Hipparcos stars observed from the ground over many decades. Here, we use the latitude part of ten Photographic Zenith Tube (PZT) data sets (more than 0.9 million observations made at 6 observatories during the time interval 1915.8–1992.0), and combine them with the Hipparcos catalogue values, with suitable weights, in order to check the proper motions in declination for 807 common PZT/Hipparcos stars (and to construct the PZT catalogue of proper motions in declination, μδ, for the 807 stars). Our standard errors in the proper motions in declination of these stars are less than or equal to the Hipparcos ones for 423 stars. The mean value of the standard errors of the 313 stars observed over more than 20 years by PZT is 0.40 mas/yr. This is 53% of 0.75 mas/yr (the corresponding value from the Hipparcos catalogue). We used the Least Squares Method (LSM) with the linear model. Our results are in good agreement with the Earth Orientation Catalogue (EOC-2) and the new Hipparcos values. The main steps of the method and the investigations of systematic errors in the determined proper motions (the proper motion differences with respect to the Hipparcos values, the EOC-2 ones and the new Hipparcos ones, as a function of right ascension, declination and magnitude) are presented here. A comparison of the four catalogues by pairs shows that there is no significant relationship between the differences of their μδ values and the magnitudes and color indices of the common 807 stars. All catalogues have relatively small random and systematic errors which are close to each other. However, the comparison shows that our formal errors are too small. They are underestimated by a factor of nearly 1.7 (for EOC-2, it is 2.0) if we take the new Hipparcos (or Hipparcos) data as reference. (© 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
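The abstract above says only that the PZT and Hipparcos values are combined "with suitable weights". The standard least-squares choice is inverse-variance weighting, sketched below with hypothetical numbers; this is an assumption, not the paper's stated scheme.

```python
import numpy as np

def combine_proper_motion(mu, sigma):
    """Inverse-variance weighted combination of independent proper-motion
    estimates (e.g. a ground-based PZT value and the Hipparcos value).
    Assumed weighting; the abstract says only 'suitable weights'."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    w = 1.0 / sigma ** 2
    mu_hat = np.sum(w * mu) / np.sum(w)
    sigma_hat = np.sqrt(1.0 / np.sum(w))   # formal error of the combination
    return mu_hat, sigma_hat

# Hypothetical star: PZT series gives -3.1 +/- 0.40 mas/yr in declination,
# Hipparcos gives -2.5 +/- 0.75 mas/yr
print(combine_proper_motion([-3.1, -2.5], [0.40, 0.75]))
```

With errors of 0.40 and 0.75 mas/yr the combined formal error is about 0.35 mas/yr, dominated by the more precise series, which mirrors the 0.40 versus 0.75 mas/yr comparison quoted in the abstract.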
Regionalization of methane emissions in the Amazon Basin with microwave remote sensing
GLOBAL CHANGE BIOLOGY, Issue 5 2004
John M. Melack
Abstract Wetlands of the Amazon River basin are globally significant sources of atmospheric methane. Satellite remote sensing (passive and active microwave) of the temporally varying extent of inundation and vegetation was combined with field measurements to calculate regional rates of methane emission for Amazonian wetlands. Monthly inundation areas for the fringing floodplains of the mainstem Solimões/Amazon River were derived from analysis of the 37 GHz polarization difference observed by the Scanning Multichannel Microwave Radiometer from 1979 to 1987. L-band synthetic aperture radar data (Japanese Earth Resources Satellite-1) were used to determine inundation and wetland vegetation for the Amazon basin (<500 m elevation) at high (May–June 1996) and low water (October 1995). An extensive set of measurements of methane emission is available from the literature for the fringing floodplains of the central Amazon, segregated into open water, flooded forest and floating macrophyte habitats. Uncertainties in the regional emission rates were determined by Monte Carlo error analyses that combined error estimates for the measurements of emission and for calculations of inundation and habitat areas. The mainstem Solimões/Amazon floodplain (54–70°W) emitted methane at a mean annual rate of 1.3 Tg C yr⁻¹, with a standard deviation (SD) of the mean of 0.3 Tg C yr⁻¹; 67% of this range in uncertainty is owed to the range in rates of methane emission and 33% is owed to uncertainty in the areal estimates of inundation and vegetative cover. Methane emission from a 1.77 million square kilometer area in the central basin had a mean of 6.8 Tg C yr⁻¹ with a SD of 1.3 Tg C yr⁻¹. If extrapolated to the whole basin below the 500 m contour, approximately 22 Tg C yr⁻¹ is emitted; this mean flux has a greenhouse warming potential of about 0.5 Pg C as CO2. Improvement of these regional estimates will require many more field measurements of methane emission, further examination of remotely sensed data for types of wetlands not represented in the central basin, and process-based models of methane production and emission. [source]
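The Monte Carlo error analysis described above combines uncertainty in habitat areas with uncertainty in per-habitat emission rates. A minimal sketch of that propagation follows; all areas, rates and distribution shapes are hypothetical placeholders, since the paper's actual distributions come from its field data and classification errors.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo draws

# Hypothetical habitat areas (km^2) and CH4 emission rates (mg CH4 m^-2 d^-1),
# each with an assumed normal uncertainty.
habitats = {
    #                 area_mean, area_sd, rate_mean, rate_sd
    "open water":     (20_000, 3_000,  60, 20),
    "flooded forest": (45_000, 7_000, 110, 40),
    "macrophytes":    (10_000, 2_000, 230, 90),
}

total = np.zeros(n)
for area_mean, area_sd, rate_mean, rate_sd in habitats.values():
    area = rng.normal(area_mean, area_sd, n)      # km^2
    rate = rng.normal(rate_mean, rate_sd, n)      # mg CH4 m^-2 d^-1
    # km^2 -> m^2 (1e6), mg -> Tg (1e-15), d -> yr (365), CH4 -> C (12/16)
    total += area * 1e6 * rate * 1e-15 * 365 * (12 / 16)

print(f"regional flux: {total.mean():.2f} +/- {total.std():.2f} Tg C yr^-1")
```

Reporting the mean and SD of the summed draws is exactly the form of the "1.3 ± 0.3 Tg C yr⁻¹" style of result quoted in the abstract, and the per-term variances indicate how much of the spread is owed to rates versus areas.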
Hamiltonian-based error computations
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 4 2006
Y. L. Kuo
Abstract This paper presents two sets of the Hamiltonian for checking the errors of approximated solutions. The first set can be applied to problems having any number of independent and dependent variables, and can effectively indicate the errors of approximated solutions when high accuracy is required. The second set of the Hamiltonian has the invariant property when the Lagrangian is not an explicit function of time, even for non-conservative systems. Both sets can be formulated as error indicators to check the errors of approximated solutions. Three illustrative examples demonstrate the error analyses of finite element solutions. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Error estimates in 2-node shear-flexible beam elements
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 7 2003
Gajbir Singh
Abstract The objective of this paper is to investigate the error estimates and convergence characteristics of shear-flexible beam elements. The order and magnitude of the principal discretization error in the use of various types of beam elements, such as (a) the 2-node standard isoparametric element, (b) the 2-node field-consistent/reduced-integration element and (c) the 2-node coupled-displacement-field element, is assessed herein. The method employs the classical order-of-error analysis that is commonly used to evaluate the discretization error of finite difference methods. The finite element equilibrium equations at any node are expressed in terms of differential equations through the use of Taylor series. These differential equations are compared with the governing equations and the error terms are identified. It is shown that the discretization error in coupled-field elements is the least compared with the field-consistent and standard finite elements (based on exact integration). Copyright © 2003 John Wiley & Sons, Ltd. [source]

An adaptive clinical Type 1 diabetes control protocol to optimize conventional self-monitoring blood glucose and multiple daily-injection therapy
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 5 2009
Xing-Wei Wong
Abstract The objective of this study was to develop a safe, robust and effective protocol for the clinical control of Type 1 diabetes using conventional self-monitoring blood glucose (SMBG) measurements and multiple daily injection (MDI) with insulin analogues. A virtual patient method is used to develop an in silico simulation tool for Type 1 diabetes using data from a Type 1 diabetes patient cohort (n = 40). The tool is used to test two prandial insulin protocols, an adaptive protocol (AC) and a conventional intensive insulin therapy (IIT) protocol (CC), against results from a representative control cohort as a function of SMBG frequency. With the prandial protocols, optimal and suboptimal basal insulin replacement using a clinically validated, forced-titration regimen is also evaluated. A Monte Carlo (MC) analysis using variability and error distributions derived from the clinical and physiological literature is used to test efficacy and robustness. MC analysis is performed for over 1 400 000 simulated patient hours. All results are compared with control data from which the virtual patients were derived. In conditions of suboptimal basal insulin replacement, the AC protocol significantly decreases HbA1c for SMBG frequencies ≥6/day compared with controls and the CC protocol. With optimal basal insulin, mild and severe hypoglycaemia is reduced by 86–100% over controls for all SMBG frequencies. Control with the CC protocol and suboptimal basal insulin replacement saturates at an SMBG frequency of 6/day. The forced-titration regimen requires a minimum SMBG frequency of 6/day to prevent increased hypoglycaemia. Overaggressive basal dose titration with the CC protocol at lower SMBG frequencies is likely caused by uncorrected postprandial hyperglycaemia from the previous night. From the MC analysis, a defined peak in control is achieved at an SMBG frequency of 8/day. However, 90% of the cohort meets the American Diabetes Association recommended HbA1c with just 2 measurements a day; a further 7.5% requires 4 measurements a day and only 2.5% (1 patient) requires 6 measurements a day. In safety, the AC protocol is the most robust to applied MC error. Over all SMBG frequencies, the median for severe hypoglycaemia increases from 0 to 0.12% and for mild hypoglycaemia by 0–5.19% compared with the unrealistic no-error simulation. While statistically significant, these figures are still very low and the distributions are well below those of the control group. An adaptive control protocol for Type 1 diabetes is thus tested in silico under conditions of realistic variability and error. The adaptive (AC) protocol is effective and safe compared with conventional IIT (CC) and controls. As the fear of hypoglycaemia is a large psychological barrier to appropriate glycaemic control, adaptive model-based protocols may represent the next evolution of IIT, delivering increased glycaemic control with increased safety over conventional methods while still utilizing the most common forms of intervention (SMBG and MDI). The use of MC methods to evaluate them provides a relevant robustness test that is not considered in the no-error analyses of most other studies. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Projected slabs: approximation of perspective projection and error analysis
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5 2001
A. Vilanova Bartrolí
Abstract Virtual endoscopy is a promising medical application for volume-rendering techniques where perspective projection is mandatory. Most of the acceleration techniques for direct volume rendering use parallel projection. This paper presents an algorithm to approximate perspective volume rendering using parallel projected slabs. The error introduced by the approximation is investigated, and an analytical study of the maximum and average error is made. The method is applied to VolumePro 500. Based on the error analysis, the basic algorithm is improved; this improvement increases the frame rate while keeping the global maximum error bounded. The usability of the algorithm is shown through the virtual endoscopic investigation of various types of medical data sets. Copyright © 2002 John Wiley & Sons, Ltd. [source]
Non-double-couple mechanisms in the seismicity preceding the 1991–1993 Etna volcano eruption
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2001
A. Saraò
Summary The temporal evolution of the complete source moment tensor is investigated for 28 earthquakes that occurred at Mt Etna in the period August 1990–December 1991, preceding the biggest eruption of the last three centuries. We perform several tests to check the robustness of the results of inversion, considering different frequency ranges and different groups of stations. As well as the selection of good-quality data, an error analysis, statistically significant at the 95 per cent confidence level, is employed to validate the findings of the inversion and to distinguish between physical solutions and artefacts of modelling. For events between 0.3 and 10 km depth, strike-slip mechanisms prevail over normal, inverse and dip-slip mechanisms; this is possibly due to the dyke-induced stress dominating the overall stress field at the surface, producing a continuous switch of the tensile and compressive axes. The regional E–W tension prevails at depth, as indicated by the prevalence of normal mechanisms. An increment of the non-double-couple components is observed immediately before the eruption and can be related to movements of fluids, even though, for some events, the complex interaction between tectonic stress and volcanic activity cannot be excluded. The source time functions retrieved are in general simple and short, but some show complexities, as one would expect in volcanic seismicity. From the seismic scalar moments found, we extrapolate an empirical moment–magnitude relation that we compare with other relations proposed for the same area, computed for the duration magnitude and the equivalent Wood–Anderson magnitude. [source]

Nutrient fluxes at the river basin scale. I: the PolFlow model
HYDROLOGICAL PROCESSES, Issue 5 2001
Abstract Human activity has resulted in increased nutrient levels in rivers and coastal seas all over Europe. Models that can describe nutrient fluxes from pollution sources to river outlets may help policy makers to select the most effective source control measures to achieve a reduction of nutrient levels in rivers and coastal seas. Part I of this paper describes the development of such a model: PolFlow. PolFlow was specially designed for operation at the river basin scale and is here applied to model 5-year average nitrogen and phosphorus fluxes in two European river basins (Rhine and Elbe) covering the period 1970–1995. Part II reports an error analysis and model evaluation, and compares PolFlow to simpler alternative models. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Solutions of pore pressure build up due to progressive waves
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 9 2001
L. Cheng
Abstract The analytical solution for soil pore pressure accumulation due to a progressive wave is examined in detail. First, the errors contained in a published analytical solution for wave-induced pore pressure accumulation are addressed, and the correct solution is presented in a more general form. The behaviour of the solution under different soil conditions is then investigated. It is found that the solution for deep soil conditions is sensitive to the soil shear stress in the top thin layer of the soil, whereas for shallow and finite-depth soil conditions the solution is significantly influenced by the shear stress in the thin layer of soil near the impermeable base.
It is also found that a small error in the soil shear stress can lead to a large error in the accumulated pore pressure. An error analysis reveals the relationships between the accuracy of the pore pressure accumulation and the accuracy of the soil shear stress. A numerical solution to the simplified Biot consolidation equation is also developed. It is shown that the error analysis is of significant value for the numerical modelling of pore pressure buildup in marine soils. Both analytical and numerical examples are given to validate the error estimation method proposed in the present paper. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Numerical stability and error analysis for the incompressible Navier–Stokes equations
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 11 2002
S. Prudhomme
Abstract This paper describes a strategy to control errors in finite element approximations of the time-dependent incompressible Navier–Stokes equations. The approach involves estimating the errors due to the discretization in space, using information from the residuals in the momentum and continuity equations. Following a numerical stability analysis of channel flows past a cylinder, it is concluded that the errors due to the residual in the continuity equation should be carefully controlled, since it appears to be the source of unphysical perturbations artificially created by the spatial discretization. The performance of the adaptive strategy is then tested for lid-driven oblique cavity flows. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Decoupling and balancing of space and time errors in the material point method (MPM)
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 10 2010
Michael Steffen
Abstract The material point method (MPM) is a computationally effective particle method with mathematical roots in both particle-in-cell and finite-element-type methods. The method has proven to be extremely useful in solving solid mechanics problems involving large deformations and/or fragmentation of structures, problem domains that are sometimes problematic for finite-element-type methods. Recently, the MPM community has focused significant attention on understanding the basic mathematical error properties of the method. Complementary to this thrust, in this paper we show how spatial and temporal errors are typically coupled within the MPM framework. In an attempt to overcome the challenge to analysis that this coupling poses, we take advantage of MPM's connection to finite element methods by developing a 'moving-mesh' variant of MPM that allows us to use finite-element-type error analysis to demonstrate and understand the spatial and temporal error behaviours of MPM. We then provide an analysis and demonstration of various spatial and temporal errors in MPM and in simplified MPM-type simulations. Our analysis allows us to anticipate the global error behaviour in MPM-type methods and to estimate the time-step at which spatial and temporal errors are balanced. Larger time-steps result in solutions dominated by temporal errors and show second-order temporal error convergence. Smaller time-steps result in solutions dominated by spatial errors, and hence temporal refinement produces no appreciable change in the solution. Based upon our understanding of MPM from both analysis and numerical experimentation, we provide MPM practitioners with a collection of guidelines for selecting simulation parameters that respect the interplay between spatial (grid) resolution, number of particles and time-step. Copyright © 2009 John Wiley & Sons, Ltd. [source]
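The balancing argument in the MPM abstract above can be made concrete with a generic error model. The spatial order p and the constants below are placeholder assumptions; the abstract itself states only second-order temporal convergence.

```latex
% Generic space-time error model (illustrative; C_s, C_t and the spatial
% order p are assumptions -- the abstract only gives second-order time error):
\[
  E(h,\Delta t) \;\approx\; C_s\, h^{p} \;+\; C_t\, \Delta t^{2}.
\]
% The two contributions balance when C_s h^p = C_t (\Delta t)^2, i.e. at
\[
  \Delta t^{*} \;=\; \left(\frac{C_s}{C_t}\right)^{1/2} h^{p/2}.
\]
% For \Delta t > \Delta t^*, the temporal term dominates and the error decays
% at second order in \Delta t; for \Delta t < \Delta t^*, spatial error
% dominates and further temporal refinement no longer changes the solution,
% matching the saturation behaviour described in the abstract.
```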
On the a priori and a posteriori error analysis of a two-fold saddle-point approach for nonlinear incompressible elasticity
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 8 2006
Gabriel N. Gatica
Abstract In this paper, we reconsider the a priori and a posteriori error analysis of a new mixed finite element method for nonlinear incompressible elasticity with mixed boundary conditions. The approach, being based only on the fact that the resulting variational formulation becomes a two-fold saddle-point operator equation, simplifies the analysis and improves the results provided recently in a previous work. Thus, a well-known generalization of the classical Babuška–Brezzi theory is applied to show the well-posedness of the continuous and discrete formulations, and to derive the corresponding a priori error estimate. In particular, enriched PEERS subspaces are required for the solvability and stability of the associated Galerkin scheme. In addition, we use the Ritz projection operator to obtain a new reliable and quasi-efficient a posteriori error estimate. Finally, several numerical results illustrating the good performance of the associated adaptive algorithm are presented. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Improvement of the asymptotic behaviour of the Euler–Maclaurin formula for Cauchy principal value and Hadamard finite-part integrals
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 4 2004
U. Jin Choi
Abstract In recent works (Commun. Numer. Meth. Engng 2001; 17: 881; to appear), the superiority of non-linear transformations containing a real parameter b > 0 has been demonstrated in the numerical evaluation of weakly singular integrals. Based on these transformations, we define a so-called parametric sigmoidal transformation and employ it to evaluate Cauchy principal value and Hadamard finite-part integrals by using the Euler–Maclaurin formula. Better approximation is expected due to the prominent properties of the parametric sigmoidal transformation, whose local behaviour near x = 0 is governed by the parameter b. Through the asymptotic error analysis of the Euler–Maclaurin formula using the parametric sigmoidal transformation, we can observe that it provides a distinct improvement on its predecessors using traditional sigmoidal transformations. Numerical results for some examples show the effectiveness of the present method. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Stability of linear time-periodic delay-differential equations via Chebyshev polynomials
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 7 2004
Eric A. Butcher
Abstract This paper presents a new technique for studying the stability properties of dynamic systems modeled by delay-differential equations (DDEs) with time-periodic parameters. By employing a shifted Chebyshev polynomial approximation in each time interval with length equal to the delay and parametric excitation period, the dynamic system can be reduced to a set of linear difference equations for the Chebyshev expansion coefficients of the state vector in the previous and current intervals.
This defines a linear map, which is the 'infinite-dimensional Floquet transition matrix U'. Two different formulas for the computation of the approximate U, whose size is determined by the number of polynomials employed, are given. The first uses the direct integral form of the original system in state-space form, while the second uses a convolution integral (variation of parameters) formulation. Additionally, a variation on the former method for direct application to second-order systems is also shown. An error analysis is presented which allows the number of polynomials employed in the approximation to be selected in advance for a desired tolerance. An extension of the method to the case where the delay and parametric periods are commensurate is also shown. Stability charts are produced for several examples of time-periodic DDEs, including the delayed Mathieu equation and a model for regenerative chatter in impedance-modulated turning. The results indicate that this method is an effective way to study the stability of time-periodic DDEs. Copyright © 2004 John Wiley & Sons, Ltd. [source]
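The finite-dimensional Floquet map in the abstract above can be illustrated numerically: sample the history on a grid, propagate the DDE over one period, and read off the matrix column by column; stability then follows from the spectral radius. The sketch below uses plain Euler time-stepping on a first-order scalar equation with a Mathieu-type periodic coefficient rather than the paper's Chebyshev expansion, so the discretization, coefficients and scalar form are all illustrative assumptions.

```python
import numpy as np

def monodromy(a0, a1, c, tau=2 * np.pi, m=200):
    """Finite-dimensional approximation U of the Floquet transition matrix
    for x'(t) = (a0 + a1*cos(t)) x(t) + c x(t - tau), with period T = tau.
    The history is sampled at m+1 points and propagated by explicit Euler;
    a simple stand-in for the paper's Chebyshev construction."""
    dt = tau / m
    U = np.empty((m + 1, m + 1))
    for j in range(m + 1):
        x = np.zeros(2 * m + 1)            # samples at t = -tau .. tau
        x[j] = 1.0                         # unit initial-history sample
        for k in range(m, 2 * m):          # march over one period
            t = (k - m) * dt
            x[k + 1] = x[k] + dt * ((a0 + a1 * np.cos(t)) * x[k]
                                    + c * x[k - m])
        U[:, j] = x[m:]                    # new history samples
    return U

# Asymptotically stable iff the spectral radius of U is below one
U = monodromy(a0=-0.2, a1=0.5, c=0.1)
print("spectral radius:", max(abs(np.linalg.eigvals(U))))
```

Sweeping two parameters and contouring where the spectral radius crosses one is how stability charts such as those for the delayed Mathieu equation are produced.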
c-Type method of unified CAMG and FEA. Part 1: Beam, arch mega-elements (2D non-linear, 3D linear)
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 9 2003
Abstract Computer-aided mesh generation (CAMG), dictated solely by the minimal key set of requirements of geometry, material, loading and support conditions, can produce 'mega-sized', arbitrarily shaped distorted elements. However, this may result in substantial cost saving and reduced bookkeeping for the subsequent finite element analysis (FEA), and a reduced engineering manpower requirement for final quality assurance. A method, denoted as c-type, has been proposed that constructively defines a finite element space whereby the above hurdles may be overcome with a minimal number of hyper-sized elements. Bezier (and de Boor) control vectors are used as the generalized displacements, and the Bernstein polynomials (and B-splines) as the elemental basis functions. A concomitant idea of coerced parametry and inter-element continuity on demand unifies modelling and the finite element method. The c-type method may introduce additional control, namely an inter-element continuity condition, to the existing h-type and p-type methods. Adaptation of the c-type method to existing commercial and general-purpose computer programs based on a conventional displacement-based finite element method is straightforward. The c-type method with the associated subdivision technique can easily be made into a hierarchic adaptive computer method with a suitable a posteriori error analysis. In this context, a summary of a geometrically exact non-linear formulation for two-dimensional curved beams/arches is presented. Several beam problems, ranging from truly three-dimensional tortuous linear curved beams to geometrically extremely non-linear two-dimensional arches, are solved to establish the numerical efficiency of the method. The incremental Lagrangian curvilinear formulation may be extended to overcome rotational singularity in 3D geometric non-linearity and to treat general material non-linearity. Copyright © 2003 John Wiley & Sons, Ltd. [source]

A reproducing kernel method with nodal interpolation property
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 7 2003
Jiun-Shyan Chen
Abstract A general formulation for developing reproducing kernel (RK) interpolation is presented. This is based on the coupling of a primitive function and an enrichment function. The primitive function introduces discrete Kronecker delta properties, while the enrichment function constitutes reproducing conditions. A necessary condition for obtaining an RK interpolation function is an orthogonality condition between the vector of enrichment functions and the vector of shifted monomial functions at the discrete points. A normalized kernel function with relatively small support is employed as the primitive function. This approach does not employ a finite element shape function, and therefore the interpolation function can be arbitrarily smooth. To maintain the convergence properties of the original RK approximation, a mixed interpolation is introduced. A rigorous error analysis is provided for the proposed method. Optimal-order error estimates are shown for the meshfree interpolation in any Sobolev norm. Optimal-order convergence is maintained when the proposed method is employed to solve one-dimensional boundary value problems. Numerical experiments are presented demonstrating the theoretical error estimates. The performance of the method is illustrated in several sample problems. Copyright © 2003 John Wiley & Sons, Ltd. [source]
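For context on the reproducing conditions named in the abstract above, the sketch below builds standard 1D reproducing-kernel (RKPM) shape functions with linear reproduction; it shows the baseline construction the paper enriches, and deliberately does not include the paper's Kronecker-delta (nodal interpolation) property. The window function and support size are illustrative choices.

```python
import numpy as np

def rk_shape_functions(x, nodes, support=0.3):
    """Standard 1D reproducing-kernel shape functions with linear
    reproducing conditions: sum_i psi_i(x) = 1 and sum_i psi_i(x) x_i = x.
    Baseline RK approximation only; the interpolation (Kronecker delta)
    property of the paper is NOT built in."""
    def kernel(s):                       # compactly supported cubic window
        r = np.abs(s) / support
        return np.where(r < 1.0, (1.0 - r) ** 3, 0.0)

    s = nodes - x                        # shifted coordinates
    w = kernel(s)
    H = np.vstack([np.ones_like(s), s])  # shifted monomial basis [1, s]
    M = (H * w) @ H.T                    # 2x2 moment matrix
    c = np.linalg.solve(M, np.array([1.0, 0.0]))  # H(0) = [1, 0]
    return (c @ H) * w                   # psi_i(x) for all nodes i

nodes = np.linspace(0.0, 1.0, 11)
psi = rk_shape_functions(0.37, nodes)
print(psi.sum(), psi @ nodes)            # -> 1.0 and 0.37 (reproduction)
```

The printed checks verify the two linear reproducing conditions; note that psi at a node is generally not a Kronecker delta, which is exactly the gap the paper's primitive/enrichment coupling addresses.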
On the classical shell model underlying bilinear degenerated shell finite elements: general shell geometry
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 6 2002
Mika Malinen
Abstract We study the shell models arising in the numerical modelling of shells by geometrically incompatible finite elements. We build a connection from the so-called bilinear degenerated 3D FEM to the classical 2D shell theory of Reissner–Naghdi type, showing how nearly equivalent finite element formulations can be constructed within the classical framework. The connection found here facilitates the mathematical error analysis of the bilinear elements based on the degenerated 3D approach. In particular, the connection reveals the 'secrets' that relate to the treatment of locking effects within this formulation. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Incorporating spatially variable bottom stress and Coriolis force into 2D, a posteriori, unstructured mesh generation for shallow water models
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 3 2009
D. Michael Parrish
Abstract An enhanced version of our localized truncation error analysis with complex derivatives (LTEA–CD) a posteriori approach to computing target element sizes for tidal, shallow water flow, LTEA+CD, is applied to the Western North Atlantic Tidal model domain. The LTEA+CD method utilizes localized truncation error estimates of the shallow water momentum equations and builds upon LTEA- and LTEA–CD-based techniques by including: (1) velocity fields from a nonlinear simulation with complete constituent forcing; (2) spatially variable bottom stress; and (3) Coriolis force. The use of complex derivatives in this case results in a simple truncation error expression, and the ability to compute localized truncation errors using difference equations that employ only seven to eight computational points. The compact difference molecules allow the computation of truncation error estimates and target element sizes throughout the domain, including along the boundary; this fact, along with the inclusion of locally variable bottom stress and Coriolis force, constitutes a significant advancement beyond the capabilities of LTEA. The goal of LTEA+CD is to drive the truncation error to a more uniform, domain-wide value by adjusting element sizes (we apply LTEA+CD by re-meshing the entire domain, not by moving nodes). We find that LTEA+CD can produce a mesh that is comprised of fewer nodes and elements than an initial high-resolution mesh while performing as well as the initial mesh when considering the resynthesized tidal signals (elevations). Copyright © 2008 John Wiley & Sons, Ltd. [source]

Anisotropic mesh adaption for time-dependent problems
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 9 2008
S. Micheletti
Abstract We propose a space–time adaptive procedure for a model parabolic problem based on a theoretically sound anisotropic a posteriori error analysis. A space–time finite element scheme (continuous in space but discontinuous in time) is employed to discretize this problem, thus allowing for non-matching meshes at different time levels. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Pyramid-based super-resolution of the undersampled and subpixel shifted image sequence
INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 6 2002
Yao Lu
Abstract Existing methods for reconstructing a super-resolution image from an undersampled and subpixel-shifted image sequence must solve a large, ill-conditioned equation system, either by approximately computing the inverse matrix or by performing many iterations to approach the solution. The former imposes a heavy computational burden, and the latter causes artifacts or noise to be overstressed, so these methods are rarely used in practice. To address these problems, in this article we apply a pyramid structure to the super-resolution of image sequences and present a suitable pyramid framework, called the Super-Resolution Image Pyramid (SRIP), together with the corresponding pyramid back-projection. Pyramid structures and methods are widely used in image processing and computer vision, but we have not found applications to super-resolution in the literature. We give a complete description of SRIP. As an example, the Iterative Back-Projection (IBP) suggested by Peleg (1991, 1993) is integrated into this pyramid framework. Experiments and an error analysis are performed to show the effectiveness of this framework. The image resolution can be improved even in the case of severely undersampled images. In addition, other general super-resolution methods can easily be integrated into this framework and run in parallel so as to meet the needs of real-time processing. © 2003 Wiley Periodicals, Inc. Int J Imaging Syst Technol 12, 254–263, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10033 [source]
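Iterative back-projection, the reconstruction engine that the SRIP framework above wraps, is easy to state in one dimension: simulate each low-resolution frame from the current high-resolution estimate and back-project the residuals. The decimation model, shifts and step size below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def ibp_1d(frames, shifts, factor, n_iter=50, step=0.5):
    """1D iterative back-projection (classic IBP scheme): refine a
    high-resolution signal so its simulated low-resolution versions match
    the observed frames. Assumed forward model: integer subpixel shift
    followed by 'factor'-fold block averaging."""
    n_hr = len(frames[0]) * factor
    x = np.repeat(frames[0], factor)           # initial guess: upsampled frame

    def down(sig, s):                          # shift, then average-decimate
        return np.roll(sig, -s).reshape(-1, factor).mean(axis=1)

    for _ in range(n_iter):
        correction = np.zeros(n_hr)
        for y, s in zip(frames, shifts):
            residual = y - down(x, s)          # low-res prediction error
            # back-project: spread each residual over its HR support, unshift
            correction += np.roll(np.repeat(residual, factor) / factor, s)
        x += step * correction / len(frames)
    return x

# Toy test: recover a smooth signal from 4 shifted low-resolution frames
truth = np.sin(np.linspace(0, 3 * np.pi, 64))
shifts = [0, 1, 2, 3]
frames = [np.roll(truth, -s).reshape(-1, 4).mean(axis=1) for s in shifts]
x = ibp_1d(frames, shifts, factor=4)
print("RMS error:", np.sqrt(np.mean((x - truth) ** 2)))
```

Because the back-projection here is the adjoint of the decimation operator, each sweep is a gradient step on the data-fit residual; SRIP's contribution is to run such refinements across pyramid levels rather than at a single scale.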
A bioenergetics model for juvenile flounder Platichthys flesus
JOURNAL OF APPLIED ICHTHYOLOGY, Issue 1 2006
M. Stevens
Summary Despite the numerous physiological studies on flatfish and their economic and ecological importance, only a few attempts have been made to construct a bioenergetics model for these species. Here we present the first bioenergetics model for European flounder (Platichthys flesus), using experimentally derived parameter values. We tested model performance using literature-derived, field-based estimates of food consumption and growth rates of an estuarine flounder population in the Ythan estuary, Scotland. The model was applied to four age-classes of flounder (age 0–3). Sensitivity of model predictions to parameter perturbation was estimated using error analysis. The fit between observed and predicted series was evaluated using three statistical methods: partitioning of the mean squared error, a reliability index (RI) and an index of modelling efficiency (MEF). Overall, model predictions closely tracked the observed changes in consumption and growth. The results of the different validation techniques show a high goodness-of-fit between observed and simulated values. The model clearly demonstrates the importance of temperature in determining growth of flounder in the estuary. A sex-specific estimation of the energetic costs of spawning in adult flounder and a more accurate description of the thermal history of the fish may further reduce the error in the model predictions. [source]

Temperature and prey quality effects on growth of juvenile walleye pollock Theragra chalcogramma (Pallas): a spatially explicit bioenergetics approach
JOURNAL OF FISH BIOLOGY, Issue 3 2007
M. M. Mazur
A bioenergetics model for juvenile age-0 year walleye pollock Theragra chalcogramma was applied to a spatially distinct grid of samples in the western Gulf of Alaska to investigate the influence of temperature and prey quality on size-specific growth. Daily growth estimates for 50, 70 and 90 mm standard length (LS) walleye pollock during September 2000 were generated using the bioenergetics model with a fixed ration size. Similarities between independent estimates of prey consumption generated from the bioenergetics model and from a gastric evacuation model corroborated the performance of the bioenergetics model [concordance correlation (rc) = 0·945, lower 95% CL (transformed) (L1) = 0·834, upper 95% CL (transformed) (L2) = 0·982, P < 0·001]. A mean squared error (MSE) analysis was also used to partition the sources of error between the two models' estimates of consumption into a mean component (MC), a slope component (SC) and a random component (RC). Differences between estimates of daily consumption were largely due to differences in the means of the estimates (MC = 0·45) and random sources of error (RC = 0·49), and not to differences in slopes (SC = 0·06). Similarly, daily growth estimates of 0·031–0·167 g day⁻¹ generated from the bioenergetics model were within the range of growth estimates of 0·026–0·190 g day⁻¹ obtained from otolith analysis of juvenile walleye pollock. Temperature and prey quality alone accounted for 66% of the observed variation between bioenergetics and otolith growth estimates across all sizes of juvenile walleye pollock. These results suggest that the bioenergetics model for juvenile walleye pollock is a useful tool for evaluating the influence of spatially variable habitat conditions on the growth potential of juvenile walleye pollock. [source]
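The MSE partition quoted in the walleye pollock abstract above is, in its usual formulation (Theil's decomposition), a split of mean squared error into bias, slope and random terms. The implementation below follows that standard formula, which is an assumption here since the abstract does not state the exact expression used.

```python
import numpy as np

def mse_partition(pred, obs):
    """Partition the MSE between two estimate series into mean (MC),
    slope (SC) and random (RC) components, as proportions of total MSE.
    Standard decomposition (assumed, not stated in the abstract):
    MSE = (mean_p - mean_o)^2 + (s_p - r*s_o)^2 + (1 - r^2)*s_o^2."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    mse = np.mean((pred - obs) ** 2)
    sp, so = pred.std(), obs.std()          # population SDs (ddof=0)
    r = np.corrcoef(pred, obs)[0, 1]
    mc = (pred.mean() - obs.mean()) ** 2    # systematic offset
    sc = (sp - r * so) ** 2                 # mismatch in regression slope
    rc = (1.0 - r ** 2) * so ** 2           # unexplained scatter
    return mc / mse, sc / mse, rc / mse     # sums to 1 by the identity

rng = np.random.default_rng(3)
obs = rng.normal(10, 2, 200)
pred = obs + 1.0 + rng.normal(0, 1.5, 200)  # biased, noisy 'model'
print("MC, SC, RC proportions:", mse_partition(pred, obs))
```

Read against the abstract, MC = 0.45 and RC = 0.49 with SC = 0.06 says the two consumption models disagree mainly in overall level and scatter, not in how they scale with each other.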
Three-dimensional representation of curved nanowires
JOURNAL OF MICROSCOPY, Issue 3 2004
Z. Huang
Summary Nanostructures such as nanowires, nanotubes and nanocoils can in many cases be described as quasi-one-dimensional curved objects projecting into three-dimensional space. A parallax method to reconstruct the correct three-dimensional geometry of such one-dimensional nanostructures is presented. A series of scanning electron microscope images was acquired at different view angles, providing a set of image pairs that were used to generate three-dimensional representations using a MATLAB program. An error analysis as a function of the view angle between the two images is presented and discussed. As an example application, the importance of knowing the true three-dimensional shape of boron nanowires is demonstrated: without the nanowire's correct length and diameter, mechanical resonance data cannot provide an accurate estimate of Young's modulus. [source]

A FEM–DtN formulation for a non-linear exterior problem in incompressible elasticity
MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 2 2003
Gabriel N. Gatica
Abstract In this paper, we combine the usual finite element method with a Dirichlet-to-Neumann (DtN) mapping, derived in terms of an infinite Fourier series, to study the solvability and Galerkin approximations of an exterior transmission problem arising in non-linear incompressible 2D elasticity. We show that the variational formulation can be written in a Stokes-type mixed form with a linear constraint and a non-linear main operator. We then provide the uniqueness of solution for the continuous and discrete formulations, and derive a Céa-type estimate for the associated error. In particular, our error analysis considers the practical case in which the DtN mapping is approximated by the corresponding finite Fourier series. Finally, a reliable a posteriori error estimate, well suited for adaptive computations, is also given. Copyright © 2003 John Wiley & Sons, Ltd. [source]