Local Linearization (local + linearization)

Selected Abstracts


Post-earthquake bridge repair cost and repair time estimation methodology

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 3 2010
Kevin R. Mackie
Abstract While structural engineers have traditionally focused on individual components (bridges, for example) of transportation networks for design, retrofit, and analysis, it has become increasingly apparent that the economic costs to society after extreme earthquake events stem at least as much from indirect costs as from the direct costs of damage to individual structures. This paper describes an improved methodology for developing probabilistic estimates of repair costs and repair times that can be used for evaluating the performance of new bridge design options and existing bridges in preparation for the next major earthquake. The approach proposed in this paper improves on previous bridge loss modeling studies: it is based on a local linearization of the dependence between repair quantities and damage states, so that the resulting model follows a linear relationship between damage states and repair points. The methodology uses the concept of performance groups (PGs) that account for damage and repair of individual bridge components and subassemblies. The method is validated using two simple examples that compare the proposed method to simulation and to previous methods based on loss models with a power-law relationship between repair quantities and damage. In addition, an illustration of the method is provided for a complete study on the performance of a common five-span overpass bridge structure in California. Intensity-dependent repair cost ratios (RCRs) and repair times are calculated using the proposed approach, as well as plots that show the disaggregation of repair cost by repair quantity and by PG. This provides the decision maker with higher-fidelity data when evaluating the contribution of different bridge components to the performance of the bridge system, where performance is evaluated in terms of repair costs and repair times rather than traditional engineering quantities such as displacements and stresses. 
Copyright © 2009 John Wiley & Sons, Ltd. [source]
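The core idea of the local linearization between damage states and repair quantities can be sketched as a piecewise-linear interpolation. The damage states, repair quantities, and unit cost below are illustrative values for a single hypothetical performance group, not data from the paper:

```python
import numpy as np

# Hypothetical damage states (0 = none, 3 = severe) and the repair
# quantity (e.g., m^2 of concrete patching) associated with each
# discrete state for one performance group. All values illustrative.
damage_states = np.array([0.0, 1.0, 2.0, 3.0])
repair_qty = np.array([0.0, 12.0, 40.0, 95.0])
unit_cost = 250.0  # cost per unit repair quantity (illustrative)

def repair_cost(ds):
    """Locally linearized repair cost at a (possibly fractional)
    damage state: linear interpolation between adjacent states."""
    q = np.interp(ds, damage_states, repair_qty)
    return unit_cost * q

print(repair_cost(1.5))  # halfway between states 1 and 2
```

Between any two adjacent damage states the model is exactly linear, which is what makes closed-form propagation of uncertainty from damage to repair cost tractable.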


A hybrid fast algorithm for first arrivals tomography

GEOPHYSICAL PROSPECTING, Issue 5 2009
Manuela Mendes
ABSTRACT A hybrid algorithm, combining Monte Carlo optimization with simultaneous iterative reconstruction technique (SIRT) tomography, is used to invert first-arrival traveltimes from seismic data for building a velocity model. Stochastic algorithms may localize a point around the global minimum of the misfit function but are not suitable for identifying the precise solution. On the other hand, a tomographic model reconstruction, based on a local linearization, will only be successful if an initial model already close to the best solution is available. To overcome these problems, in the method proposed here, a first model obtained using a classical Monte Carlo-based optimization is used as a good initial guess for starting the local search with the SIRT tomographic reconstruction. In the forward problem, the first-break times are calculated by solving the eikonal equation through a velocity model with a fast finite-difference method instead of the traditional slow ray-tracing technique. In addition, for the SIRT tomography the seismic energy from sources to receivers is propagated by applying a fast Fresnel volume approach, which, when combined with turning rays, can handle models with both positive and negative velocity gradients. The performance of this two-step optimization scheme has been tested on synthetic and field data for building a geologically plausible velocity model. This is an efficient and fast search mechanism, which permits insertion of geophysical, geological and geodynamic a priori constraints into the grid model while ray tracing is completely avoided. Extension of the technique to 3D data and also to the solution of 'static correction' problems is easily feasible. [source]
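The two-step scheme can be illustrated on a toy misfit function: a random (Monte Carlo) search localizes a point near the global minimum, and a linearization-based local iteration then refines it. Plain gradient descent stands in here for the SIRT reconstruction, and the quadratic misfit is an illustrative stand-in for the real traveltime inversion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy misfit over a 2-parameter "model" (illustrative stand-in for
# the traveltime residual as a function of the velocity model).
def misfit(m):
    return (m[0] - 2.0) ** 2 + 10.0 * (m[1] + 1.0) ** 2

# Step 1: Monte Carlo search localizes the region of the global minimum.
candidates = rng.uniform(-5.0, 5.0, size=(2000, 2))
m0 = min(candidates, key=misfit)

# Step 2: local refinement from that initial guess (gradient descent
# standing in for SIRT-style iterative reconstruction).
def grad(m):
    return np.array([2.0 * (m[0] - 2.0), 20.0 * (m[1] + 1.0)])

m = m0.copy()
for _ in range(200):
    m -= 0.04 * grad(m)

print(m)  # converges close to the minimizer (2, -1)
```

Neither step alone suffices: the random search rarely hits the exact minimizer, while the local iteration started from an arbitrary point can stall in a poor basin for a realistic, multimodal misfit.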


A robustness approach to linear control of mildly nonlinear processes

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 13 2007
T. Schweickhardt
Abstract We present a novel approach toward linear control of nonlinear systems. Combining robust control theory and nonlinearity measures, we derive a method to (i) assess the nonlinearity of a given control system, (ii) derive a suitable linear model (not necessarily equivalent to the local linearization), and (iii) design a linear controller that guarantees stability of the closed loop containing the nonlinear process. A distinctive feature of the approach is that the nonlinearity analysis, linear model derivation and linear controller synthesis can be done on an operating regime specified by the designer. Examples are given to illustrate the approach. Copyright © 2007 John Wiley & Sons, Ltd. [source]
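The distinction between the local linearization and the "most suitable" linear model over an operating regime can be sketched for a static nonlinear map. The map, regime, and the particular nonlinearity measure below (worst-case relative error of the best least-squares linear model) are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

# Static nonlinear map y = f(u) standing in for the process (illustrative).
def f(u):
    return u + 0.1 * u ** 3

# Operating regime specified by the designer.
u = np.linspace(-1.0, 1.0, 401)
y = f(u)

# Best linear gain over the regime in the least-squares sense; note it
# differs from the local linearization f'(0) = 1 at the origin.
k_best = np.dot(u, y) / np.dot(u, u)

# A simple nonlinearity measure: worst-case relative error of the best
# linear model over the operating regime (0 means exactly linear).
nl_measure = np.max(np.abs(y - k_best * u)) / np.max(np.abs(y))
print(k_best, nl_measure)
```

A small measure certifies that some linear model approximates the process closely on the chosen regime, which is what allows robust linear control synthesis to absorb the residual nonlinearity as model uncertainty.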


Method of lines with boundary elements for 1-D transient diffusion-reaction problems

NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 4 2006
P.A. Ramachandran
Abstract Time-dependent differential equations can be solved using the concept of method of lines (MOL) together with the boundary element (BE) representation for the spatial linear part of the equation. The BE method alleviates the need for spatial discretization and casts the problem in an integral format. Hence errors associated with the numerical approximation of the spatial derivatives are totally eliminated. An element level local cubic approximation is used for the variable at each time step to facilitate the time marching and the nonlinear terms are represented in a semi-implicit manner by a local linearization at each time step. The accuracy of the method has been illustrated on a number of test problems of engineering significance. © 2005 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 2006 [source]
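The semi-implicit treatment of the nonlinear term can be sketched for a 1-D diffusion-reaction equation. The paper uses a boundary-element representation in space; the finite-difference Laplacian below is a substitute to keep the sketch self-contained, and the reaction term R(c) = c² is an illustrative choice:

```python
import numpy as np

# 1-D diffusion-reaction: c_t = c_xx - R(c), R(c) = c**2 (illustrative),
# Dirichlet boundaries c = 0. Semi-implicit step via local linearization:
#   R(c_new) ~ R(c) + R'(c) * (c_new - c)
# so each time step solves one linear system.
n, dx, dt = 51, 0.02, 1e-4
x = np.linspace(0.0, 1.0, n)
c = np.sin(np.pi * x)          # initial profile, zero at both ends

R = lambda c: c ** 2
dR = lambda c: 2.0 * c

# Second-difference (Laplacian) matrix; boundary rows zeroed so the
# Dirichlet values are carried through unchanged.
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx ** 2
A[0, :] = A[-1, :] = 0.0

for _ in range(100):
    # (I - dt*A + dt*diag(R'(c))) c_new = c + dt*(R'(c)*c - R(c))
    M = np.eye(n) - dt * A + dt * np.diag(dR(c))
    rhs = c + dt * (dR(c) * c - R(c))
    c = np.linalg.solve(M, rhs)

print(c.max())  # peak decays under diffusion and consumption
```

Linearizing R at the current step keeps the scheme implicit (hence stable for stiff reaction terms) without requiring a nonlinear solve at every time level.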


Bayesian Analysis of Serial Dilution Assays

BIOMETRICS, Issue 2 2004
Andrew Gelman
Summary. In a serial dilution assay, the concentration of a compound is estimated by combining measurements of several different dilutions of an unknown sample. The relation between concentration and measurement is nonlinear and heteroscedastic, and so it is not appropriate to weight these measurements equally. In the standard existing approach for analysis of these data, a large proportion of the measurements are discarded as being above or below detection limits. We present a Bayesian method for jointly estimating the calibration curve and the unknown concentrations using all the data. Compared to the existing method, our estimates have much lower standard errors and give estimates even when all the measurements are outside the "detection limits." We evaluate our method empirically using laboratory data on cockroach allergens measured in house dust samples. Our estimates are much more accurate than those obtained using the usual approach. In addition, we develop a method for determining the "effective weight" attached to each measurement, based on a local linearization of the estimated model. The effective weight can give insight into the information conveyed by each data point and suggests potential improvements in design of serial dilution experiments. [source]
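The "effective weight" idea can be sketched by locally linearizing a calibration curve: a measurement is informative about log-concentration in proportion to the squared local slope of the curve there, relative to the measurement noise. The four-parameter logistic curve, its parameter values, and this particular weight formula are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

# Hypothetical four-parameter logistic calibration curve mapping
# concentration to expected measurement (values illustrative).
b1, b2, b3, b4 = 15.0, 110.0, 0.05, 1.3  # asymptotes, midpoint, slope

def g(x):
    """Expected measurement at concentration x."""
    return b1 + (b2 - b1) / (1.0 + (x / b3) ** (-b4))

def effective_weight(x, sigma=5.0, h=1e-6):
    """Sketch of an effective weight: squared local slope of the
    calibration curve in log-concentration, scaled by the measurement
    variance (central finite difference for the slope)."""
    lx = np.log(x)
    slope = (g(np.exp(lx + h)) - g(np.exp(lx - h))) / (2.0 * h)
    return slope ** 2 / sigma ** 2

# Weights collapse near the "detection limits" at both ends:
for x in (1e-4, 0.05, 10.0):
    print(x, effective_weight(x))
```

Under this picture, measurements near the flat tails of the curve get tiny but nonzero weight, which is why the joint Bayesian fit can still use them instead of discarding them outright.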