Convergence Problems


Selected Abstracts


A fictitious energy approach for shape optimization

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2010
M. Scherer
Abstract This paper deals with shape optimization of continuous structures. As in early works on shape optimization, the coordinates of boundary nodes of the FE domain are chosen directly as design variables. Convergence problems and problems with jagged shapes are eliminated by a new regularization technique: an artificial inequality constraint added to the optimization problem limits a fictitious total strain energy that measures the shape change of the design with respect to a reference design. The energy constraint defines a feasible design space whose size can be varied by a single parameter, the upper energy limit. By construction, the proposed regularization is applicable to a wide range of problems, although in this paper the application is restricted to linear elastostatic problems. Copyright © 2009 John Wiley & Sons, Ltd. [source]
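The regularization idea can be sketched in a few lines. The following toy example is not from the paper: the quadratic energy measure, the step size, and the projection scheme are illustrative assumptions. Gradient descent drives two design variables toward an unconstrained optimum while a fictitious-energy bound keeps the design within a fixed distance of the reference design:

```python
import math

def fictitious_energy(x, x_ref, k=1.0):
    """Quadratic fictitious strain energy measuring shape change
    of the design x relative to the reference design x_ref."""
    return 0.5 * k * sum((xi - ri) ** 2 for xi, ri in zip(x, x_ref))

def project_to_energy_ball(x, x_ref, e_max, k=1.0):
    """If the design violates E(x) <= e_max, pull it back radially
    toward the reference design until the constraint is active."""
    e = fictitious_energy(x, x_ref, k)
    if e <= e_max:
        return x
    scale = math.sqrt(e_max / e)
    return [ri + scale * (xi - ri) for xi, ri in zip(x, x_ref)]

# Gradient descent on a toy objective whose unconstrained minimum lies
# at (3, 3); the energy constraint limits how far the design may move.
x_ref = [0.0, 0.0]
x = list(x_ref)
e_max = 0.5
for _ in range(100):
    grad = [2.0 * (xi - 3.0) for xi in x]   # d/dx of |x - (3, 3)|^2
    x = [xi - 0.1 * gi for xi, gi in zip(x, grad)]
    x = project_to_energy_ball(x, x_ref, e_max)

print(x)  # stops on the boundary of the feasible design space
```

Without the projection the iterates would run to (3, 3); the energy limit halts them on the boundary of the feasible design space, which is the role the upper energy limit plays in the paper's formulation.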


Review of the Integrated Groundwater and Surface-Water Model (IGSM)

GROUND WATER, Issue 2 2003
Eric M. LaBolle
Development of the finite-element-based Integrated Groundwater and Surface-Water Model (IGSM) began in the 1970s. Its popularity grew in the early 1990s with its application to California's Central Valley Groundwater Surface-Water Model in support of the Central Valley Project Improvement Act. Since that time, IGSM has been applied by federal, state, and local agencies to model a number of major basins in California. Our review of the recently released version 5.0 of IGSM reveals a solution methodology that deviates from established solution techniques, potentially compromising its reliability under many circumstances. One difficulty occurs because of the semi-explicit time discretization used. Combined with the fixed monthly time step of IGSM, this approach can prevent applications from accurately converging when using parameter values typically found in nature. Additionally, IGSM fails to properly couple and simultaneously solve ground water and surface water models with appropriate mass balance and head convergence under the reasonable conditions considered herein. As a result, IGSM-predicted streamflow is error prone, and errors could exceed 100%. IGSM does not inform the user that there may be a convergence problem with the solution, but instead generally reports good mass balance. Although our review touches on only a few aspects of the code, which exceeds 17,000 lines, our experience is that similar problems arise in other parts of IGSM. Review and examples demonstrate the potential consequences of using the solution methods in IGSM for the prediction, planning, and management of water resources, and provide perspective on the roles of standards and code validation in ground water modeling. [source]
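The time-discretization issue described in the review is easy to reproduce on a toy problem. The sketch below is illustrative only; the one-equation "storage" model and parameter values are assumptions, not IGSM code. It contrasts a forward (semi-explicit) update, which diverges once the decay rate times the fixed step exceeds the stability limit, with a backward (implicit) update that remains stable at the same step size:

```python
# Toy storage equation dh/dt = -lam * (h - h_eq) for head h: a forward
# (semi-explicit) update diverges once lam * dt > 2, while a backward
# (implicit) update is stable for any step size.
def explicit_step(h, h_eq, lam, dt):
    return h + dt * (-lam * (h - h_eq))

def implicit_step(h, h_eq, lam, dt):
    return (h + dt * lam * h_eq) / (1.0 + dt * lam)

lam, h_eq, dt = 5.0, 10.0, 1.0   # fast process, coarse fixed time step
h_exp = h_imp = 20.0
for _ in range(12):
    h_exp = explicit_step(h_exp, h_eq, lam, dt)
    h_imp = implicit_step(h_imp, h_eq, lam, dt)

print(abs(h_exp - h_eq))  # explicit error grows by a factor of 4 per step
print(abs(h_imp - h_eq))  # implicit error shrinks by a factor of 6 per step
```

With a fixed monthly step, any process whose characteristic time is shorter than the stability limit of a semi-explicit scheme can exhibit exactly this kind of non-convergence.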


A Fundamental Problem with Amino-Acid-Sequence Characters for Phylogenetic Analyses

CLADISTICS, Issue 3 2000
Mark P. Simmons
Protein-coding genes may be analyzed in phylogenetic analyses using nucleotide-sequence characters and/or amino-acid-sequence characters. Although amino-acid-sequence characters "correct" for saturation (parallelism), they are subject to convergence and ignore phylogenetically informative variation. When all nucleotide-sequence characters have a consistency index of 1, characters coded from the amino acid sequence may have a consistency index of less than 1, because most amino acids are specified by more than one codon. If two different codons that code for the same amino acid are derived independently of one another in divergent lineages, the nucleotide-sequence characters may be free of homoplasy while the amino-acid-sequence characters are homoplasious. Not only may amino-acid-sequence characters support groupings that are not supported by nucleotide-sequence characters, they may support contradictory groupings. Because this convergence is a problem of character delimitation, it affects the results of all tree-construction methods (maximum likelihood, neighbor joining, parsimony, etc.). In effect, coding amino-acid-sequence characters instead of nucleotide-sequence characters putatively corrects for saturation and definitely causes a convergence problem. An empirical example from the Mhc locus is given. [source]
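The codon-level argument can be made concrete with a two-codon example (illustrative; serine is chosen because its codons can differ at every nucleotide position):

```python
# Two serine codons that differ at all three nucleotide positions: lineages
# acquiring them independently share a derived amino-acid state (convergence)
# without sharing a single derived nucleotide state.
CODON_TABLE = {"TCT": "Ser", "AGC": "Ser", "ATG": "Met"}

lineage_a = "TCT"   # serine via one codon
lineage_b = "AGC"   # serine via an unrelated codon

same_amino_acid = CODON_TABLE[lineage_a] == CODON_TABLE[lineage_b]
shared_positions = sum(a == b for a, b in zip(lineage_a, lineage_b))

print(same_amino_acid)   # True: the amino-acid character is homoplasious
print(shared_positions)  # 0: no nucleotide character supports the grouping
```

Character coding on amino acids scores the two lineages as sharing a derived state, while no nucleotide-sequence character supports that grouping.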


Rigid-plastic/rigid-viscoplastic FE simulation using linear programming for metal forming

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 4 2003
Weili Xu
Abstract For rigid-plastic/rigid-viscoplastic (RP/RVP) FE simulation in metal forming processes, the linear programming (LP) approach has many remarkable advantages, compared with a normal iterative solver. This approach is free from convergence problems and is convenient for dealing with contact surfaces, rigid zones, and friction forces. In this paper, a numerical model for axisymmetrical and plane-strain analysis using RP/RVP and LP is proposed and applied to industrial metal forming. Numerical examples are provided to validate the accuracy and efficiency of the proposed method. Copyright © 2002 John Wiley & Sons, Ltd. [source]
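Part of the appeal of the LP approach is that the optimum is found at a vertex of the feasible region in a finite number of steps, rather than by an iteration that may fail to converge. A toy limit-load problem illustrates this (the constraints and values are assumptions for illustration, not the paper's FE formulation):

```python
from itertools import combinations

# Constraints a * lam + b * s <= c, with s a stress-like unknown and
# lam the load factor to be maximized.
constraints = [
    (1.0, -1.0, 0.0),   # equilibrium: lam <= s
    (0.0,  1.0, 2.0),   # yield condition: s <= sigma_y = 2
    (-1.0, 0.0, 0.0),   # lam >= 0
    (0.0, -1.0, 0.0),   # s >= 0
]

def intersect(c1, c2):
    """Intersection point of two constraint boundaries, or None if parallel."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= c + 1e-9 for a, b, c in constraints)

# In two dimensions the LP optimum lies at a vertex of the feasible
# polygon, so enumerating vertices stands in for a full simplex solver.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: p[0])
print(best)  # limit load factor lam = 2.0, reached at the yield stress
```

The search terminates after checking finitely many vertices, with no iterative solver and hence no convergence criterion to tune.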


Reliable computing in estimation of variance components

JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 6 2008
I. Misztal
Summary The purpose of this study is to present guidelines for selecting statistical and computing algorithms for variance components estimation when computing involves software packages. Two major methods are considered: residual maximum likelihood (REML) and Bayesian estimation via Gibbs sampling. Expectation-Maximization (EM) REML is regarded as a very stable algorithm that is able to converge when covariance matrices are close to singular, but it is slow. Even so, convergence problems can occur with random regression models, especially if the starting values are much lower than those at convergence. Average Information (AI) REML is much faster for common problems, but it relies on heuristics for convergence and may be very slow or even diverge for complex models. REML algorithms for general models become unstable with a larger number of traits. REML by canonical transformation is stable in such cases but supports only a limited class of models. In general, REML algorithms are difficult to program. Bayesian methods via Gibbs sampling are much easier to program than REML, especially for complex models, and they can support much larger data sets; however, the termination criterion can be hard to determine, and the quality of estimates depends on a number of details. Computing speed varies with computing optimizations, with which some large data sets and complex models can be handled in a reasonable time; however, optimizations increase the complexity of programming and restrict the types of models that can be applied. Several examples from past research are discussed to illustrate that different problems required different methods. [source]
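As a small illustration of the Gibbs-sampling alternative discussed above, the sketch below fits a one-way random-effects model, a deliberate simplification (no fixed mean, vague inverse-gamma priors, fixed burn-in), and recovers the two variance components from simulated data:

```python
import random
import statistics

random.seed(1)

# Simulate one-way random-effects data: y[i][j] = u_i + e_ij with
# u_i ~ N(0, 4) and e_ij ~ N(0, 1).
q, n = 30, 20
u_true = [random.gauss(0.0, 2.0) for _ in range(q)]
y = [[u_true[i] + random.gauss(0.0, 1.0) for _ in range(n)] for i in range(q)]

def inv_gamma(shape, rate):
    """Draw from an inverse-gamma distribution."""
    return 1.0 / random.gammavariate(shape, 1.0 / rate)

# Gibbs sampler with vague inverse-gamma priors on both variances.
s2u, s2e = 1.0, 1.0
u = [0.0] * q
keep_u, keep_e = [], []
for it in range(2000):
    for i in range(q):                              # u_i | everything else
        prec = n / s2e + 1.0 / s2u
        mean = (sum(y[i]) / s2e) / prec
        u[i] = random.gauss(mean, prec ** -0.5)
    s2u = inv_gamma(0.01 + q / 2.0, 0.01 + sum(ui * ui for ui in u) / 2.0)
    sse = sum((yij - u[i]) ** 2 for i in range(q) for yij in y[i])
    s2e = inv_gamma(0.01 + q * n / 2.0, 0.01 + sse / 2.0)
    if it >= 500:                                   # discard burn-in
        keep_u.append(s2u)
        keep_e.append(s2e)

print(statistics.mean(keep_u), statistics.mean(keep_e))
```

The full conditionals here are standard conjugate updates; as the abstract notes, the programming burden is light compared with REML, but the choice of burn-in and chain length (fixed here for brevity) is exactly the termination question the authors flag.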


A simplified approach to modeling the co-movement of asset returns

THE JOURNAL OF FUTURES MARKETS, Issue 6 2007
Richard D. F. Harris
The authors propose a simplified multivariate GARCH (generalized autoregressive conditional heteroscedasticity) model (the S-GARCH model), which involves the estimation of only univariate GARCH models, both for the individual return series and for the sum and difference of each pair of series. The covariance between each pair of return series is then imputed from these variance estimates. The proposed model is considerably easier to estimate than existing multivariate GARCH models and does not suffer from the convergence problems that characterize many of these models. Moreover, the model can be easily extended to include more complex dynamics or alternative forms of the GARCH specification. The S-GARCH model is used to estimate the minimum-variance hedge ratio for the FTSE (Financial Times and the London Stock Exchange) 100 Index portfolio, hedged using index futures, and compared to four of the most widely used multivariate GARCH models. Using both statistical and economic evaluation criteria, the S-GARCH model is found to perform at least as well as, and in some cases better than, the other models considered. © 2007 Wiley Periodicals, Inc. Jrl Fut Mark 27:575–598, 2007 [source]
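The covariance imputation at the heart of the S-GARCH idea rests on the identity Cov(x, y) = [Var(x + y) − Var(x − y)] / 4. The sketch below uses simulated i.i.d. data, so plain sample variances stand in for the univariate GARCH variance estimates, but the algebra is the same:

```python
import random
import statistics

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(5000)]
y = [0.6 * xi + random.gauss(0.0, 1.0) for xi in x]

# Variances of the sum and difference series are all that is needed:
var_sum = statistics.variance([xi + yi for xi, yi in zip(x, y)])
var_diff = statistics.variance([xi - yi for xi, yi in zip(x, y)])
cov_imputed = (var_sum - var_diff) / 4.0

# Direct sample covariance for comparison.
mx, my = statistics.mean(x), statistics.mean(y)
cov_direct = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (len(x) - 1)

print(cov_imputed, cov_direct)  # identical up to rounding error
```

Because every quantity on the right-hand side is a univariate variance, each can be estimated by its own univariate GARCH model, which is why the approach sidesteps multivariate-likelihood convergence problems.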


Complementary Log–Log Regression for the Estimation of Covariate-Adjusted Prevalence Ratios in the Analysis of Data from Cross-Sectional Studies

BIOMETRICAL JOURNAL, Issue 3 2009
Alan D. Penman
Abstract We assessed complementary log–log (CLL) regression as an alternative statistical model for estimating multivariable-adjusted prevalence ratios (PR) and their confidence intervals. Using the delta method, we derived an expression for approximating the variance of the PR estimated using CLL regression. Then, using simulated data, we examined the performance of CLL regression in terms of the accuracy of the PR estimates, the width of the confidence intervals, and the empirical coverage probability, and compared it with results obtained from log-binomial regression and stratified Mantel–Haenszel analysis. Within the range of values of our simulated data, CLL regression performed well, with only slight bias of point estimates of the PR and good confidence interval coverage. In addition, and importantly, the computational algorithm did not have the convergence problems occasionally exhibited by log-binomial regression. The technique is easy to implement in SAS (SAS Institute, Cary, NC), and it does not have the theoretical and practical issues associated with competing approaches. CLL regression is an alternative method of binomial regression that warrants further assessment. [source]
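The link between CLL coefficients and prevalence ratios can be shown directly: under log(−log(1 − p)) = b0 + b1·x, prevalence at any covariate value is available in closed form. The coefficients below are hypothetical, not from the paper:

```python
import math

def cll_prevalence(b0, b1, x):
    """Prevalence under a complementary log-log model:
    log(-log(1 - p)) = b0 + b1 * x  =>  p = 1 - exp(-exp(b0 + b1 * x))."""
    return 1.0 - math.exp(-math.exp(b0 + b1 * x))

b0, b1 = -2.0, 0.7   # hypothetical fitted coefficients
p_exposed = cll_prevalence(b0, b1, 1.0)
p_unexposed = cll_prevalence(b0, b1, 0.0)
pr = p_exposed / p_unexposed

print(pr)             # prevalence ratio at these covariate values
print(math.exp(b1))   # low-prevalence approximation of the PR
```

When prevalence is low, −log(1 − p) ≈ p, so the PR is approximately exp(b1); at higher prevalences the two diverge, which is why the paper's delta-method variance works with the exact ratio rather than the approximation.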