Computational Issues
Selected Abstracts

Computational issues in large strain elasto-plasticity: an algorithm for mixed hardening and plastic spin
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 2 2005
Francisco Javier Montáns

Abstract: In this paper, an algorithm for large strain elasto-plasticity with isotropic hyperelasticity, based on the multiplicative decomposition, is formulated. The algorithm includes a (possible) constitutive equation for the plastic spin and mixed hardening in which the principal stress and principal backstress directions are not necessarily preserved. It is shown that if the principal trial stress directions are preserved during plastic flow (as assumed in some algorithms), a plastic spin is inadvertently introduced in the kinematic/mixed hardening case. If the formulation is performed in the principal stress space, a rotation of the backstress is inadvertently introduced as well. The consistent linearization of the algorithm is also addressed in detail. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Estimating time dependent O-D trip tables during peak periods
JOURNAL OF ADVANCED TRANSPORTATION, Issue 3 2000
Srinivas S. Pulugurtha

Intelligent transportation systems (ITS) have been used to alleviate congestion problems arising from demand during peak periods. The success of ITS strategies relies heavily on two factors: 1) the ability to accurately estimate the temporal and spatial distribution of travel demand on the transportation network during peak periods, and 2) the ability to provide real-time route guidance to users. This paper addresses the first factor. A model to estimate time-dependent origin-destination (O-D) trip tables in urban areas during peak periods is proposed. The daily peak travel period is divided into several time slices to facilitate simulation and modeling. In urban areas, a majority of trips during peak periods are work trips; for illustration purposes, only peak-period work trips are considered in this paper. The proposed methodology is based on the arrival pattern of trips at a traffic analysis zone (TAZ) and the distribution of their travel times. The travel time matrix for the peak period, the O-D trip table for the peak period, and the number of trips expected to arrive at each TAZ at different work start times are inputs to the model. The model outputs are O-D trip tables for each time slice in the peak period. Data from 1995 for the Las Vegas metropolitan area are used to test and validate the model and to illustrate its application. The model is reasonably robust, but some lack of precision was observed, for two possible reasons: 1) rounding off, and 2) the low ratio of the total number of trips to the total number of O-D pair combinations. Hence, the effect of increasing this ratio on error estimates was studied: the ratio was increased by multiplying each O-D pair trip element by a scaling factor, and better estimates were obtained. Computational issues involved in the simulation and modeling process are discussed. [source]

Multiple-relaxation-time lattice Boltzmann computation of channel flow past a square cylinder with an upstream control bi-partition
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 6 2010
M. A. Moussaoui

Abstract: The present paper deals with the application of the multiple-relaxation-time lattice Boltzmann equation (MRT-LBE) to the simulation of channel flow with a bi-partition located upstream of a square cylinder in order to control the flow. Numerical investigations have been carried out for different heights and positions of the bi-partition at a Reynolds number of 250. Key computational issues are the computation of the fluid forces acting on the square cylinder, the vortex-shedding frequency, and the impact of such a bluff body on the flow pattern. Particular attention is paid to the drag and lift coefficients on the square cylinder. The predicted results from the MRT-LBE simulations show that in most cases the interaction was beneficial insofar as the drag of the square block was lower with the bi-partition than without it. Fluctuating side forces due to vortex shedding from the main body were also reduced for most bi-partition positions. Copyright © 2009 John Wiley & Sons, Ltd. [source]
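As background for the elasto-plasticity abstract above: algorithms of this family start from the multiplicative split of the deformation gradient and, typically, an exponential return map in principal logarithmic strains. A textbook statement of that scheme (not necessarily the paper's exact mixed-hardening update) is:

    F = F^{e} F^{p}, \qquad
    b^{e,\mathrm{tr}}_{n+1} = f_{n+1}\, b^{e}_{n}\, f_{n+1}^{\mathsf{T}}, \qquad
    \varepsilon^{e,\mathrm{tr}} = \tfrac{1}{2} \ln b^{e,\mathrm{tr}}_{n+1}, \qquad
    \varepsilon^{e}_{n+1} = \varepsilon^{e,\mathrm{tr}} - \Delta\gamma\, N_{n+1}

with f_{n+1} = F_{n+1} F_{n}^{-1} the relative deformation gradient and N_{n+1} the plastic flow direction. The paper's point is that assuming the principal trial stress directions are preserved in this update silently imposes a particular plastic spin once kinematic or mixed hardening is present.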
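A minimal sketch of the time-slicing idea in the O-D trip-table abstract above, assuming a uniform work-start pattern; all zone counts, travel times, and shares are hypothetical stand-ins for the model's inputs:

    import numpy as np

    # Hypothetical illustration of the time-slicing step: distribute a
    # peak-period O-D work-trip table over departure-time slices by working
    # backwards from work start times through zone-to-zone travel times.
    n_zones = 3
    slice_len = 15                                   # minutes per time slice
    slices = np.arange(6 * 60, 9 * 60, slice_len)    # 6:00-9:00 a.m. peak

    od_peak = np.array([[0., 40., 60.],              # peak-period O-D trips
                        [30., 0., 50.],
                        [20., 10., 0.]])
    travel = np.array([[5., 20., 35.],               # travel times in minutes
                       [25., 5., 30.],
                       [40., 15., 5.]])

    # Share of trips to each destination whose work start falls in slice k
    # (uniform here; the model uses observed arrival patterns at each TAZ).
    start_share = np.full((n_zones, slices.size), 1.0 / slices.size)

    od_by_slice = np.zeros((slices.size, n_zones, n_zones))
    for k, t in enumerate(slices):
        for o in range(n_zones):
            for d in range(n_zones):
                dep = t - travel[o, d]               # required departure time
                j = int((dep - slices[0]) // slice_len)
                if 0 <= j < slices.size:             # departure slice exists
                    od_by_slice[j, o, d] += od_peak[o, d] * start_share[d, k]

    print(od_by_slice.sum(axis=(1, 2)))              # departures per slice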
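A hedged post-processing sketch for the lattice-Boltzmann abstract above: given a force history on the cylinder (which an MRT-LBE code would supply, e.g. by momentum exchange), form the drag and lift coefficients and estimate the shedding frequency. The force signals below are synthetic placeholders so the script runs stand-alone:

    import numpy as np

    rho, u_max, h = 1.0, 0.1, 20.0        # density, inflow speed, side length (lattice units)
    t = np.arange(20000, dtype=float)     # time steps
    fx = 0.50 + 0.02 * np.sin(2 * np.pi * 0.005 * t)   # placeholder drag force
    fy = 0.30 * np.sin(2 * np.pi * 0.005 * t)          # placeholder lift force

    q = 0.5 * rho * u_max ** 2 * h        # dynamic pressure x frontal length (2-D)
    cd, cl = fx / q, fy / q               # drag and lift coefficients

    # Vortex-shedding frequency = dominant peak of the lift spectrum;
    # Strouhal number St = f * h / u_max.
    spec = np.abs(np.fft.rfft(cl - cl.mean()))
    freqs = np.fft.rfftfreq(cl.size, d=1.0)
    f_shed = freqs[spec.argmax()]
    print(f"mean Cd = {cd.mean():.3f}, St = {f_shed * h / u_max:.3f}")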
Time asymmetry, nonexponential decay, and complex eigenvalues in the theory and computation of resonance states
INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 2 2002
Cleanthes A. Nicolaides

Abstract: Stationary-state quantum mechanics presents no difficulties in defining and computing discrete excited states, because they obey the rules established by the properties of Hilbert space. However, when this idealization has to be abandoned to formulate a theory of excited states dissipating into a continuous spectrum, the problem acquires additional interest in many fields of physics. In this article, the theory of resonances in the continuous spectrum is formulated as a problem of decaying states, whose treatment can entail time-dependent as well as energy-dependent theories. The author focuses on certain formal and computational issues and discusses their application to polyelectronic atomic states. It is argued that crucial to the theory is the understanding and computation of a multiparticle localized wavepacket, Ψ0, at t = 0, having a real energy E0. Assuming this as the origin, without memory of the excitation process, the author discusses aspects of time-dependent dynamics, for t → 0 as well as for t → ∞, and the possible significance of nonexponential decay in the understanding of time asymmetry. Also discussed are the origin of the complex eigenvalue Schrödinger equation (CESE) satisfied by resonance states and the state-specific methodology for its solution. The complex eigenvalue drives the decay exponentially, with a rate Γ, to a good approximation. It is connected to E0 via analytic continuation of the complex self-energy function A(z) (z complex) into the second Riemann sheet, or via the imposition of outgoing-wave boundary conditions on the stationary-state Schrödinger equation satisfied by the Fano standing-wave superposition in the vicinity of E0. If the nondecay amplitude, G(t), is evaluated by inserting the unit operator I = ∫ dE |E⟩⟨E| …

Strongly absolute stability of Lur'e descriptor systems: Popov-type criteria
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 7 2009
Chunyu Yang

Abstract: In this paper, we consider the strongly absolute stability problem of Lur'e descriptor systems (LDSs). First, we define a generalized Lur'e Lyapunov function (GLLF) and show that the negative definiteness of the derivative of the GLLF guarantees the strongly absolute stability of LDSs. As a result, the existing Popov-type criteria are reduced to sufficient conditions for the existence of the GLLF. Then, we propose a necessary and sufficient condition for the existence of the GLLF to guarantee the strongly absolute stability of LDSs; this criterion is shown to be less conservative than the existing ones. Finally, we discuss the computational issues and present two numerical examples to illustrate the effectiveness of the obtained results. Copyright © 2008 John Wiley & Sons, Ltd. [source]
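A numerical illustration of the exponential-decay statement in the resonance-states abstract above, under the usual approximation that the complex eigenvalue E0 − iΓ/2 dominates; the values are illustrative, not from the paper:

    import numpy as np

    # Complex eigenvalue E0 - i*Gamma/2 gives a nondecay amplitude
    # G(t) ~ exp(-i*E0*t - Gamma*t/2), so the survival probability
    # |G(t)|^2 decays as exp(-Gamma*t). Illustrative values, hbar = 1.
    E0, gamma = 0.5, 0.01
    for t in np.linspace(0.0, 500.0, 6):
        G = np.exp(-1j * E0 * t - 0.5 * gamma * t)
        print(f"t = {t:5.0f}   |G(t)|^2 = {abs(G) ** 2:.4f}"
              f"   exp(-Gamma*t) = {np.exp(-gamma * t):.4f}")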
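For contrast with the descriptor-system criteria in the abstract above, a grid check of the classical Popov criterion for an ordinary state-space Lur'e system is easy to sketch; the matrices, sector bound, and Popov multiplier below are illustrative:

    import numpy as np

    # Classical Popov criterion for x' = Ax + b*u, y = c*x with a
    # sector-[0, k] nonlinearity: absolute stability holds if, for some
    # q >= 0,
    #     Re[(1 + j*w*q) * G(jw)] + 1/k > 0   for all w >= 0,
    # where G(s) = c (sI - A)^(-1) b.
    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    b = np.array([[0.0], [1.0]])
    c = np.array([[1.0, 0.0]])
    k, q = 5.0, 0.5                       # sector bound and Popov multiplier

    ok = True
    for w in np.logspace(-2, 3, 2000):
        G = (c @ np.linalg.solve(1j * w * np.eye(2) - A, b)).item()
        if ((1 + 1j * w * q) * G).real + 1.0 / k <= 0.0:
            ok = False
            break
    print("Popov condition holds on the frequency grid:", ok)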
Processing and classification of protein mass spectra
MASS SPECTROMETRY REVIEWS, Issue 3 2006
Melanie Hilario

Abstract: Among the many applications of mass spectrometry, biomarker pattern discovery from protein mass spectra has aroused considerable interest in the past few years. While research efforts have raised hopes of early and less invasive diagnosis, they have also brought to light the many issues to be tackled before mass-spectra-based proteomic patterns become routine clinical tools. Known issues cover the entire pipeline leading from sample collection through mass spectrometry analytics to biomarker pattern extraction, validation, and interpretation. This study focuses on the data-analytical phase, which takes as input mass spectra of biological specimens and discovers patterns of peak masses and intensities that discriminate between different pathological states. We survey current work and investigate computational issues concerning the different stages of the knowledge discovery process: exploratory analysis, quality control, and diverse transforms of mass spectra, followed by further dimensionality reduction, classification, and model evaluation. We conclude with a brief discussion of the critical biomedical task of analyzing discovered discriminatory patterns to identify their component proteins as well as to interpret and validate their biological implications. © 2006 Wiley Periodicals, Inc., Mass Spectrom Rev 25:409–449, 2006 [source]

On E-Auctions for Procurement Operations
PRODUCTION AND OPERATIONS MANAGEMENT, Issue 4 2007
Michael H. Rothkopf

Electronic auctions have revolutionized procurement in the last decade. In many situations, they have replaced negotiations for supplier selection and price setting. While they have often greatly reduced transaction costs and increased competition, they have also encountered problems and resistance from suppliers resenting their intrusion on cooperative supplier/buyer relationships. In response to these issues, procurement auctions have evolved in radical new directions. Buyers use business rules to limit adverse changes. Some procurement auctions allow bidders to offer variants in the specifications of products to be supplied. Most important, some buyers are allowing bidders to bid on packages of items, not just individual items. This tends to change procurement auctions from zero-sum fights over supplier profit margins into win-win searches for synergies. These changes have opened up many new research areas. Researchers are trying to improve how to deal with the computational issues involved in package auctions and to analyze the new auction forms that are evolving. In general, equilibrium incentives are not known, and dealing with ties in package auctions is an issue. Computer scientists are analyzing the use of computerized bidding agents. Mechanisms that combine auctions with fixed buy prices or with negotiations also need to be analyzed. [source]

Robust inference in generalized linear models for longitudinal data
THE CANADIAN JOURNAL OF STATISTICS, Issue 2 2006
Sanjoy K. Sinha

Abstract: The author develops a robust quasi-likelihood method, which appears to be useful for down-weighting any influential data points when estimating the model parameters. He illustrates the computational issues of the method in an example. He uses simulations to study the behaviour of the robust estimates when data are contaminated with outliers, and he compares these estimates to those obtained by the ordinary quasi-likelihood method. [source]
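A toy version of the preprocessing stages surveyed in the mass-spectra abstract above (baseline subtraction, normalization, peak picking) on one synthetic spectrum; real pipelines add calibration, peak alignment across spectra, dimensionality reduction, and a classifier:

    import numpy as np
    from scipy.signal import find_peaks

    # One synthetic spectrum: decaying baseline + three Gaussian peaks + noise.
    rng = np.random.default_rng(0)
    mz = np.linspace(1000.0, 10000.0, 5000)
    baseline = 50.0 * np.exp(-mz / 4000.0)
    peaks_true = 200.0 * np.exp(
        -0.5 * ((mz[:, None] - np.array([2500.0, 4200.0, 7300.0])) / 15.0) ** 2
    ).sum(axis=1)
    spectrum = baseline + peaks_true + rng.normal(0.0, 2.0, mz.size)

    # Crude baseline estimate: running minimum over a wide window.
    win = 250
    base_est = np.array([spectrum[max(0, i - win):i + win].min()
                         for i in range(mz.size)])
    signal = np.clip(spectrum - base_est, 0.0, None)
    signal /= signal.sum()                       # total-ion-current normalization

    # Peak picking; the prominence threshold is an arbitrary choice here.
    idx, _ = find_peaks(signal, prominence=0.1 * signal.max())
    print("detected peak m/z:", np.round(mz[idx], 1))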
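The computational issue named in the e-auction abstract above is winner determination: choosing an award of package bids that covers all items at minimum cost, which is NP-hard in general. A brute-force sketch on hypothetical bids:

    from itertools import combinations

    # Tiny reverse auction: award a set of package bids that exactly covers
    # all items at minimum total cost. Exhaustive search is fine at this
    # size; real instances need integer programming.
    items = frozenset({"A", "B", "C"})
    bids = [                                  # (supplier, package, price)
        ("s1", frozenset({"A"}), 60),
        ("s1", frozenset({"B", "C"}), 90),    # package bid with synergy
        ("s2", frozenset({"A", "B"}), 100),
        ("s2", frozenset({"C"}), 55),
        ("s3", frozenset({"A", "B", "C"}), 160),
    ]

    best_cost, best_award = float("inf"), None
    for r in range(1, len(bids) + 1):
        for award in combinations(bids, r):
            covered = frozenset().union(*(pkg for _, pkg, _ in award))
            disjoint = sum(len(pkg) for _, pkg, _ in award) == len(covered)
            cost = sum(price for _, _, price in award)
            if disjoint and covered == items and cost < best_cost:
                best_cost, best_award = cost, award

    for supplier, pkg, price in best_award:   # winning award costs 150 here
        print(supplier, sorted(pkg), price)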
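A sketch of the down-weighting idea in the robust quasi-likelihood abstract above, stripped to an ordinary Poisson regression (the paper treats longitudinal GLMs, and a fully robust estimator would also include a consistency-correction term); the data and the Huber tuning constant are illustrative:

    import numpy as np

    # Poisson regression fit by Huber-weighted quasi-likelihood scoring.
    # Pearson residuals pass through Huber weights, bounding the influence
    # of outliers. Simulated data with five contaminated responses.
    rng = np.random.default_rng(1)
    n = 200
    x = rng.normal(size=n)
    y = rng.poisson(np.exp(0.5 + 0.8 * x)).astype(float)
    y[:5] += 40.0                              # gross outliers
    X = np.column_stack([np.ones(n), x])

    def huber_weight(r, c=1.345):
        return np.where(np.abs(r) <= c, 1.0, c / np.abs(r))

    beta = np.zeros(2)
    for _ in range(100):                       # robust Fisher-scoring loop
        mu = np.exp(X @ beta)
        w = huber_weight((y - mu) / np.sqrt(mu))   # V(mu) = mu for Poisson
        score = X.T @ (w * (y - mu))
        info = X.T @ (w[:, None] * mu[:, None] * X)
        step = np.linalg.solve(info, score)
        beta += step
        if np.max(np.abs(step)) < 1e-8:
            break
    print("robust (intercept, slope):", beta.round(3))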
Optimal Bayesian Design for Patient Selection in a Clinical Study
BIOMETRICS, Issue 3 2009
Manuela Buzoianu

Summary: Bayesian experimental design for a clinical trial involves specifying a utility function that models the purpose of the trial, in this case the selection of patients for a diagnostic test. The best sample of patients is selected by maximizing expected utility. This optimization task poses difficulties due to a high-dimensional discrete design space and to an expected-utility formula of high complexity. A simulation-based optimal design method is feasible in this case. In addition, two deterministic algorithms that perform a systematic search over the design space are developed to address the computational issues. [source]

Bayesian Prediction of Spatial Count Data Using Generalized Linear Mixed Models
BIOMETRICS, Issue 2 2002
Ole F. Christensen

Summary: Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, we demonstrate that so-called Langevin-Hastings updates are useful for efficient simulation of the posterior distributions, and we discuss computational issues concerning prediction. [source]
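A minimal simulation-based expected-utility evaluation in the spirit of the Bayesian-design abstract above, with the discrete patient-selection space simplified to a sample-size choice; the prior, utility, and cost are stand-ins:

    import numpy as np

    # Score each candidate design by Monte Carlo expected utility:
    # draw a parameter from the prior, simulate the study, compute a
    # utility, and average. Here the utility trades posterior precision
    # against recruitment cost.
    rng = np.random.default_rng(2)

    def expected_utility(n_patients, n_sim=4000, cost=0.02):
        theta = rng.beta(2.0, 2.0, size=n_sim)          # prior on test accuracy
        y = rng.binomial(n_patients, theta)             # simulated outcomes
        a, b = 2.0 + y, 2.0 + n_patients - y            # conjugate Beta posterior
        post_sd = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1.0)))
        return np.mean(-post_sd) - cost * n_patients

    scores = {n: expected_utility(n) for n in (10, 20, 40, 80)}
    print("expected utilities:", {n: round(u, 4) for n, u in scores.items()})
    print("best design:", max(scores, key=scores.get))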
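One common form of the Langevin-Hastings (MALA) update mentioned in the last abstract: propose along the gradient of the log-posterior, then accept or reject with a Metropolis-Hastings correction. The target below is a toy stand-in for the actual GLMM posterior:

    import numpy as np

    # Metropolis-adjusted Langevin sampler for a generic log-posterior:
    # drift half a step along the gradient, add Gaussian noise, and
    # correct with an accept/reject test.
    rng = np.random.default_rng(3)

    def log_post(z):                 # toy target: standard normal
        return -0.5 * np.sum(z ** 2)

    def grad_log_post(z):
        return -z

    def mala(z, n_iter=5000, h=0.5):
        accepted = 0
        for _ in range(n_iter):
            fwd = z + 0.5 * h * grad_log_post(z)          # proposal mean
            prop = fwd + np.sqrt(h) * rng.normal(size=z.size)
            bwd = prop + 0.5 * h * grad_log_post(prop)    # reverse-move mean
            log_alpha = (log_post(prop) - log_post(z)
                         - np.sum((z - bwd) ** 2) / (2.0 * h)
                         + np.sum((prop - fwd) ** 2) / (2.0 * h))
            if np.log(rng.uniform()) < log_alpha:
                z, accepted = prop, accepted + 1
        return z, accepted / n_iter

    _, rate = mala(np.zeros(5))
    print("acceptance rate:", round(rate, 2))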