Closed Form Expression


Selected Abstracts


F-bar-based linear triangles and tetrahedra for finite strain analysis of nearly incompressible solids.

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2005
Part I: formulation and benchmarking
Abstract This paper proposes a new technique which allows the use of simplex finite elements (linear triangles in 2D and linear tetrahedra in 3D) in the large strain analysis of nearly incompressible solids. The new technique extends the F-bar method proposed by de Souza Neto et al. (Int. J. Solids and Struct. 1996; 33: 3277-3296) and is conceptually very simple: it relies on the enforcement of (near-)incompressibility over a patch of simplex elements (rather than the point-wise enforcement of conventional displacement-based finite elements). Within the framework of the F-bar method, this is achieved by assuming, for each element of a mesh, a modified (F-bar) deformation gradient whose volumetric component is defined as the volume change ratio of a pre-defined patch of elements. The resulting constraint relaxation effectively overcomes volumetric locking and allows the successful use of simplex elements under finite strain near-incompressibility. As with the original F-bar procedure, the present methodology preserves the displacement-based structure of the finite element equations as well as the strain-driven format of standard algorithms for numerical integration of path-dependent constitutive equations, and can be used regardless of the constitutive model adopted. The new elements are implemented within an implicit quasi-static environment. In this context, a closed form expression for the exact tangent stiffness of the new elements is derived. This allows the use of the full Newton-Raphson scheme for equilibrium iterations. The performance of the proposed elements is assessed by means of a comprehensive set of benchmarking two- and three-dimensional numerical examples. Copyright © 2005 John Wiley & Sons, Ltd. [source]
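The patch-based relaxation described above can be sketched in a few lines: each element's deformation gradient keeps its isochoric part, while its volumetric part is rescaled to the volume change ratio of the whole patch. The sketch below is a minimal illustration of that idea (the function names and the two-element patch are hypothetical, not the paper's implementation):

```python
import numpy as np

def fbar_patch(F_elems, ref_vols):
    """Modified (F-bar) deformation gradients for a patch of simplex elements.

    The volumetric part of each element's F is replaced by the volume
    change ratio of the whole patch, relaxing the incompressibility
    constraint from point-wise to patch-wise.
    """
    dim = F_elems[0].shape[0]
    J = np.array([np.linalg.det(F) for F in F_elems])  # element volume ratios
    # patch volume change ratio: current patch volume / reference patch volume
    J_patch = np.sum(J * ref_vols) / np.sum(ref_vols)
    # rescale each element's volumetric component to match the patch ratio
    return [(J_patch / Je) ** (1.0 / dim) * F for Je, F in zip(J, F_elems)]

# two 2D elements: one expanding, one contracting by the same factor
F_elems = [np.diag([1.1, 1.1]), np.diag([1 / 1.1, 1 / 1.1])]
Fbar = fbar_patch(F_elems, np.array([1.0, 1.0]))
# det of each modified gradient now equals the common patch volume ratio
print([round(np.linalg.det(F), 6) for F in Fbar])
```

Note how the modified gradients share a single volumetric measure over the patch, which is what overcomes the volumetric locking of point-wise enforcement.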


Utility transversality: a value-based approach

JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 5-6 2005
James E. Matheson
Abstract We examine multiattribute decision problems where a value function is specified over the attributes of a decision problem, as is typically done in the deterministic phase of a decision analysis. When uncertainty is present, a utility function is assigned over the value function to represent the decision maker's risk attitude towards value, which we refer to as a value-based approach. A fundamental result of using the value-based approach is a closed form expression that relates the risk aversion functions of the individual attributes to the trade-off functions between them. We call this relation utility transversality. The utility transversality relation asserts that once the value function is specified, there is only one dimension of risk attitude in multiattribute decision problems. The construction of multiattribute utility functions using the value-based approach provides the flexibility to model more general functional forms that do not require assumptions of utility independence. For example, we derive a new family of multiattribute utility functions that describes richer preference structures than the usual multilinear family. We also show that many classical results of utility theory, such as risk sharing and the notion of a corporate risk tolerance, can be derived simply from the utility transversality relations by appropriate choice of the value function. Copyright © 2007 John Wiley & Sons, Ltd. [source]
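As a toy illustration of the value-based approach, the sketch below assigns an exponential utility with a single risk tolerance over an additive value function, so that one risk-attitude parameter governs gambles over both attributes and certainty equivalents can be computed entirely in value space. The weights and risk tolerance are illustrative assumptions, not taken from the paper:

```python
import math

# Hypothetical value-based setup: additive value over two attributes,
# exponential utility over value with a single risk tolerance rho --
# i.e. one dimension of risk attitude.
def value(x, y, wx=0.6, wy=0.4):
    return wx * x + wy * y

def utility(x, y, rho=2.0):
    return -math.exp(-value(x, y) / rho)

# certainty equivalent of a 50-50 gamble, computed entirely in value space
def cert_equiv_value(v1, v2, rho=2.0):
    eu = 0.5 * (-math.exp(-v1 / rho)) + 0.5 * (-math.exp(-v2 / rho))
    return -rho * math.log(-eu)

ce = cert_equiv_value(value(10, 0), value(0, 10), rho=2.0)
print(round(ce, 4))  # below the expected value of 5, reflecting risk aversion
```

Because risk attitude enters only through the scalar utility over value, trade-offs between the attributes are fixed by the value function alone, consistent with the one-dimension-of-risk-attitude claim.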


Treating missing values in INAR(1) models: An application to syndromic surveillance data

JOURNAL OF TIME SERIES ANALYSIS, Issue 1 2010
Jonas Andersson
Time-series models for count data have attracted increased interest in recent years. The existing literature refers to the case of fully observed data. In this article, methods for estimating the parameters of the first-order integer-valued autoregressive (INAR(1)) model in the presence of missing data are proposed. The first method maximizes a conditional likelihood constructed from the observed data, based on the k-step-ahead conditional distributions, to account for the gaps in the data. The second approach is based on an iterative scheme in which missing values are imputed in order to update the estimated parameters. The first method is useful when the predictive distributions have simple forms. We derive this approach in full detail for the case where the innovations follow a finite mixture of Poisson distributions. The second method is applicable when there is no closed form expression for the conditional likelihood, or when it is hard to derive. The proposed methods are applied to a dataset concerning syndromic surveillance during the Athens 2004 Olympic Games. [source]
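For Poisson innovations (a special case of the finite Poisson mixture treated in the article), the k-step-ahead conditional distribution of an INAR(1) process is a convolution of binomially thinned survivors and accumulated Poisson innovations, which makes the gap-bridging conditional likelihood easy to write down. The sketch below is a minimal illustration under that assumption (not the authors' code):

```python
import numpy as np
from scipy.stats import binom, poisson

rng = np.random.default_rng(0)

def simulate_inar1(n, alpha, lam):
    """Poisson INAR(1): X_t = alpha o X_{t-1} + eps_t, eps_t ~ Poisson(lam)."""
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))  # start at the stationary mean
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

def kstep_loglik(x_obs, times, alpha, lam):
    """Conditional log-likelihood over consecutive observed points, using
    the k-step-ahead distribution to bridge gaps of missing values."""
    ll = 0.0
    for i in range(1, len(times)):
        k = times[i] - times[i - 1]
        p = alpha ** k                        # k-step survival probability
        mu = lam * (1 - p) / (1 - alpha)      # mean of accumulated innovations
        # convolution of binomial survivors and Poisson innovations
        j = np.arange(min(x_obs[i - 1], x_obs[i]) + 1)
        lik = np.sum(binom.pmf(j, x_obs[i - 1], p)
                     * poisson.pmf(x_obs[i] - j, mu))
        ll += np.log(lik)
    return ll

x = simulate_inar1(300, alpha=0.5, lam=2.0)
keep = np.sort(rng.choice(300, size=240, replace=False))  # 20% missing
ll = kstep_loglik(x[keep], keep, alpha=0.5, lam=2.0)
print(round(ll, 2))
```

Maximizing `kstep_loglik` over `(alpha, lam)` (e.g. with a grid or `scipy.optimize`) gives the first estimation method; when k = 1 everywhere it reduces to the usual fully observed conditional likelihood.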


Application of the FCEL method to a microstrip disk antenna with a parasitic director

MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 8 2009
T. B. Berbar
Abstract The method of finite coupled elementary lines (FCEL) is used in the analysis of a microstrip disk antenna with a larger parasitic director. A closed form expression for the effective radius of the structure used is proposed. The numerical results obtained are compared with published measurements, and good agreement is observed. © 2009 Wiley Periodicals, Inc. Microwave Opt Technol Lett 51: 1911-1918, 2009; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.24506 [source]
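The letter's FCEL-derived expression is not reproduced here, but the classical closed-form fringing-field correction for the effective radius of a circular microstrip patch gives a feel for what such an expression looks like. The sketch below uses that classical formula with illustrative dimensions; it is not the expression proposed in the letter:

```python
import math

def effective_radius(a, h, eps_r):
    """Classical closed-form effective radius of a circular microstrip
    patch (fringing-field correction); illustrative only, not the
    FCEL-derived expression proposed in the letter."""
    return a * math.sqrt(1 + (2 * h / (math.pi * a * eps_r))
                         * (math.log(math.pi * a / (2 * h)) + 1.7726))

# 20 mm disk on a 1.6 mm FR-4-like substrate (eps_r ~ 4.4), all lengths in m
a_e = effective_radius(0.020, 0.0016, 4.4)
print(round(a_e * 1000, 3))  # effective radius in mm, slightly above 20
```

The correction enlarges the physical radius to account for fringing fields at the patch edge, which shifts the predicted resonant frequency downward relative to the bare-disk model.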


Bandpass filter modeling employing Lorentzian distribution

MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 5 2009
Mahmoud Al Ahmad
Abstract This letter takes a close look at modeling bandpass filter performance with the Lorentzian distribution function. The Lorentzian function parameters are correlated with the filter parameters, namely its bandwidth and center frequency. The zeros and poles of the filter are extracted from the closed form expression of the Lorentzian function, which is used to construct the rational model of the filter. This procedure is expected to optimize the overall filter performance and to construct a consistent equivalent circuit from its computed poles and zeros. © 2009 Wiley Periodicals, Inc. Microwave Opt Technol Lett 51: 1167-1169, 2009; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.24288 [source]
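The pole-zero extraction can be illustrated with a second-order band-pass rational model whose magnitude follows a Lorentzian near resonance. The mapping below is a plausible sketch under that assumption, not necessarily the letter's exact construction:

```python
import numpy as np

def lorentzian(f, f0, bw):
    """Lorentzian magnitude response: unit peak at f0, half-power width bw."""
    return (bw / 2) ** 2 / ((f - f0) ** 2 + (bw / 2) ** 2)

def bandpass_poles_zeros(f0, bw):
    """Poles/zeros of the second-order band-pass rational model whose
    magnitude follows the Lorentzian near resonance (an illustrative
    mapping from center frequency and bandwidth)."""
    w0, dw = 2 * np.pi * f0, 2 * np.pi * bw
    # H(s) = dw * s / (s**2 + dw * s + w0**2): a zero at s = 0 and a
    # complex-conjugate pole pair from the quadratic denominator
    poles = np.roots([1.0, dw, w0 ** 2])
    zeros = np.array([0.0])
    return poles, zeros

poles, zeros = bandpass_poles_zeros(f0=2.4e9, bw=100e6)
print(np.real(poles) / (2 * np.pi))  # pole damping ~ -bw/2 in Hz
```

The real part of the pole pair is set directly by the half-power bandwidth, which is how the Lorentzian parameters carry over into a rational (pole-zero) model suitable for equivalent-circuit synthesis.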


On testing of parameters in modulated power law process

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 4 2001
K. Muralidharan
Abstract The modulated power law process (MPLP) is often used to model failure data from repairable systems when both renewal-type behaviour and time trends are present. The MPLP allows the failure rate of a system to be affected by failures and repairs. Since the maximum likelihood estimates do not have closed form expressions, they have to be approximated, and hence deriving a test procedure is difficult. Black and Rigdon (1996) proposed asymptotic MLEs and asymptotic likelihood ratio tests for the parameters, which also do not have closed form expressions and hence are not easy to apply. In this paper, we derive a closed form expression for the test statistic which is simple and easy to apply for testing (i) H0: κ = 1 versus H1: κ ≠ 1 when β is known, and (ii) H0: (β = 1 and κ = 1) versus H1: (β ≠ 1 or κ ≠ 1). A simulation study of percentiles and powers is given. We also compare the performance of the test with that of Black and Rigdon's (1996) test. Some numerical examples are also provided to illustrate the testing procedures. Copyright © 2001 John Wiley & Sons, Ltd. [source]
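For the ordinary (non-modulated) power law process, the corresponding closed form test of H0: β = 1 is classical and gives a feel for the style of statistic involved. The sketch below implements that simpler test; it is illustrative only and is not the MPLP statistic derived in the paper:

```python
import math
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

def plp_beta_test(times, T, alpha=0.05):
    """Test H0: beta = 1 (homogeneous Poisson) for a power law process
    observed on (0, T].  This is the classical non-modulated closed-form
    test, shown only to illustrate the style of statistic; the paper's
    statistic for the *modulated* PLP differs."""
    n = len(times)
    beta_hat = n / sum(math.log(T / t) for t in times)  # closed-form MLE
    stat = 2 * n / beta_hat          # ~ chi2 with 2n d.o.f. under H0
    lo, hi = chi2.ppf(alpha / 2, 2 * n), chi2.ppf(1 - alpha / 2, 2 * n)
    return beta_hat, stat, not (lo <= stat <= hi)

# homogeneous Poisson failures on (0, 100]: H0 should usually be retained
t = np.sort(rng.uniform(0, 100, size=30))
beta_hat, stat, reject = plp_beta_test(t, 100.0)
print(round(beta_hat, 3), reject)
```

Because both the MLE and the null distribution of the statistic are available in closed form, the test needs no numerical approximation, which is exactly the convenience the paper seeks to recover for the modulated case.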