Multipliers


Kinds of Multipliers

  • fiscal multiplier
  • Lagrange multiplier

Terms modified by Multipliers

  • multiplier function
  • multiplier method

Selected Abstracts


    MIXED INDUSTRIAL STRUCTURE AND SHORT-RUN FISCAL MULTIPLIER

    AUSTRALIAN ECONOMIC PAPERS, Issue 2 2008
    ROBERTO CENSOLO
    Existing studies on the fiscal multiplier under imperfect competition assume a symmetric market structure with identical firms. This paper examines the fiscal policy implications of introducing a multisectoral economy, where a composite commodity is offered in many varieties within a market of monopolistic competition and a homogeneous good is produced in a perfectly competitive environment. Within the context of this mixed industrial structure we show that the size of the short-run multiplier crucially depends on the composition of public expenditure chosen by the government. [source]


    On the Mythology of the Keynesian Multiplier: Unmasking the Myth and the Inadequacies of Some Earlier Criticisms

    AMERICAN JOURNAL OF ECONOMICS AND SOCIOLOGY, Issue 4 2001
    James C. W. Ahiakpor
    Keynes's multiplier story invites acceptance by building on the fact that people typically consume only a fraction of their income and that such purchases are incomes for sellers. By misrepresenting the classical definition of saving and the meaning of Say's Law, Keynes laid the grounds for extolling the virtues of consumption spending as determining income and employment growth. But the mythology of the multiplier story becomes clear when we ask, "From where do people find the means to purchase consumption goods, other than production?" The inadequacies of several earlier criticisms stem from their failure to focus on this fundamental point. [source]
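
    The spending-rounds arithmetic that the abstract questions can be made concrete. Below is a minimal sketch with hypothetical figures (an MPC of 0.8 and an injection of 100; neither number comes from the paper), summing the rounds of induced consumption as a geometric series and comparing against the closed-form multiplier 1/(1 - MPC).

```python
# Hypothetical numbers (not from the paper): marginal propensity to consume
# (MPC) of 0.8 and an initial injection of 100. Each spending round becomes
# income, of which a fraction MPC is spent again, so the rounds form a
# geometric series whose sum is injection / (1 - MPC).
mpc = 0.8
injection = 100.0

total = sum(injection * mpc ** k for k in range(1000))   # 100 + 80 + 64 + ...
multiplier = 1.0 / (1.0 - mpc)                           # closed form: 5.0

print(total, multiplier * injection)   # both approximately 500.0
```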


    Generalized Method of Moments With Many Weak Moment Conditions

    ECONOMETRICA, Issue 3 2009
    Whitney K. Newey
    Using many moment conditions can improve efficiency but makes the usual generalized method of moments (GMM) inferences inaccurate. Two-step GMM is biased. Generalized empirical likelihood (GEL) has smaller bias, but the usual standard errors are too small in instrumental variable settings. In this paper we give a new variance estimator for GEL that addresses this problem. It is consistent under the usual asymptotics and, under many weak moment asymptotics, is larger than usual and is consistent. We also show that the Kleibergen (2005) Lagrange multiplier and conditional likelihood ratio statistics are valid under many weak moments. In addition, we introduce a jackknife GMM estimator, but find that GEL is asymptotically more efficient under many weak moments. In Monte Carlo examples we find that t-statistics based on the new variance estimator have nearly correct size in a wide range of cases. [source]


    The Continuing Muddles of Monetary Theory: A Steadfast Refusal to Face Facts

    ECONOMICA, Issue 2009
    C. A. E. GOODHART
    Lionel Robbins was concerned about the methodology of economic science. When he discussed the relationship between theory and 'reality', two of the examples of inappropriate relationships were taken from monetary economics. Such shortcomings continue. Among the worst are: (1) IS/LM: whereby the monetary authorities set the monetary base, and the interest rate is market determined; (2) the monetary base multiplier of bank deposits, and the role of reserve ratios; (3) the current three-equation neoclassical consensus, assuming perfect creditworthiness, and hence no need for banks; (4) the analysis of the evolution of money. [source]


    Laser Ablation (193 nm), Purification and Determination of Very Low Concentrations of Solar Wind Nitrogen Implanted in Targets from the GENESIS Spacecraft

    GEOSTANDARDS & GEOANALYTICAL RESEARCH, Issue 2 2009
    Laurent Zimmermann
    The GENESIS space mission recovered ions emitted by the Sun during a 27-month period. In order to extract, purify and determine the very low quantities of solar nitrogen implanted in the GENESIS targets, a new installation was developed and constructed at the CRPG (Nancy, France). It permitted the simultaneous determination of nitrogen and noble gases extracted from the target by laser ablation. The extraction procedure used a 193 nm excimer laser that allowed for surface contamination in the outer 5 nm to be removed, followed by a step that removed 50 nm of the target material, extracting the solar nitrogen and noble gases implanted in the target. Following purification using Ti and Zr getters for noble gases and a Cu-CuO oxidation cycle for N2, the extracted gases were analysed by static mode (pumps closed) mass spectrometry using electron multiplier and Faraday cup detectors. The nitrogen blanks from the purification section and the static line (30 minutes) were only 0.46 picomole and 0.47 picomole, respectively. [source]


    A simple robust numerical integration algorithm for a power-law visco-plastic model under both high and low rate-sensitivity

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 1 2004
    E. A. de Souza Neto
    Abstract This note describes a simple and extremely robust algorithm for numerical integration of the power-law-type elasto-viscoplastic constitutive model discussed by Perić (Int. J. Num. Meth. Eng. 1993; 36: 1365–1393). As the rate-independent limit is approached with increasing exponents, the evolution equations of power-law-type models are known to become stiff. Under such conditions, the solution of the implicitly discretized viscoplastic evolution equation cannot be easily obtained by standard root-finding algorithms. Here, a procedure which proves to be remarkably robust under stiff conditions is obtained by means of a simple logarithmic mapping of the basic backward Euler time-discrete equation for the incremental plastic multiplier. The logarithm-transformed equation is solved by the standard Newton–Raphson scheme combined with a simple bisection procedure which ensures that the iterative guesses for the equation unknown (the incremental equivalent plastic strain) remain within the domain where the transformed equation makes sense. The resulting implementation can handle small and large (up to order 10^6) power-law exponents equally well. This allows its effective use in any situation of practical interest, ranging from high rate-sensitivity to virtually rate-independent conditions. The robustness of the proposed scheme is demonstrated by numerical examples. Copyright © 2003 John Wiley & Sons, Ltd. [source]
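
    The logarithmic mapping described in the abstract can be sketched generically. The residual below is a hypothetical stiff power-law equation chosen for illustration, not Perić's constitutive model; the point is the technique itself: Newton iteration in the log-transformed variable x = ln(g), safeguarded by bisection so that iterates stay inside a bracket where the equation makes sense.

```python
import math

# Newton-Raphson on the log-transformed unknown x = ln(g), with a bisection
# safeguard keeping iterates in a bracket. The residual below is a hypothetical
# stiff power-law equation chosen for illustration, not the Peric model itself.
def solve_log_newton(f, df, lo, hi, tol=1e-12, max_it=100):
    """Find g in (lo, hi) with f(g) = 0, assuming f(lo) and f(hi) differ in sign."""
    xlo, xhi = math.log(lo), math.log(hi)
    x = 0.5 * (xlo + xhi)
    for _ in range(max_it):
        g = math.exp(x)
        r = f(g)
        if abs(r) < tol:
            return g
        # shrink the bracket around the sign change
        if r * f(math.exp(xlo)) < 0.0:
            xhi = x
        else:
            xlo = x
        # Newton step in the log variable: d f(e^x) / dx = f'(g) * g
        drdx = df(g) * g
        x_new = x - r / drdx if drdx != 0.0 else None
        if x_new is None or not (xlo < x_new < xhi):
            x_new = 0.5 * (xlo + xhi)   # fall back to bisection
        x = x_new
    return math.exp(x)

# Stiff residual with a large exponent n; plain Newton in g from a poor start
# overshoots badly, while the log-transformed iteration stays stable.
n, c, s = 200.0, 1e-6, 1.001
f = lambda g: g - c * (s / (1.0 + g)) ** n
df = lambda g: 1.0 + c * n * s ** n * (1.0 + g) ** (-n - 1.0)

root = solve_log_newton(f, df, 1e-20, 1.0)
assert root > 0.0 and abs(f(root)) < 1e-10
```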


    An unconditionally convergent algorithm for the evaluation of the ultimate limit state of RC sections subject to axial force and biaxial bending

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 8 2007
    G. Alfano
    Abstract We present a numerical procedure, based upon a tangent approach, for evaluating the ultimate limit state (ULS) of reinforced concrete (RC) sections subject to axial force and biaxial bending. The RC sections are assumed to be of arbitrary polygonal shape and degree of connection; furthermore, it is possible to keep fixed a given amount of the total load and to find the ULS associated only with the remaining part which can be increased by means of a load multiplier. The solution procedure adopts two nested iterative schemes which, in turn, update the current value of the tentative ultimate load and the associated strain parameters. In this second scheme an effective integration procedure is used for evaluating in closed form, as explicit functions of the position vectors of the vertices of the section, the domain integrals appearing in the definition of the tangent matrix and of the stress resultants. Under mild hypotheses, which are practically satisfied for all cases of engineering interest, the existence and uniqueness of the ULS load multiplier is ensured and the global convergence of the proposed solution algorithm to such value is proved. An extensive set of numerical tests, carried out for rectangular, L-shaped and multicell sections shows the effectiveness of the proposed solution procedure. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    On the formulation of closest-point projection algorithms in elastoplasticity,part I: The variational structure

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 2 2002
    F. Armero
    Abstract We present in this paper the characterization of the variational structure behind the discrete equations defining the closest-point projection approximation in elastoplasticity. Rate-independent and viscoplastic formulations are considered in the infinitesimal and the finite deformation range, the latter in the context of isotropic finite-strain multiplicative plasticity. Primal variational principles in terms of the stresses and stress-like hardening variables are presented first, followed by the formulation of dual principles incorporating explicitly the plastic multiplier. Augmented Lagrangian extensions are also presented allowing a complete regularization of the problem in the constrained rate-independent limit. The variational structure identified in this paper leads to the proper framework for the development of new improved numerical algorithms for the integration of the local constitutive equations of plasticity as it is undertaken in Part II of this work. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Meshless Galerkin analysis of Stokes slip flow with boundary integral equations

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2009
    Xiaolin Li
    Abstract This paper presents a novel meshless Galerkin scheme for modeling incompressible slip Stokes flows in 2D. The boundary value problem is reformulated as boundary integral equations of the first kind, which are then converted into an equivalent variational problem with constraint. We introduce a Lagrangian multiplier to incorporate the constraint and apply the moving least-squares approximations to generate trial and test functions. In this boundary-type meshless method, boundary conditions can be implemented exactly and system matrices are symmetric. Unlike the domain-type method, this Galerkin scheme requires only a nodal structure on the bounding surface of a body for approximation of boundary unknowns. The convergence and abstract error estimates of this new approach are given. Numerical examples are also presented to show the efficiency of the method. Copyright © 2009 John Wiley & Sons, Ltd. [source]
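
    The role of the Lagrange multiplier in such a constrained variational problem can be illustrated in finite dimensions. This is a generic linear-algebra analogue, not the paper's boundary-integral formulation: a symmetric system with one linear constraint enforced via a multiplier, which yields the symmetric saddle-point system characteristic of this approach.

```python
import numpy as np

# Generic finite-dimensional analogue (not the paper's meshless formulation):
# enforce the linear constraint B u = g on the symmetric system A u = f with a
# Lagrange multiplier, which gives the symmetric saddle-point system
#   [ A  B^T ] [u]       [f]
#   [ B   0  ] [lmbda] = [g]
A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
f = np.array([1.0, 2.0])
B = np.array([[1.0, 1.0]])               # constraint: u0 + u1 = 1
g = np.array([1.0])

K = np.block([[A, B.T], [B, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([f, g]))
u, lmbda = sol[:2], sol[2]

assert abs(u.sum() - 1.0) < 1e-12              # constraint satisfied exactly
assert np.allclose(A @ u + lmbda * B[0], f)    # stationarity condition
```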


    Application of second-order adjoint technique for conduit flow problem

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2007
    T. Kurahashi
    Abstract This paper presents a way to obtain the Newton gradient by using a traction given by the perturbation for the Lagrange multiplier. Conventionally, the second-order adjoint model using Hessian/vector products, expressed as the product of the Hessian matrix and the perturbation of the design variables, has been researched (Comput. Optim. Appl. 1995; 4:241–262). However, when the boundary value itself is to be determined, this model cannot be applied directly. Therefore, the conventional second-order adjoint technique is extended to the boundary value determination problem, and the second-order adjoint technique is applied to the conduit flow problem in this paper. As the minimization technique, a Newton-based method is employed. The Broyden–Fletcher–Goldfarb–Shanno (BFGS) method is applied to calculate the Hessian matrix used in the Newton-based method, and a traction given by the perturbation for the Lagrange multiplier is used in the BFGS method. Copyright © 2007 John Wiley & Sons, Ltd. [source]
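
    The BFGS update used to build the Hessian approximation can be sketched as follows. This is only the standard BFGS formula; the paper's adjoint and traction computations are not reproduced, and the quadratic test problem is invented for illustration.

```python
import numpy as np

# Standard BFGS update of an approximate Hessian H (the paper's adjoint and
# traction computations are not reproduced; the quadratic below is made up).
def bfgs_update(H, s, y):
    """s = x_{k+1} - x_k, y = grad_{k+1} - grad_k."""
    Hs = H @ s
    return H - np.outer(Hs, Hs) / (s @ Hs) + np.outer(y, y) / (y @ s)

# Check on a quadratic 0.5 x^T Q x, where y = Q s exactly: after updates along
# the two coordinate directions, H reproduces Q.
Q = np.array([[2.0, 0.0], [0.0, 5.0]])
H = np.eye(2)
for s in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    H = bfgs_update(H, s, Q @ s)
assert np.allclose(H, Q)
```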


    Evaluating Specification Tests for Markov-Switching Time-Series Models

    JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2008
    Daniel R. Smith
    C12; C15; C22 Abstract. We evaluate the performance of several specification tests for Markov regime-switching time-series models. We consider the Lagrange multiplier (LM) and dynamic specification tests of Hamilton (1996) and Ljung–Box tests based on both the generalized residual and a standard-normal residual constructed using the Rosenblatt transformation. The size and power of the tests are studied using Monte Carlo experiments. We find that the LM tests have the best size and power properties. The Ljung–Box tests exhibit slight size distortions, though tests based on the Rosenblatt transformation perform better than the generalized residual-based tests. The tests exhibit impressive power to detect both autocorrelation and autoregressive conditional heteroscedasticity (ARCH). The tests are illustrated with a Markov-switching generalized ARCH (GARCH) model fitted to the US dollar–British pound exchange rate, with the finding that both autocorrelation and GARCH effects are needed to adequately fit the data. [source]
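
    For reference, the Ljung–Box statistic that such diagnostics build on can be sketched directly. This is the generic residual version, not the paper's generalized-residual or Rosenblatt constructions.

```python
import numpy as np

# Generic Ljung-Box Q statistic on a residual series (not the paper's
# generalized-residual construction). Under no autocorrelation, Q is
# approximately chi-square with m degrees of freedom.
def ljung_box(resid, m):
    n = len(resid)
    r = resid - resid.mean()
    denom = (r ** 2).sum()
    acf = np.array([(r[k:] * r[:-k]).sum() / denom for k in range(1, m + 1)])
    return n * (n + 2) * np.sum(acf ** 2 / (n - np.arange(1, m + 1)))

rng = np.random.default_rng(0)
q = ljung_box(rng.standard_normal(500), 10)
# For white noise, q should be near the chi-square mean of m = 10.
```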


    New Improved Tests for Cointegration with Structural Breaks

    JOURNAL OF TIME SERIES ANALYSIS, Issue 2 2007
    Joakim Westerlund
    C12; C32; C33 Abstract. This article proposes Lagrange multiplier-based tests for the null hypothesis of no cointegration. The tests are general enough to allow for heteroskedastic and serially correlated errors, deterministic trends, and a structural break of unknown timing in both the intercept and slope. The limiting distributions of the test statistics are derived, and are found to be invariant not only with respect to the trend and structural break, but also with respect to the regressors. A small Monte Carlo study is also conducted to investigate the small-sample properties of the tests. The results reveal that the tests have small size distortions and good power relative to other tests. [source]


    Cost efficiency and value driver analysis of insurers in an emerging economy

    MANAGERIAL AND DECISION ECONOMICS, Issue 4 2009
    Attiea Marie
    This study investigated cost inefficiencies and their relationship with value drivers of insurers in the United Arab Emirates (UAE). The study revealed that there were 21–33% cost inefficiencies in these insurers under different model specifications of stochastic frontier and DEA; value drivers such as lower leverage risk and lower capital risk significantly improved cost efficiencies, consistent with Basel II norms; ROE positively influenced cost efficiencies, with a further trade-off between increased profit margin, decreased asset utilization and/or reduced equity multiplier by insurer managements to achieve a target ROE; and the trend of cost efficiency was improving during 2000–2004. The study suggests that stock insurers could overcome their cost inefficiencies through adoption of efficient measures such as risk mapping of clients and risk prioritization, besides ALM techniques. The study has direct implications for individual and institutional investors in making their portfolio investment decisions in the insurance sector, and for policymakers and regulators to closely monitor inefficient insurers consistent with Basel II norms. Copyright © 2008 John Wiley & Sons, Ltd. [source]
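
    The profit-margin/asset-utilization/equity-multiplier trade-off mentioned above is the DuPont identity, sketched here with hypothetical figures (not taken from the UAE data).

```python
# Hypothetical insurer figures (not from the study), illustrating the DuPont
# identity behind the profit margin / asset utilization / equity multiplier
# trade-off: ROE = (NI/revenue) * (revenue/assets) * (assets/equity) = NI/equity.
net_income, revenue, assets, equity = 12.0, 150.0, 400.0, 80.0

profit_margin = net_income / revenue       # 0.08
asset_utilization = revenue / assets       # 0.375 (asset turnover)
equity_multiplier = assets / equity        # 5.0   (leverage)

roe = profit_margin * asset_utilization * equity_multiplier
assert abs(roe - net_income / equity) < 1e-12   # both equal 0.15
```

    Any two insurers with the same ROE can thus differ sharply in risk: one via margin, the other via leverage (the equity multiplier).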


    CONSTANT PROPORTION PORTFOLIO INSURANCE IN THE PRESENCE OF JUMPS IN ASSET PRICES

    MATHEMATICAL FINANCE, Issue 3 2009
    Rama Cont
    Constant proportion portfolio insurance (CPPI) allows an investor to limit downside risk while retaining some upside potential by maintaining an exposure to risky assets equal to a constant multiple of the cushion, the difference between the current portfolio value and the guaranteed amount. Whereas in diffusion models with continuous trading this strategy has no downside risk, in real markets this risk is nonnegligible and grows with the multiplier value. We study the behavior of CPPI strategies in models where the price of the underlying portfolio may experience downward jumps. Our framework leads to analytically tractable expressions for the probability of hitting the floor, the expected loss, and the distribution of losses. This makes it possible to measure the gap risk, and it leads to a criterion for adjusting the multiplier based on the investor's risk aversion. Finally, we study the problem of hedging the downside risk of a CPPI strategy using options. The results are applied to a jump-diffusion model with parameters estimated from returns series of various assets and indices. [source]
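
    The CPPI allocation rule is simple to state in code. A minimal sketch with hypothetical numbers: exposure to the risky asset equals the multiplier times the cushion, and a downward jump between rebalancing dates larger than 1/multiplier breaches the floor, which is precisely the gap risk the paper quantifies.

```python
# Minimal sketch of the CPPI rule with hypothetical numbers: risky exposure is
# the multiplier m times the cushion (portfolio value minus the floor). A drop
# between rebalancing dates larger than 1/m pushes the value below the floor,
# which is the gap risk studied in the paper.
def cppi_step(value, floor, m, risky_return, safe_return=0.0):
    cushion = max(value - floor, 0.0)
    exposure = m * cushion                  # held in the risky asset
    safe = value - exposure                 # remainder earns the safe rate
    return exposure * (1 + risky_return) + safe * (1 + safe_return)

v, floor, m = 100.0, 90.0, 4.0
v = cppi_step(v, floor, m, -0.10)   # 10% drop < 1/m = 25%: still above floor
assert v > floor
v = cppi_step(v, floor, m, -0.30)   # 30% jump down > 1/m: floor is breached
assert v < floor
```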


    Simulation of Rayleigh waves in cracked plates

    MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 1 2007
    M. T. Cao
    Abstract The aim of this paper is to develop new numerical procedures to detect micro cracks, or superficial imperfections, in thin plates using excitation by Rayleigh waves. We shall consider a unilateral contact problem between the two sides of the crack in an elastic plate subjected to suitable boundary conditions in order to reproduce a single Rayleigh wave cycle. An approximate solution of this problem will be calculated by using one of the Newmark methods for time discretization and a finite element method for space discretization. To deal with the nonlinearity due to the contact condition, an iterative algorithm involving one multiplier will be used; this multiplier will be approximated by using Newton's techniques. Finally, we will show numerical simulations for both cracked and non-cracked plates. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Uniform stabilization of a one-dimensional hybrid thermo-elastic structure

    MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 14 2003
    Marié Grobbelaar-Van Dalsen
    Abstract This paper is concerned with the stabilization of a one-dimensional hybrid thermo-elastic structure consisting of an extensible thermo-elastic beam which is hinged at one end with a rigid body attached to its free end. The model takes account of the effect of stretching on bending and rotational inertia. The property of uniform stability of the energy associated with the model is asserted by constructing an appropriate Lyapunov functional for an abstract second order evolution problem. Critical use is made of a multiplier of an operator theoretic nature, which involves the fractional power A^(-1/2) of the bi-harmonic operator pair A acting in the abstract evolution problem. An explicit decay rate of the energy is obtained. Copyright © 2003 John Wiley & Sons, Ltd. [source]


    Product Innovation and Irregular Growth Cycles with Excess Capacity

    METROECONOMICA, Issue 4 2001
    Gang Gong
    A non-linear growth model is presented that provides a solution to Harrod's knife-edge problem within the Keynesian multiplier–accelerator framework. The model introduces product innovation into a conventional investment function. Statistical analysis demonstrates that such an investment function matches the data better than the conventional one. Dynamic analysis shows that irregular growth cycles could occur with excess capacity. [source]


    Frequency-multiplier design using negative-image device models

    MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 11 2010
    Nam-Tae Kim
    Abstract This article presents a novel design methodology for wireless frequency multipliers using negative-image device models applicable to nonlinear devices. Negative-image device models of nonlinear devices are generated by incorporating optimization techniques into a hypothetical negative-image multiplier model. The negative-image device-modeling methodology provides the following advantages over previously developed techniques: (1) It can predict achievable multiplier performance in the device-modeling stage and (2) it provides an accurate starting point for the synthesis of impedance-matching networks. The negative-image device-modeling method is described, and its application to the design of a field-effect transistor (FET) frequency multiplier is presented. Results of an experimental implementation of the multiplier demonstrate the effectiveness of the proposed methodology. © 2010 Wiley Periodicals, Inc. Microwave Opt Technol Lett 52:2544–2548, 2010; View this article online at wileyonlinelibrary.com. DOI 10.1002/mop.25521 [source]


    Simultaneous solution of Lagrangean dual problems interleaved with preprocessing for the weight constrained shortest path problem

    NETWORKS: AN INTERNATIONAL JOURNAL, Issue 4 2009
    Ranga Muhandiramge
    Abstract Conventional Lagrangean preprocessing for the network Weight Constrained Shortest Path Problem (WCSPP), for example Beasley and Christofides (Beasley and Christofides, Networks 19 (1989), 379–394), calculates lower bounds on the cost of using each node and edge in a feasible path using a single optimal Lagrange multiplier for the relaxation of the WCSPP. These lower bounds are used in conjunction with an upper bound to eliminate nodes and edges. However, for each node and edge, a Lagrangean dual problem exists whose solution may differ from the relaxation of the full problem. Thus, using one Lagrange multiplier does not offer the best possible network reduction. Furthermore, eliminating nodes and edges from the network may change the Lagrangean dual solutions in the remaining reduced network, warranting an iterative solution and reduction procedure. We develop a method for solving the related Lagrangean dual problems for each edge simultaneously, which is iterated with eliminating nodes and edges. We demonstrate the effectiveness of our method computationally: we test it against several others and show that it both reduces solve time and the number of intractable problems encountered. We use a modified version of Carlyle and Wood's (38th Annual ORSNZ Conference, Hamilton, New Zealand, November, 2003) enumeration algorithm in the gap closing stage. We also make improvements to this algorithm and test them computationally. © 2009 Wiley Periodicals, Inc. NETWORKS, 2009 [source]
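
    The single-multiplier lower bound that the authors improve upon can be sketched on a toy graph. For a fixed multiplier lam, relaxing the weight constraint gives an ordinary shortest-path problem under the combined cost c + lam*w; the graph and numbers below are hypothetical, not from the paper.

```python
import heapq

# Hypothetical toy instance (not the paper's algorithm or data): for a fixed
# Lagrange multiplier lam, relaxing the weight constraint of the WCSPP gives an
# ordinary shortest-path problem under combined edge cost c + lam*w, and
# L(lam) = sp(c + lam*w) - lam*W is a lower bound on the constrained optimum.
def dijkstra(adj, src, n):
    dist = [float("inf")] * n
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# edges: (u, v, cost, weight); weight budget W along a 0 -> 3 path
edges = [(0, 1, 1.0, 5.0), (1, 3, 1.0, 5.0), (0, 2, 4.0, 1.0), (2, 3, 4.0, 1.0)]
W = 4.0

def lower_bound(lam):
    adj = [[] for _ in range(4)]
    for u, v, c, w in edges:
        adj[u].append((v, c + lam * w))
    return dijkstra(adj, 0, 4)[3] - lam * W

# lam = 0 ignores the budget (cheap-but-heavy path, weight 10 > 4, infeasible);
# lam = 1 prices weight in, raising the bound toward the feasible optimum of 8.
assert lower_bound(0.0) == 2.0
assert lower_bound(1.0) == 6.0
```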


    Algorithms for vector field generation in mass consistent models

    NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 4 2010
    Ciro Flores
    Abstract Diagnostic models in meteorology are based on the fulfillment of some time-independent physical constraints such as, for instance, mass conservation. A successful method to generate an adjusted wind field, based on the mass conservation equation, was proposed by Sasaki and leads to the solution of an elliptic problem for the multiplier. Here we study the problem of generating an adjusted wind field from given horizontal initial velocity data in two ways. The first is based on orthogonal projection in Hilbert spaces and leads to the same elliptic problem but with natural boundary conditions for the multiplier. We derive from this approach the so-called E-algorithm. An innovative alternative proposal is obtained from a second approach where we consider the saddle-point formulation of the problem, avoiding boundary conditions for the multiplier, and solve this problem by iterative conjugate gradient methods. This leads to an algorithm that we call the CG-algorithm, which is inspired by Glowinski's approach to solving Stokes-like problems in computational fluid dynamics. Finally, the introduction of new boundary conditions for the multiplier in the elliptic problem generates better adjusted fields than those obtained with the original boundary conditions. © 2009 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 2010 [source]


    The boundary element method with Lagrangian multipliers

    NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 6 2009
    Gabriel N. Gatica
    Abstract On open surfaces, the energy space of hypersingular operators is a fractional order Sobolev space of order 1/2 with homogeneous Dirichlet boundary condition (along the boundary curve of the surface) in a weak sense. We introduce a boundary element Galerkin method where this boundary condition is incorporated via the use of a Lagrangian multiplier. We prove the quasi-optimal convergence of this method (it is slightly inferior to the standard conforming method) and underline the theory by a numerical experiment. The approach presented in this article is not meant to be a competitive alternative to the conforming method but rather the basis for nonconforming techniques like the mortar method, to be developed. © 2008 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2009 [source]


    On the mixed finite element method with Lagrange multipliers

    NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 2 2003
    Ivo Babuška
    Abstract In this note we analyze a modified mixed finite element method for second-order elliptic equations in divergence form. As a model we consider the Poisson problem with mixed boundary conditions in a polygonal domain of R2. The Neumann (essential) condition is imposed here in a weak sense, which yields the introduction of a Lagrange multiplier given by the trace of the solution on the corresponding boundary. This approach allows one to handle nonhomogeneous Neumann boundary conditions, theoretically and computationally, in an alternative and usually easier way. Then we utilize the classical Babuška–Brezzi theory to show that the resulting mixed variational formulation is well posed. In addition, we use Raviart-Thomas spaces to define the associated finite element method and, applying some elliptic regularity results, we prove the stability, unique solvability, and convergence of this discrete scheme, under appropriate assumptions on the mesh sizes. Finally, we provide numerical results illustrating the performance of the algorithm for smooth and singular problems. © 2003 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 19: 192–210, 2003 [source]


    Refined mixed finite element method for the elasticity problem in a polygonal domain

    NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 3 2002
    M. Farhloul
    Abstract The purpose of this article is to study a mixed formulation of the elasticity problem in plane polygonal domains and its numerical approximation. In this mixed formulation the strain tensor is introduced as a new unknown and its symmetry is relaxed by a Lagrange multiplier, which is nothing else than the rotation. Because of the corner points, the displacement field is not regular in general in the vicinity of the vertices but belongs to some weighted Sobolev space. Using this information, appropriate refinement rules are imposed on the family of triangulations in order to recapture optimal error estimates. Moreover, uniform error estimates in the Lamé coefficient λ are obtained for λ large. © 2002 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 18: 323–339, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/num.10009 [source]


    OMITTED VARIABLES, CONFIDENCE INTERVALS, AND THE PRODUCTIVITY OF EXCHANGE RATES

    PACIFIC ECONOMIC REVIEW, Issue 1 2007
    Jonathan E. Leightner
    This paper develops confidence intervals for BD-RTPLS and uses BD-RTPLS to estimate the relationship between the exchange rate (e) and gross domestic product (GDP) using annual data from 1984 to 2000 for 23 developing Asian and Pacific countries. BD-RTPLS produces estimates for the exchange rate multiplier (dGDP/de) for these countries and shows how omitted variables affected these multipliers across countries and over time. [source]


    Multiplicative congruential generators, their lattice structure, its relation to lattice–sublattice transformations and applications in crystallography

    ACTA CRYSTALLOGRAPHICA SECTION A, Issue 6 2009
    Wolfgang Hornfeck
    An analysis of certain types of multiplicative congruential generators, otherwise known for their application to the sequential generation of pseudo-random numbers, reveals their relation to the coordinate description of lattice points in two-dimensional primitive sublattices. Taking the index of the lattice–sublattice transformation as the modulus of the multiplicative congruential generator, there are special choices for its multiplier which induce a symmetry-preserving permutation of lattice-point coordinates. From an analysis of similar sublattices with hexagonal and square symmetry it is conjectured that the cycle structure of the permutation has its crystallographic counterpart in the description of crystallographic orbits. Some applications of multiplicative congruential generators in structural chemistry and biology are discussed. [source]
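
    A minimal sketch of the object under study: the multiplicative congruential map x → a·x (mod m), whose orbit through the nonzero residues exhibits the cycle structure discussed. The modulus and multiplier below are toy values, not taken from the article's sublattice analysis.

```python
# Toy modulus and multiplier (not from the article): the map x -> a*x (mod m)
# permutes the nonzero residues when gcd(a, m) = 1, and its cycle structure is
# what the analysis above relates to crystallographic orbits.
def mcg_orbit(a, m, seed):
    """Orbit of `seed` under repeated multiplication by `a` modulo `m`."""
    orbit, x = [], seed
    while x not in orbit:
        orbit.append(x)
        x = (a * x) % m
    return orbit

a, m = 3, 7          # 3 is a primitive root mod 7, so one full cycle results
orbit = mcg_orbit(a, m, 1)
print(orbit)         # [1, 3, 2, 6, 4, 5]: a single cycle over all nonzero residues
```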


    Testing for stationarity in heterogeneous panel data

    THE ECONOMETRICS JOURNAL, Issue 2 2000
    Kaddour Hadri
    This paper proposes a residual-based Lagrange multiplier (LM) test for a null that the individual observed series are stationary around a deterministic level or around a deterministic trend, against the alternative of a unit root in panel data. The tests, which are asymptotically similar under the null, belong to the locally best invariant (LBI) test statistics. The asymptotic distributions of the statistics are derived under the null and are shown to be normally distributed. Finite sample sizes and powers are considered in a Monte Carlo experiment. The empirical sizes of the tests are close to the true size even in small samples. The testing procedure is easy to apply, including to panel data models with fixed effects, individual deterministic trends and heterogeneous errors across cross-sections. It is also shown how to apply the tests to the more general case of serially correlated disturbance terms. [source]


    Monetary Policy Impulses and Retail Interest Rate Pass-Through in Asian Banking Markets

    ASIAN ECONOMIC JOURNAL, Issue 3 2010
    Kuan-Min Wang
    C23; E43; E52; E58; F36 This paper considers the integration of financial markets and mutual influences of monetary policies in the USA and Asia based on monthly data from 1994 to 2007. We used panel-type, time-series, and quantile panel-type error correction models to test the influences of expected and unexpected monetary policy impulses on the interest rate pass-through mechanism in the financial markets of 9 Asian countries and the USA. The empirics show that if interest rate integration exists in the financial markets, the following effects are observed: (i) positive impulses of unexpected monetary policy will lead to an increase in the long-run multiplier of the retail interest rate; (ii) the adjustment of retail interest rates with short-run disequilibrium will lead to an increase in the long-run markup; and (iii) the empirical results of quantile regression prove that when the interest variation is greater than the 0.5th quantile and unexpected monetary policy impulses are greater than the expected monetary policy impulses, the short-run interest rate pass-through mechanism becomes more unstable. [source]


    On the Quantile Regression Based Tests for Asymmetry in Stock Return Volatility

    ASIAN ECONOMIC JOURNAL, Issue 2 2002
    Beum-Jo Park
    This paper attempts to examine whether the asymmetry of stock return volatility varies with the level of volatility. Thus, quantile regression based tests (τ-tests) are presupposed. These tests differ from the diagnostic tests introduced by Engle and Ng (1993) insofar as they can provide a complete picture of asymmetries in volatility across quantiles of the variance distribution and, in the case of non-normal errors, they have improved power due to their robustness against non-normality. Small-scale Monte Carlo evidence suggests that the Wald and likelihood ratio (LR) tests among the τ-tests are reasonable, showing that they outperform the Lagrange multiplier (LM) test based on least squares residuals when the innovations exhibit heavy tails. Using the normalized residuals obtained from AR(1)-GARCH(1, 1) estimation, the test results demonstrated that only the TOPIX out of six stock-return series had asymmetry in volatility at a moderate level, while all stock return series except the FAZ and FA100 had more significant asymmetry in volatility at higher levels. Interestingly, it is clear from the empirical findings that, like the hypothesis of leverage effects, volatility of the TOPIX, CAC40, and MIB tends to respond significantly to extremely negative shocks at high levels, but is not correlated with any positive shock. These might be valuable findings that have not been seriously considered in past research, which has focussed only on the mean level of volatility. [source]
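
    For orientation, the check ("pinball") loss that defines quantile regression can be sketched: minimizing its sample mean over a constant recovers the sample quantile. This illustrates the machinery behind quantile regression based tests generally, not the paper's specific test statistics.

```python
import numpy as np

# Minimal sketch of the check ("pinball") loss behind quantile regression:
# rho_tau(u) = u * (tau - 1{u < 0}). Minimizing its sample mean over a
# constant c yields (approximately, on a grid) the tau-th sample quantile.
def pinball(u, tau):
    return np.mean(u * (tau - (u < 0)))

rng = np.random.default_rng(1)
x = rng.standard_normal(10_001)
tau = 0.9

grid = np.linspace(x.min(), x.max(), 2001)
c_best = grid[np.argmin([pinball(x - c, tau) for c in grid])]
assert abs(c_best - np.quantile(x, tau)) < 0.02
```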


    Force/motion sliding mode control of three typical mechanisms

    ASIAN JOURNAL OF CONTROL, Issue 2 2009
    Rong-Fong Fung
    Abstract This paper proposes a sliding mode control (SMC) algorithm for trajectory tracking of the slider-crank mechanism, quick-return mechanism, and toggle mechanism. First, the dynamic models suitable for control of both the motion and the constrained force are derived using Hamilton's principle, the Lagrange multiplier, and implicit function theory. Second, the SMC is designed to ensure the input torques can achieve trajectory tracking on the constrained surfaces with specific constraint forces. Finally, simulation results verify the effectiveness of the developed method for force/motion control of these three typical mechanisms. Copyright © 2009 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society [source]


    An estimate of the number of inmate separations from Australian prisons 2000/01 and 2005/06

    AUSTRALIAN AND NEW ZEALAND JOURNAL OF PUBLIC HEALTH, Issue 3 2010
    Kristy A. Martire
    Abstract Objective: To estimate the annual number of inmate separations from correctional centres in Australia in 2000/01 and 2005/06. Methods: Data on separations were obtained from the websites of each State and Territory government department responsible for prisons. Data on state and national prison population were obtained from the website of the Australian Bureau of Statistics. Three different methods of estimation (multiplier, multiplier adjusted for remand separations and back-projection) were applied to State, Territory and national data on prison population and separations in Australia. Results: The median estimate (to the nearest thousand) of the number of inmate separations was 42,000 in 2000/01 and 44,000 in 2005/06. Conclusions: While the precise figures ought to be interpreted with some caution, our estimates suggest that approximately 44,500 separations from prison occurred in Australia in 2005/06. Each of these separation episodes is accompanied by an elevated risk of mortality; therefore, these figures represent a substantial public health concern. [source]
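
    The multiplier method referred to above amounts to simple arithmetic; a sketch with hypothetical inputs (the population and turnover figures below are invented, not the study's data):

```python
# Hypothetical inputs (invented, not the study's data): the multiplier method
# scales the prison population by a turnover multiplier, i.e. separations per
# inmate per year, taken from jurisdictions reporting both quantities.
population = 25_000          # average daily prison population
turnover_multiplier = 1.75   # separations per inmate-year

separations = population * turnover_multiplier
print(int(separations))      # 43750, the same order as the paper's ~44,000
```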