Theorem
Selected Abstracts

A NON-SUBSTITUTION THEOREM WITH NON-CONSTANT RETURNS TO SCALE AND EXTERNALITIES
METROECONOMICA, Issue 1 2005
Takao Fujimoto
ABSTRACT An input-output model with non-constant returns to scale and externalities is presented, and it is shown that in this model the non-substitution theorem is still valid. More precisely, the quantity side of the theorem, i.e. the proposition on efficiency, remains valid, while there can be no equilibrium prices independent of final demand. [source]

EXCESS-ENTRY THEOREM: THE IMPLICATIONS OF LICENSING*
THE MANCHESTER SCHOOL, Issue 6 2008
ARIJIT MUKHERJEE
We show that, in the presence of technology licensing, entry in an industry with Cournot competition may lead to a socially insufficient number of firms. Insufficient entry occurs if the own marginal cost of the entrant is sufficiently high. Hence, anticompetitive entry regulation motivated by the standard excess-entry result may not be justified in the presence of licensing. However, if the own marginal cost of the entrant is very low, licensing may create excessive entry for those entry costs where entry does not occur without licensing; thus licensing reduces social welfare even though it increases competition. [source]

Enhancing Bounding Volumes using Support Plane Mappings for Collision Detection
COMPUTER GRAPHICS FORUM, Issue 5 2010
Athanasios Vogiannou
Abstract In this paper we present a new method for improving the performance of the widely used Bounding Volume Hierarchies for collision detection. The major contribution of our work is a culling algorithm that serves as a generalization of the Separating Axis Theorem for non-parallel axes, based on the well-known concept of support planes. We also provide a rigorous definition of support plane mappings and implementation details regarding the application of the proposed method to commonly used bounding volumes.
The paper describes the theoretical foundation and an overall evaluation of the proposed algorithm, demonstrating its high culling efficiency and, in its application, a significant improvement in timing performance with different types of bounding volumes and support plane mappings for rigid body simulations. [source]

A Methodology for Assessing Transportation Network Terrorism Risk with Attacker and Defender Interactions
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 6 2010
Pamela M. Murray-Tuite
Decision makers need a methodology that can capture the complex attacker-defender interactions and help them understand the overall effects on the transportation system, as well as the consequences of asset failure. This article presents such a methodology, which uses probabilities of target-attack method combinations that are degree-of-belief based and updated using Bayes' Theorem after evidence of the attack is obtained. Monte Carlo simulation generates the probability of link capacity effects by sampling from distributions of capacity reductions due to preevent security measures, substitutions, target failure, and postevent security measures. The average capacity reduction for a particular target-attack method combination serves as an input to the traffic assignment-simulation package DYNASMART-P to determine travel time effects. The methodology is applied to a sample network based on the northern Virginia area. Results based on notional data indicated that preevent security measures reduced attack probabilities, but in some cases increased the mobility consequences. Thus, decision makers must carefully evaluate the effects of their decisions. [source]

Misunderstanding Gödel: New Arguments about Wittgenstein and New Remarks by Wittgenstein
DIALECTICA, Issue 3 2003
Victor Rodych
The long-standing issue of Wittgenstein's controversial remarks on Gödel's Theorem has recently heated up in a number of different and interesting directions [(Floyd and Putnam
2000), (Steiner, 2001), (Floyd, 2001)]. In their (2000), Juliet Floyd and Hilary Putnam purport to argue that Wittgenstein's 'notorious' remark (RFM App. III, §8) "contains a philosophical claim of great interest," namely, "if one assumed ... that ¬P is provable in Russell's system one should ... give up the 'translation' of P by the English sentence 'P is not provable'," because if ¬P is provable in PM, PM is ω-inconsistent, and if PM is ω-inconsistent, we cannot translate 'P' as 'P is not provable in PM', because the predicate 'NaturalNo.(x)' in 'P' cannot be 'interpreted' as "x is a natural number." Though Floyd and Putnam do not clearly distinguish the two tasks, they also argue for "the Floyd-Putnam Thesis," namely, that in the 1930s Wittgenstein had a particular (correct) understanding of Gödel's First Incompleteness Theorem. In this paper, I endeavour to show, first, that the most natural and most defensible interpretation of Wittgenstein's (RFM App. III, §8) and the rest of (RFM App. III) is incompatible with the Floyd-Putnam attribution and, second, that evidence from Wittgenstein's Nachlass (i.e., a hitherto unknown "proof sketch" of Gödel's reasoning, Wittgenstein's only mention of ω-inconsistency, and Wittgenstein's only mention of "K provable") strongly indicates that the Floyd-Putnam attribution and the Floyd-Putnam Thesis are false. By way of this examination, we shall see that, despite a failure to properly understand Gödel's proof (perhaps because, as Kreisel says, Wittgenstein did not read Gödel's 1931 paper prior to 1942), Wittgenstein's 1937-38, 1941 and 1944 remarks indicate that Gödel's result makes no sense from Wittgenstein's own (idiosyncratic) perspective. [source]

Strongly Consistent Self-Confirming Equilibrium
ECONOMETRICA, Issue 2 2010
Yuichiro Kamada
Fudenberg and Levine (1993a) introduced the notion of self-confirming equilibrium, which is generally less restrictive than Nash equilibrium.
Fudenberg and Levine also defined a concept of consistency, and claimed in their Theorem 4 that with consistency and other conditions on beliefs, a self-confirming equilibrium has a Nash equilibrium outcome. We provide a counterexample that disproves Theorem 4 and prove an alternative by replacing consistency with a more restrictive concept, which we call strong consistency. In games with observed deviators, self-confirming equilibria are strongly consistent self-confirming equilibria. Hence, our alternative theorem ensures that, despite the counterexample, the corollary of Theorem 4 is still valid. [source]

Bargaining without a Common Prior: An Immediate Agreement Theorem
ECONOMETRICA, Issue 3 2003
Muhamet Yildiz
In sequential bargaining models without outside options, each player's bargaining power is ultimately determined by which player will make an offer and when. This paper analyzes a sequential bargaining model in which players may hold different beliefs about which player will make an offer and when. Excessive optimism about making offers in the future can cause delays in agreement. The main result states that, despite this, if players will remain sufficiently optimistic for a sufficiently long future, then in equilibrium they will agree immediately. This result is also extended to other canonical models of optimism. [source]

Bayes' Theorem to estimate population prevalence from Alcohol Use Disorders Identification Test (AUDIT) scores
ADDICTION, Issue 7 2009
David R. Foxcroft
ABSTRACT Aim The aim in this methodological paper is to demonstrate, using Bayes' Theorem, an approach to estimating the difference in prevalence of a disorder in two groups whose test scores are obtained, illustrated with data from a college student trial in which 12-month outcomes are reported for the Alcohol Use Disorders Identification Test (AUDIT).
Method Using known population prevalence as a background probability and diagnostic accuracy information for the AUDIT scale, we calculated the post-test probability of alcohol abuse or dependence for study participants. The difference in post-test probability between the study intervention and control groups indicates the effectiveness of the intervention in reducing alcohol use disorder rates. Findings In the illustrative analysis, at 12-month follow-up there was a mean AUDIT score difference of 2.2 points between the intervention and control groups: an effect size of unclear policy relevance. Using Bayes' Theorem, the post-test probability mean difference between the two groups was 9% (95% confidence interval 3-14%). Interpreted as a prevalence reduction, this is more easily evaluated by policy makers and clinicians. Conclusion Important information on the probable differences in real-world prevalence and impact of prevention and treatment programmes can be produced by applying Bayes' Theorem to studies where diagnostic outcome measures are used. However, the usefulness of this approach relies upon good information on the accuracy of such diagnostic measures for target conditions. [source]

Eigenvalue analysis of temperature distribution in composite walls
INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 13 2001
Galip Oturanç
Abstract The transient heat conduction problem in a two-layer composite wall is solved analytically using spectral analysis. Eigenvalues and corresponding eigenfunctions of the spectral problem for the temperature distribution in composite walls are analysed using the Rouché Theorem. The number of eigenvalues is obtained, and the temperature distribution for this complicated problem is given by a formula with calculated eigenvalues. The analytical solution obtained is in explicit form and provides easy determination of the temperature rise in heating and thawing applications of composite materials. Copyright © 2001 John Wiley & Sons, Ltd.
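The post-test probability calculation described in the AUDIT abstract above follows directly from Bayes' Theorem. A minimal sketch is below; the prevalence, sensitivity and specificity values are illustrative assumptions, not figures from the study.

```python
def post_test_probability(prevalence, sensitivity, specificity):
    """Probability of the disorder given a positive test, via Bayes' Theorem.

    prevalence  -- background (pre-test) probability of the disorder
    sensitivity -- P(test positive | disorder present)
    specificity -- P(test negative | disorder absent)
    """
    true_positive_mass = sensitivity * prevalence
    false_positive_mass = (1.0 - specificity) * (1.0 - prevalence)
    return true_positive_mass / (true_positive_mass + false_positive_mass)

# Illustrative values only (not taken from the AUDIT trial):
control = post_test_probability(prevalence=0.30, sensitivity=0.90, specificity=0.80)
intervention = post_test_probability(prevalence=0.20, sensitivity=0.90, specificity=0.80)
print(round(control - intervention, 3))  # group difference in post-test probability
```

Reporting the group difference on this probability scale, rather than as a raw score difference, is what the abstract argues makes the result easier for policy makers to interpret.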
[source]

Decision making beyond Arrow's "impossibility theorem," with the analysis of effects of collusion and mutual attraction
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 1 2009
Hung T. Nguyen
In 1951, K. J. Arrow proved that, under certain assumptions, it is impossible to have group decision-making rules that satisfy reasonable conditions like symmetry. This Impossibility Theorem is often cited as a proof that reasonable group decision-making is impossible. We start our article by remarking that Arrow's result covers only those situations in which the only information we have about individual preferences is their binary preferences between the alternatives. If we follow the main ideas of modern decision making and game theory and also collect information about the preferences between lotteries (i.e., collect the utility values of different alternatives), then reasonable decision-making rules are possible, e.g., Nash's rule, in which we select an alternative for which the product of utilities is the largest possible. We also deal with two related issues: how we can detect individual preferences if all we have is preferences of a subgroup, and how we take into account the mutual attraction between participants. © 2008 Wiley Periodicals, Inc. [source]

Static output feedback sliding mode control for time-varying delay systems with time-delayed nonlinear disturbances
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 7 2010
X. G. Yan
Abstract In this paper, a robust stabilization problem for a class of linear time-varying delay systems with disturbances is studied using sliding mode techniques. Both matched and mismatched disturbances, involving time-varying delay, are considered. The disturbances are nonlinear and have nonlinear bounds which are employed for the control design. A sliding surface is designed, and the stability of the corresponding sliding motion is analysed based on the Razumikhin Theorem.
Then a static output feedback sliding mode control with time delay is synthesized to drive the system to the sliding surface in finite time. Conservatism is reduced by exploiting features of sliding mode control and the system structure. Simulation results show the effectiveness of the proposed approach. Copyright © 2009 John Wiley & Sons, Ltd. [source]

An Aggregation Theorem for the Valuation of Equity Under Linear Information Dynamics
JOURNAL OF BUSINESS FINANCE & ACCOUNTING, Issue 3-4 2003
David Ashton
We state an Aggregation Theorem which shows that the recursion value of equity is functionally proportional to its adaptation value. Since the recursion value of equity is equal to its book value plus the expected present value of its abnormal earnings, it follows that the adaptation value of equity can normally be determined by a process of simple quadrature. We demonstrate the application of the Aggregation Theorem using two stochastic processes. The first uses the linear information dynamics of the Ohlson (1995) model. The second uses linear information dynamics based on the Cox, Ingersoll and Ross (1985) 'square root' process. Both these processes lead to closed-form expressions for the adaptation and overall market value of equity. There are, however, many other processes which are compatible with the Aggregation Theorem. These all show that the market value of equity will be a highly convex function of its recursion value. The empirical evidence we report for UK companies largely supports the convexity hypothesis. [source]

The last excluded case of Dirac's map-color theorem for choosability
JOURNAL OF GRAPH THEORY, Issue 4 2006
Daniel Král'
Abstract In 1890, Heawood established the upper bound H(ε) on the chromatic number of every graph embedded on a surface of Euler genus ε ≥ 1. Almost 80 years later, the bound was shown to be tight by Ringel and Youngs. These two results have become known under the name of the Map-Color Theorem.
In 1956, Dirac refined this by showing that the upper bound H(ε) is attained only if the graph G contains K_{H(ε)} as a subgraph, except for three surfaces. Albertson and Hutchinson settled these excluded cases in 1979. This result is nowadays known as Dirac's Map-Color Theorem. Böhme, Mohar, and Stiebitz extended Dirac's Map-Color Theorem to the case of choosability by showing that G is (H(ε) − 1)-choosable unless G contains K_{H(ε)} as a subgraph, for ε ≥ 1 and ε ≠ 3. In the present paper, we settle the excluded case of ε = 3. © 2005 Wiley Periodicals, Inc. J Graph Theory 51: 319-354, 2006 [source]

Regular matroid decomposition via signed graphs
JOURNAL OF GRAPH THEORY, Issue 1 2005
Jim Geelen
Abstract The key to Seymour's Regular Matroid Decomposition Theorem is his result that each 3-connected regular matroid with no R10- or R12-minor is graphic or cographic. We present a proof of this in terms of signed graphs. © 2004 Wiley Periodicals, Inc. J Graph Theory 48: 74-84, 2005 [source]

Degree sequence conditions for equal edge-connectivity and minimum degree, depending on the clique number
JOURNAL OF GRAPH THEORY, Issue 3 2003
Lutz Volkmann
Abstract Using the well-known Theorem of Turán, we present in this paper degree sequence conditions for the equality of edge-connectivity and minimum degree, depending on the clique number of a graph. Different examples will show that these conditions are best possible and independent of all the known results in this area. © 2003 Wiley Periodicals, Inc. J Graph Theory 42: 234-245, 2003 [source]

Girth and fractional chromatic number of planar graphs
JOURNAL OF GRAPH THEORY, Issue 3 2002
Amir Pirnazar
Abstract In 1959, even before the Four-Color Theorem was proved, Grötzsch showed that planar graphs with girth at least 4 have chromatic number at most 3. We examine the fractional analogue of this theorem and its generalizations.
For any fixed girth, we ask for the largest possible fractional chromatic number of a planar graph with that girth, and we provide upper and lower bounds for this quantity. © 2002 Wiley Periodicals, Inc. J Graph Theory 39: 201-217, 2002; DOI 10.1002/jgt10024 [source]

Diffusion of feed spray in fluid catalytic cracker riser
AICHE JOURNAL, Issue 4 2010
Yiping Fan
Abstract For a fluid catalytic cracker (FCC) riser reactor, the diffusion pattern of the feed spray and the flow features of the catalysts in the feed injection zone were investigated in a cold-riser model made of 186-mm ID plexiglass pipe. In the feed injection zone, when a feed spray is introduced into the riser, a secondary flow of the spray occurs. The secondary flow extends at first and then merges into the mainstream of the spray. The occurrence of the secondary flow enhances the mixing of catalysts with feed. However, the extension of the secondary flow causes violent catalyst backmixing, which is believed to be harmful to the FCC reaction. The generation of the secondary flow of the feed spray was analysed theoretically using the Kutta-Joukowski Lift Theorem. Furthermore, an FCC feed nozzle which can control/utilize the secondary flow in the riser was proposed. The effects of the nozzle used in some commercial FCC units are quite desirable. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]

Intermediate Preferences and Behavioral Conformity in Large Games
JOURNAL OF PUBLIC ECONOMIC THEORY, Issue 1 2009
GUILHERME CARMONA
Motivated by Wooders, Cartwright, and Selten (2006), we consider games with a continuum of players and intermediate preferences. We show that any such game has a Nash equilibrium that induces a partition of the set of attributes into a bounded number of convex sets with the following property: all players with an attribute in the interior of the same element of the partition play the same action.
We then use this result to show that all sufficiently large, equicontinuous games with intermediate preferences have an approximate equilibrium with the same property. Our result on behavioral conformity for large finite games generalizes Theorem 3 of Wooders et al. (2006) by allowing both a wider class of preferences and a more general attribute space. [source]

Nonlinear error correction models
JOURNAL OF TIME SERIES ANALYSIS, Issue 5 2002
ALVARO ESCRIBANO
The relationship between cointegration and error correction (EC) models is well characterized in a linear context, but the extension to the nonlinear context is still a challenge. Few extensions of the linear framework have been made in the context of nonlinear error correction (NEC) or asymmetric and time-varying error correction models. In this paper, we propose a theoretical framework based on the concept of near epoch dependence (NED) that allows us to formally address these issues. In particular, we partially extend the Granger Representation Theorem to the nonlinear case. [source]

DUALITY IN OPTIMAL INVESTMENT AND CONSUMPTION PROBLEMS WITH MARKET FRICTIONS
MATHEMATICAL FINANCE, Issue 2 2007
I. Klein
In the style of Rogers (2001), we give a unified method for finding the dual problem in a given model by stating the problem as an unconstrained Lagrangian problem. In a theoretical part we prove our main theorem, Theorem 3.1, which shows that under a number of conditions the values of the dual and primal problems are equal. The theoretical setting is sufficiently general to be applied to a large number of examples including models with transaction costs, such as Cvitanic and Karatzas (1996) (which could not be covered by the setting in Rogers [2001]). To apply the general result one has to verify the assumptions of Theorem 3.1 for each concrete example. We show how the method applies for two examples, first Cuoco and Liu (1992) and second Cvitanic and Karatzas (1996).
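Nash's rule, mentioned in the group decision-making abstract above, is straightforward to state computationally: pick the alternative whose product of individual utilities is largest. A minimal sketch follows; the alternatives and utility values are made-up illustrative data, not taken from the article.

```python
import math

def nash_rule(utilities):
    """Select the alternative maximizing the product of participants' utilities.

    `utilities` maps each alternative to the list of utility values that the
    participants assign to it.
    """
    return max(utilities, key=lambda alt: math.prod(utilities[alt]))

# Made-up utilities for three alternatives and three participants:
prefs = {"a": [0.9, 0.1, 0.9], "b": [0.6, 0.6, 0.6], "c": [0.8, 0.3, 0.7]}
print(nash_rule(prefs))  # "b": product 0.216 beats 0.081 ("a") and 0.168 ("c")
```

Note how the product penalizes alternatives that leave any one participant with a very low utility, which is the sense in which the rule is "reasonable" despite Arrow's Impossibility Theorem: it uses cardinal utility information that Arrow's binary-preference setting excludes.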
[source]

The Fundamental Theorem of Asset Pricing under Proportional Transaction Costs in Finite Discrete Time
MATHEMATICAL FINANCE, Issue 1 2004
Walter Schachermayer (Article first published online: 24 DEC 200)
We prove a version of the Fundamental Theorem of Asset Pricing which applies to Kabanov's modeling of foreign exchange markets under transaction costs. The financial market is described by a d×d matrix-valued stochastic process (Π_t)_{t=0}^T specifying the mutual bid and ask prices between d assets. We introduce the notion of "robust no arbitrage," which is a version of the no-arbitrage concept, robust with respect to small changes of the bid-ask spreads of (Π_t)_{t=0}^T. The main theorem states that the bid-ask process (Π_t)_{t=0}^T satisfies the robust no-arbitrage condition iff it admits a strictly consistent pricing system. This result extends the theorems of Harrison-Pliska and Kabanov-Stricker pertaining to the case of finite Ω, as well as the theorem of Dalang, Morton, and Willinger and Kabanov, Rásonyi, and Stricker pertaining to the case of general Ω. An example of a 5×5-dimensional process (Π_t)_{t=0}^2 shows that, in this theorem, the robust no-arbitrage condition cannot be replaced by the so-called strict no-arbitrage condition, thus answering negatively a question raised by Kabanov, Rásonyi, and Stricker. [source]

A Fundamental Theorem of Asset Pricing for Large Financial Markets
MATHEMATICAL FINANCE, Issue 4 2000
Irene Klein (Article first published online: 25 DEC 200)
We formulate the notion of "asymptotic free lunch," which is closely related to the condition "free lunch" of Kreps (1981) and allows us to state and prove a fairly general version of the fundamental theorem of asset pricing in the context of a large financial market as introduced by Kabanov and Kramkov (1994). In a large financial market one considers a sequence (S^n)_{n=1}^∞ of stochastic stock price processes based on a sequence (Ω^n, F^n, (F_t^n)_{t∈I^n}, P^n)_{n=1}^∞ of filtered probability spaces.
Under the assumption that for all n ∈ ℕ there exists an equivalent sigma-martingale measure for S^n, we prove that there exists a bicontiguous sequence of equivalent sigma-martingale measures if and only if there is no asymptotic free lunch (Theorem 1.1). Moreover we present an example showing that it is not possible to improve Theorem 1.1 by replacing "no asymptotic free lunch" by some weaker condition such as "no asymptotic free lunch with bounded risk" or "no asymptotic free lunch with vanishing risk." [source]

The LGL (Lighthill-Gueron-Liron) Theorem: historical perspective and critique
MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 17-18 2001
Nadav Liron
The 1970s saw a series of works on the modelling of slender bodies moving in slow flow (Re = 0), instigated by the interest in understanding the principles underlying the swimming of ciliates and flagellates. It was Lighthill in 1975 who wrote down the first theorem connecting slender body motion and singularity distributions along the centre line. This paper will describe the historical development from the early results, through Lighthill's theorem, to the Gueron-Liron Theorem, which enables discrete-cilia modelling, i.e., modelling of a multitude of slender bodies attached to a surface. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Elliptic operators with general Wentzell boundary conditions, analytic semigroups and the angle concavity theorem
MATHEMATISCHE NACHRICHTEN, Issue 4 2010
Angelo Favini
Abstract We prove a very general form of the Angle Concavity Theorem, which says that if (T(t)) defines a one-parameter semigroup acting over various Lp spaces (over a fixed measure space) which is analytic in a sector of opening angle θ_p, then the maximal choice for θ_p is a concave function of 1 − 1/p.
This and related results are applied to give improved estimates on the optimal Lp angle of ellipticity for a parabolic equation of the form ∂u/∂t = Au, where A is a uniformly elliptic second order partial differential operator with Wentzell or dynamic boundary conditions. Similar results are obtained for the higher order equation ∂u/∂t = (−1)^{m+1} A^m u, for all positive integers m. [source]

Mod 2 degree and a generalized No Retraction Theorem
MATHEMATISCHE NACHRICHTEN, Issue 5-6 2006
Ethan D. Bloch
Abstract We provide elementary proofs of generalized versions of the No Retraction Theorem and Sperner's Lemma, and a simple definition of the mod 2 degree of certain maps. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

An Omitting Types Theorem for first order logic with infinitary relation symbols
MLQ - MATHEMATICAL LOGIC QUARTERLY, Issue 6 2007
Tarek Sayed Ahmed
Abstract In this paper, an extension of first order logic is introduced. In such logics atomic formulas may have infinite lengths. An Omitting Types Theorem is proved. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Computability of compact operators on computable Banach spaces with bases
MLQ - MATHEMATICAL LOGIC QUARTERLY, Issue 4-5 2007
Vasco Brattka
Abstract We develop some parts of the theory of compact operators from the point of view of computable analysis. While computable compact operators on Hilbert spaces are easy to understand, it turns out that these operators on Banach spaces are harder to handle. Classically, the theory of compact operators on Banach spaces is developed with the help of the non-constructive tool of sequential compactness. We demonstrate that a substantial amount of this theory can be developed computably on Banach spaces with computable Schauder bases that are well-behaved. The conditions imposed on the bases are such that they generalize the Hilbert space case.
In particular, we prove that the space of compact operators on Banach spaces with monotone, computably shrinking, and computable bases is a computable Banach space itself, and that operations such as composition with bounded linear operators from the left are computable. Moreover, we provide a computable version of the Theorem of Schauder on adjoints in this framework, and we discuss a non-uniform result on composition with bounded linear operators from the right. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Polyadic and cylindric algebras of sentences
MLQ - MATHEMATICAL LOGIC QUARTERLY, Issue 5 2006
Mohamed Amer
Abstract In this note we give an interpretation of cylindric algebras as algebras of sentences (rather than formulas) of first order logic. We show that the isomorphism types of such algebras of sentences coincide with the class of neat reducts of cylindric algebras. Also we show how this interpretation sheds light on some recent results. This is done by likening Henkin's Neat Embedding Theorem to his celebrated completeness proof. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

A note on Bar Induction in Constructive Set Theory
MLQ - MATHEMATICAL LOGIC QUARTERLY, Issue 3 2006
Michael Rathjen
Abstract Bar Induction occupies a central place in Brouwerian mathematics. This note is concerned with the strength of Bar Induction on the basis of Constructive Zermelo-Fraenkel Set Theory, CZF. It is shown that CZF augmented by decidable Bar Induction proves the 1-consistency of CZF. This answers a question of P. Aczel, who used Bar Induction to give a proof of the Lusin Separation Theorem in the constructive set theory CZF. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Effective Borel measurability and reducibility of functions
MLQ - MATHEMATICAL LOGIC QUARTERLY, Issue 1 2005
Vasco Brattka
Abstract The investigation of computational properties of discontinuous functions is an important concern in computable analysis.
One method to deal with this subject is to consider effective variants of Borel measurable functions. We introduce such a notion of Borel computability for single-valued as well as for multi-valued functions by a direct effectivization of the classical definition. On Baire space the finite levels of the resulting hierarchy of functions can be characterized using a notion of reducibility for functions and corresponding complete functions. We use this classification and an effective version of a Selection Theorem of Bhattacharya-Srivastava in order to prove a generalization of the Representation Theorem of Kreitz-Weihrauch for Borel measurable functions on computable metric spaces: such functions are Borel measurable on a certain finite level if and only if they admit a realizer on Baire space of the same quality. This Representation Theorem enables us to introduce a realizer reducibility for functions on metric spaces, and we can extend the completeness result to this reducibility. Besides being very useful by itself, this reducibility leads to a new and effective proof of the Banach-Hausdorff-Lebesgue Theorem, which connects Borel measurable functions with the Baire functions. Hence, for certain metric spaces the class of Borel computable functions on a certain level is exactly the class of functions which can be expressed as a limit of a pointwise convergent and computable sequence of functions of the next lower level. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
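As a concrete illustration of the Separating Axis Theorem that the collision-detection abstract above generalizes, here is a minimal 2D overlap test for convex polygons. This is a standard textbook sketch, not code from the paper: two convex shapes are disjoint if and only if some axis (here, an edge normal of either polygon) separates their projections.

```python
def _project(poly, axis):
    """Project a polygon's vertices onto an axis; return (min, max) extent."""
    dots = [x * axis[0] + y * axis[1] for (x, y) in poly]
    return min(dots), max(dots)

def _edge_normals(poly):
    """Yield an (unnormalized) outward-or-inward normal for each edge."""
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        yield (-(y2 - y1), x2 - x1)  # perpendicular to the edge vector

def convex_overlap(a, b):
    """True iff convex polygons a and b (vertex lists) intersect."""
    for axis in list(_edge_normals(a)) + list(_edge_normals(b)):
        a_min, a_max = _project(a, axis)
        b_min, b_max = _project(b, axis)
        if a_max < b_min or b_max < a_min:
            return False  # separating axis found: projections are disjoint
    return True  # no candidate axis separates the polygons

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
shifted = [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]
print(convex_overlap(square, shifted))  # True: no separating axis exists
```

For axis-aligned bounding boxes the candidate normals reduce to the coordinate axes; the support-plane approach in the abstract above can be read as extending this culling idea to axes that need not be parallel between the two bodies.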