Realistic Assumptions
Selected Abstracts

Pricing and Capital Allocation for Multiline Insurance Firms
JOURNAL OF RISK AND INSURANCE, Issue 3 2010
Rustam Ibragimov
We study multiline insurance companies with limited liability. Insurance premiums are determined by no-arbitrage principles. The results are developed under the realistic assumption that the losses created by insurer default are allocated among policyholders following an ex post, pro rata sharing rule. In general, the ratio of default costs to expected claims, and thus the ratio of premiums to expected claims, vary across insurance lines. Moreover, capital and related costs are allocated across lines in proportion to each line's share of a digital default option on the insurer. Our results expand and generalize those derived elsewhere in the literature. [source]

Stationary solutions to an energy model for semiconductor devices where the equations are defined on different domains
MATHEMATISCHE NACHRICHTEN, Issue 12 2008
Annegret Glitzky
Abstract We discuss a stationary energy model from semiconductor modelling. We adopt the more realistic assumption that the continuity equations for electrons and holes have to be considered only in a subdomain Ω0 of the domain of definition Ω of the energy balance equation and of the Poisson equation. Here Ω0 corresponds to the region of semiconducting material; Ω \ Ω0 represents passive layers. Metals serving as contacts are modelled by Dirichlet boundary conditions. We prove a local existence and uniqueness result for the two-dimensional stationary energy model. For this purpose we derive a W^{1,p}-regularity result for solutions of systems of elliptic equations with different regions of definition and use the Implicit Function Theorem. (© 2008 WILEY-VCH Verlag GmbH & Co.
KGaA, Weinheim) [source]

Design efficiency for non-market valuation with choice modelling: how to measure it, what to report and why
AUSTRALIAN JOURNAL OF AGRICULTURAL & RESOURCE ECONOMICS, Issue 3 2008
Riccardo Scarpa
We review the basic principles for the evaluation of design efficiency in discrete choice modelling, with a focus on the efficiency of WTP estimates from the multinomial logit model. The discussion is developed under the realistic assumption that researchers can plausibly define a prior belief on the range of values for the utility coefficients. D-, A-, B-, S- and C-errors are compared as measures of design performance in applied studies and their rationale is discussed. An empirical example based on the generation and comparison of fifteen separate designs from a common set of assumptions illustrates the relevant considerations in the context of non-market valuation, with particular emphasis placed on C-efficiency. Conclusions are drawn for the practice of reporting in non-market valuation and for future work on design research. [source]

Marine reserve effects on fishery profits: a comment on White et al. (2008)
ECOLOGY LETTERS, Issue 3 2009
Deborah R. Hart
Abstract A recent study (White et al. 2008) claimed that fishery profits will often be higher under management that employs no-take marine reserves than under conventional fisheries management alone. However, this conclusion was based on the erroneous assumption that all landed fish have equal value regardless of size, and on questionable assumptions regarding density dependence. Examination of an age-structured version of the White et al. (2008) model demonstrates that their results are not robust to these assumptions. Models with more realistic assumptions generally do not indicate increased fishery yield or profits from marine reserves except for overfished stocks.
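The design-performance measures reviewed in the Scarpa abstract above are concrete enough to sketch. The snippet below illustrates the standard local D-error for a multinomial logit design: the determinant of the Fisher information, inverted and normalized by the number of parameters. The attribute levels, choice tasks, and prior coefficients are hypothetical, chosen only to make the example self-contained; this is not a reproduction of the paper's own designs.

```python
import numpy as np

def mnl_d_error(choice_sets, beta):
    """Local D-error at a fixed prior beta: det(Fisher info)^(-1/K).

    choice_sets: list of (J x K) attribute matrices, one per choice task.
    beta: length-K vector of prior utility coefficients (the 'prior belief'
    on coefficient values that the abstract assumes researchers can state).
    """
    K = len(beta)
    info = np.zeros((K, K))
    for X in choice_sets:
        v = X @ beta
        p = np.exp(v - v.max())
        p /= p.sum()                      # MNL choice probabilities
        # Fisher information contribution of this choice task
        info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
    return np.linalg.det(info) ** (-1.0 / K)

# Hypothetical two-attribute design: three tasks, two alternatives each.
beta = np.array([0.5, -0.2])
design = [np.array([[1.0, 2.0], [0.0, 4.0]]),
          np.array([[1.0, 4.0], [0.0, 2.0]]),
          np.array([[0.0, 3.0], [1.0, 1.0]])]
print(round(mnl_d_error(design, beta), 4))
```

A lower D-error means a more efficient design; comparing this quantity across candidate designs, as the paper does across fifteen of them, is what turns design generation into an optimization problem.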
[source]

Advanced Experimental and Simulation Approaches to Meet Reliability Challenges of New Electronics Systems
ADVANCED ENGINEERING MATERIALS, Issue 4 2009
Dietmar Vogel
Abstract This paper focuses on some advanced aspects of physics-of-failure approaches. Tracing failure modes under realistic loading is a key issue in isolating the relevant failure sites to be studied in more detail. In the past, design-of-experiments (DoE) tools have been developed to handle this problem. They allow design and/or material selection to be optimized with respect to different failure mechanisms and sites. The application of these methods is demonstrated by optimizations performed for fracture problems. Interface fracture has been chosen as one of the most important failure mechanisms. Finally, local stress and strain measurement tools developed over the past years are presented at the end of the paper. They serve to validate simulation results and therefore the underlying mechanical modeling. In particular, the local stress measurement tools under development are needed to make realistic assumptions about loading conditions and to provide residual stress data for FEA. [source]

Interest Rate Volatility Prior to Monetary Union under Alternative Pre-Switch Regimes
GERMAN ECONOMIC REVIEW, Issue 4 2003
Bernd Wilfling
Keywords: interest rate volatility; term structure; exchange rate arrangements; intervention policy; stochastic processes
Abstract The volatility of interest rates is relevant for many financial applications. Under realistic assumptions the term structure of interest rate differentials provides an important predictor of the term structure of interest rates. This paper derives the term structure of differentials in a situation in which two open economies plan to enter a monetary union in the future. Two systems of floating exchange rates prior to the union are considered, namely a free-float and a managed-float regime.
The volatility processes of arbitrary-term differentials under the respective pre-switch arrangements are compared. The paper elaborates on the singularity of extremely short-term (i.e. instantaneous) interest rates under extensive leaning-against-the-wind interventions and discusses policy issues. [source]

A Decision-Making Framework for Sediment Contamination
INTEGRATED ENVIRONMENTAL ASSESSMENT AND MANAGEMENT, Issue 3 2005
Peter M. Chapman
Abstract A decision-making framework for determining whether or not contaminated sediments are polluted is described. This framework is intended to be sufficiently prescriptive to standardize the decision-making process, but without using "cookbook" assessments. It emphasizes 4 guidance "rules": (1) sediment chemistry data are only to be used alone for remediation decisions when the costs of further investigation outweigh the costs of remediation and there is agreement among all stakeholders to act; (2) remediation decisions are based primarily on biology; (3) lines of evidence (LOE), such as laboratory toxicity tests and models, that contradict the results of properly conducted field surveys are assumed incorrect; and (4) if the impacts of a remedial alternative will cause more environmental harm than good, it should not be implemented. Sediments with contaminant concentrations below sediment quality guidelines (SQGs) that predict toxicity to less than 5% of sediment-dwelling infauna, and that contain no quantifiable concentrations of substances capable of biomagnifying, are excluded from further consideration, as are sediments that do not meet these criteria but have contaminant concentrations equal to or below reference concentrations. Biomagnification potential is initially addressed by conservative (worst-case) modeling based on benthos and sediments and, subsequently, by additional food-chain data and more realistic assumptions.
Toxicity (acute and chronic) and alterations to resident communities are addressed by, respectively, laboratory studies and field observations. The integrative decision point for sediments is a weight-of-evidence (WOE) matrix combining up to 4 main LOE: chemistry, toxicity, community alteration, and biomagnification potential. Of 16 possible WOE scenarios, 6 result in definite decisions and 10 require additional assessment. Typically, this framework will be applied to surficial sediments. The possibility that deeper sediments may be uncovered as a result of natural or other processes must also be investigated and may require similar assessment. [source]
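The weight-of-evidence matrix in the Chapman framework can be illustrated with a toy lookup. The rule below is a hypothetical simplification, since the abstract does not reproduce the actual 16-scenario matrix; it encodes only the stated guidance that decisions rest primarily on biology (field-observed community alteration) and that conflicting lines of evidence trigger additional assessment.

```python
def woe_decision(chemistry, toxicity, community, biomagnification):
    """Toy weight-of-evidence lookup over the 4 main lines of evidence.

    Each argument is True when that line of evidence indicates a 'hit'.
    This is a hypothetical simplification of the framework's 16-scenario
    matrix, which the abstract does not reproduce in full.
    """
    hits = [chemistry, toxicity, community, biomagnification]
    if not any(hits):
        # Mirrors the exclusion of sediments below SQGs / reference levels.
        return "exclude from further consideration"
    # Biology-first rule: community alteration corroborated by another LOE.
    if community and (toxicity or biomagnification):
        return "remediation decision (biology-based)"
    # Contradictory or isolated lines of evidence call for more work.
    return "additional assessment"

print(woe_decision(False, False, False, False))
print(woe_decision(True, True, True, False))
```

In the real framework the mapping is a full matrix over hit/no-hit combinations, with 6 combinations yielding definite decisions and 10 deferring to further study; a table-driven lookup like this is one natural way to standardize such a process without reducing it to a "cookbook".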