General Results (general + result)


Selected Abstracts


GMM with Weak Identification

ECONOMETRICA, Issue 5 2000
James H. Stock
This paper develops asymptotic distribution theory for GMM estimators and test statistics when some or all of the parameters are weakly identified. General results are obtained and are specialized to two important cases: linear instrumental variables regression and Euler equations estimation of the CCAPM. Numerical results for the CCAPM demonstrate that weak-identification asymptotics explains the breakdown of conventional GMM procedures documented in previous Monte Carlo studies. Confidence sets immune to weak identification are proposed. We use these results to inform an empirical investigation of various CCAPM specifications; the substantive conclusions reached differ from those obtained using conventional methods. [source]
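
A minimal simulation of the linear instrumental-variables special case mentioned above, illustrating the breakdown of conventional procedures under weak identification (numpy only; illustrative parameters, not the paper's code or designs):

    # Minimal illustration (not the paper's code): 2SLS in a linear IV model
    # y = x*beta + u,  x = pi*z + v, with corr(u, v) != 0.  When pi is small
    # (weak identification), the 2SLS estimator is badly behaved even in
    # moderately large samples.
    import numpy as np

    rng = np.random.default_rng(0)

    def tsls(pi, n=500, beta=1.0, rho=0.8, reps=2000):
        est = []
        for _ in range(reps):
            z = rng.normal(size=n)
            u, v = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n).T
            x = pi * z + v
            y = beta * x + u
            xhat = z * (z @ x) / (z @ z)          # first-stage fitted values
            est.append((xhat @ y) / (xhat @ x))   # 2SLS estimate of beta
        return np.median(est), np.percentile(est, [5, 95])

    for pi in (1.0, 0.1):   # strong vs. weak instrument
        med, (lo, hi) = tsls(pi)
        print(f"pi={pi}: median 2SLS={med:.2f}, 5-95% range=({lo:.2f}, {hi:.2f})")

With a strong instrument the estimates concentrate around the true value; with a weak one they are biased toward OLS and widely dispersed, which is the phenomenon the weak-identification asymptotics is designed to capture.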


Hamiltonian view on process systems

AICHE JOURNAL, Issue 8 2001
Katalin M. Hangos
The thermodynamic approach to analyzing the structural stability of process plants is extended to construct a simple Hamiltonian model of lumped process systems. This type of model enables us to design a nonlinear PD feedback controller for passivation and loop shaping. The approach is applicable to lumped process systems in which Kirchhoff convective transport takes place together with transfer and sources of various types, and the manipulable input variables are the flow rates. Systems with constant mass holdup and uniform pressure in every balance volume satisfy these conditions. General results are illustrated by simple examples of practical importance: a bilinear heat exchanger cell and an isothermal CSTR with a nonlinear reaction. [source]


IS CORPORATE R&D INVESTMENT IN HIGH-TECH SECTORS MORE EFFECTIVE?

CONTEMPORARY ECONOMIC POLICY, Issue 3 2010
RAQUEL ORTEGA-ARGILÉS
This paper discusses the link between R&D and productivity across the European industrial and service sectors. The empirical analysis is based both on European sectoral OECD data and on a unique micro-longitudinal database consisting of 532 top European R&D investors. The main conclusions are as follows. First, the R&D stock has a significant positive impact on labor productivity; this general result is largely consistent with the previous literature in terms of the sign, significance, and magnitude of the estimated coefficients. More interestingly, at both the sectoral and firm levels the R&D coefficient increases monotonically (in both significance and magnitude) as we move from the low-tech to the medium- and high-tech sectors. This means that corporate R&D investment is more effective in the high-tech sectors, which may need to be taken into account when designing policy instruments (subsidies, fiscal incentives, etc.) in support of private R&D. However, R&D investment is not the sole source of productivity gains; technological change embodied in gross investment is of comparable importance in aggregate and is the main determinant of productivity increases in the low-tech sectors. Hence, an economic policy aiming to increase productivity in the low-tech sectors should support overall capital formation. [source]


A General Formula for Valuing Defaultable Securities

ECONOMETRICA, Issue 5 2004
P. Collin-Dufresne
Previous research has shown that under a suitable no-jump condition, the price of a defaultable security is equal to its risk-neutral expected discounted cash flows if a modified discount rate is introduced to account for the possibility of default. Below, we generalize this result by demonstrating that one can always value defaultable claims using expected risk-adjusted discounting provided that the expectation is taken under a slightly modified probability measure. This new probability measure puts zero probability on paths where default occurs prior to the maturity, and is thus only absolutely continuous with respect to the risk-neutral probability measure. After establishing the general result and discussing its relation with the existing literature, we investigate several examples for which the no-jump condition fails. Each example illustrates the power of our general formula by providing simple analytic solutions for the prices of defaultable securities. [source]
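
For orientation, the classical risk-adjusted discounting result that the paper generalizes can be written schematically as follows; the notation (riskless rate r, risk-neutral default intensity λ, loss given default L, promised payoff X at maturity T) is generic and assumed here, not taken from the paper.

    % Schematic form of the classical result valid under the no-jump condition:
    % default risk enters only through the modified discount rate r + lambda*L.
    \[
      V_t \;=\; \mathbb{E}^{\mathbb{Q}}\!\left[\,
          \exp\!\Big(-\!\int_t^T \big(r_s + \lambda_s L_s\big)\,ds\Big)\, X
          \;\Big|\;\mathcal{F}_t \right].
    \]

The paper's contribution is to show that a formula of this type remains valid without the no-jump condition, once the expectation is taken under the modified probability measure described above.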


Psychological assessment of malingering in psychogenic neurological disorders and non-psychogenic neurological disorders: relationship to psychopathology levels

EUROPEAN JOURNAL OF NEUROLOGY, Issue 10 2009
M. Van Beilen
Background and purpose: It remains unknown whether psychological distress causes malingering in patients with psychogenic symptoms. Methods: We studied 26 patients with psychogenic neurological disorders on psychopathology and malingering, in comparison with 26 patients with various neurological conditions and 18 matched healthy controls (HC). Results: Psychogenic patients showed the highest levels of psychological complaints and malingering, but non-psychogenic neurological patients also showed significantly more psychological distress and malingering than HC. Psychological distress was related to the degree of malingering in both patient groups. Conclusion: These data do not formally support a causal relationship between psychological distress and psychogenic neurological disorders, but suggest that part of the psychological complaints is a general result of having an illness. The clinical implication of this study is that psychological distress is not sufficient for diagnosing functional complaints. Also, a normal score on a test for malingering does not mean that a patient is not suffering from psychogenic symptoms. [source]


Monetary Policy with Heterogeneous and Misspecified Expectations

JOURNAL OF MONEY, CREDIT AND BANKING, Issue 1 2009
MICHELE BERARDI
Keywords: adaptive learning; expectations formation; heterogeneous expectations; misspecifications; monetary policy
In the recent literature on monetary policy and learning, it has been suggested that the private sector's expectations should play a role in the policy rule implemented by the central bank, as they could improve the ability of the policymaker to stabilize the economy. The private sector's expectations, in these studies, are often taken to be homogeneous and rational, at least in the limit of a learning process. In this paper, instead, we consider the case in which private agents are heterogeneous in their expectations formation mechanisms and hold heterogeneous expectations in equilibrium. We investigate the impact of this heterogeneity in expectations on the central bank's policy implementation and on the ensuing economic outcomes, and the general result that emerges is that the central bank should disregard inaccurate private sector expectations and base its policy solely on the accurate ones. [source]


DUALITY IN OPTIMAL INVESTMENT AND CONSUMPTION PROBLEMS WITH MARKET FRICTIONS

MATHEMATICAL FINANCE, Issue 2 2007
I. Klein
In the style of Rogers (2001), we give a unified method for finding the dual problem in a given model by stating the problem as an unconstrained Lagrangian problem. In a theoretical part we prove our main theorem, Theorem 3.1, which shows that under a number of conditions the values of the dual and primal problems coincide. The theoretical setting is sufficiently general to be applied to a large number of examples, including models with transaction costs such as Cvitanic and Karatzas (1996) (which could not be covered by the setting in Rogers [2001]). To apply the general result one has to verify the assumptions of Theorem 3.1 for each concrete example. We show how the method applies to two examples, first Cuoco and Liu (1992) and second Cvitanic and Karatzas (1996). [source]


Put Option Premiums and Coherent Risk Measures

MATHEMATICAL FINANCE, Issue 2 2002
Robert Jarrow
This note defines the premium of a put option on the firm as a measure of insolvency risk. The put premium is not a coherent risk measure as defined by Artzner et al. (1999). It satisfies all the axioms for a coherent risk measure except one, the translation invariance axiom. However, it satisfies a weakened version of the translation invariance axiom that we label translation monotonicity. The put premium risk measure generates an acceptance set that satisfies the regularity Axioms 2.1–2.4 of Artzner et al. (1999). In fact, this is a general result for any risk measure satisfying the same risk measure axioms as the put premium. Finally, the coherent risk measure generated by the put premium's acceptance set is the minimal capital required to protect the firm against insolvency uniformly across all states of nature. [source]
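
For reference, the Artzner et al. (1999) coherence axioms in standard notation (a paraphrase; the precise form of the weakened translation-monotonicity axiom is given in the paper itself):

    % The four coherence axioms (X, Y terminal positions, r the riskless asset):
    \begin{align*}
      &\text{Monotonicity:}           && X \le Y \;\Rightarrow\; \rho(Y) \le \rho(X),\\
      &\text{Subadditivity:}          && \rho(X + Y) \le \rho(X) + \rho(Y),\\
      &\text{Positive homogeneity:}   && \rho(\lambda X) = \lambda\,\rho(X), \quad \lambda \ge 0,\\
      &\text{Translation invariance:} && \rho(X + \alpha r) = \rho(X) - \alpha .
    \end{align*}
    % The put premium satisfies the first three axioms; it is only the last one
    % that is replaced by the weaker translation-monotonicity property.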


The Krein–von Neumann extension and its connection to an abstract buckling problem

MATHEMATISCHE NACHRICHTEN, Issue 2 2010
Mark S. Ashbaugh
Abstract We prove the unitary equivalence of the inverse of the Krein–von Neumann extension (on the orthogonal complement of its kernel) of a densely defined, closed, strictly positive operator S ≥ εI_H, for some ε > 0, in a Hilbert space H to an abstract buckling problem operator. In the concrete case where S is the Laplacian −Δ defined on C₀^∞(Ω) in L²(Ω; dⁿx), for Ω ⊂ ℝⁿ an open, bounded (and sufficiently regular) domain, this recovers, as a particular case of a general result due to G. Grubb, that the eigenvalue problem for the Krein Laplacian S_K (i.e., the Krein–von Neumann extension of S), S_K v = λv, λ ≠ 0, is in one-to-one correspondence with the problem of the buckling of a clamped plate, (−Δ)²u = λ(−Δ)u in Ω, λ ≠ 0, u ∈ H₀²(Ω), where u and v are related via the pair of formulas u = S_F⁻¹(−Δ)v, v = λ⁻¹(−Δ)u, with S_F the Friedrichs extension of S. This establishes the Krein extension as a natural object in elasticity theory (in analogy to the Friedrichs extension, which found natural applications in quantum mechanics, elasticity, etc.) (© 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
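
The correspondence described above, restated in display form for readability (same notation as the abstract; no content beyond what is stated there):

    % S_K = Krein-von Neumann extension, S_F = Friedrichs extension of S.
    \[
      S_K v = \lambda v, \quad \lambda \neq 0
      \qquad\Longleftrightarrow\qquad
      (-\Delta)^2 u = \lambda\,(-\Delta) u \ \text{in } \Omega, \quad
      \lambda \neq 0, \quad u \in H_0^2(\Omega),
    \]
    \[
      \text{with}\quad u = S_F^{-1}(-\Delta)v, \qquad v = \lambda^{-1}(-\Delta)u .
    \]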


Undecidable and decidable restrictions of Hilbert's Tenth Problem: images of polynomials vs. images of exponential functions

MLQ- MATHEMATICAL LOGIC QUARTERLY, Issue 1 2006
Mihai Prunescu
Abstract Classical results of additive number theory lead to the undecidability of the existence of solutions for diophantine equations in given special sets of integers. Those sets which are images of polynomials are covered by a more general result in the second section. In contrast, restricting diophantine equations to images of exponential functions with natural bases leads to decidable problems, as proved in the third section. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


An analytic model for the epoch of halo creation

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 1 2000
W. J. Percival
In this paper we describe the Bayesian link between the cosmological mass function and the distribution of times at which isolated haloes of a given mass exist. By assuming that clumps of dark matter undergo monotonic growth on the time-scales of interest, this distribution of times is also the distribution of 'creation' times of the haloes. This monotonic growth is an inevitable aspect of gravitational instability. The spherical top-hat collapse model is used to estimate the rate at which clumps of dark matter collapse. This gives the prior for the creation time given no information about halo mass. Applying Bayes' theorem then allows any mass function to be converted into a distribution of times at which haloes of a given mass are created. This general result covers both Gaussian and non-Gaussian models. We also demonstrate how the mass function and the creation time distribution can be combined to give a joint density function, and discuss the relation between the time distribution of major merger events and the formula calculated. Finally, we determine the creation time of haloes within three N -body simulations, and compare the link between the mass function and creation rate with the analytic theory. [source]
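
Schematically, and in generic notation that is only assumed here (n(M, t) for the mass function, p(t) for the spherical-collapse creation-time prior), the Bayesian step reads:

    % Creation-time distribution at fixed halo mass M, obtained by weighting the
    % spherical-collapse prior p(t) by the mass function and normalizing:
    \[
      p(t \mid M) \;=\;
      \frac{n(M, t)\, p(t)}{\displaystyle\int n(M, t')\, p(t')\, dt'} .
    \]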


Dynamics of petroleum markets in OECD countries in a monthly VAR–VEC model (1995–2007)

OPEC ENERGY REVIEW, Issue 1 2008
Mehdi Asali
This paper contains some results of a study in which the dynamics of petroleum markets in the Organization for Economic Cooperation and Development (OECD) is investigated through a vector autoregression (VAR)–vector error correction (VEC) model. The time series of the model comprise monthly data for the following variables: demand for oil in the OECD, WTI in real terms as a benchmark oil price, industrial production in the OECD as a proxy for income, and commercial stocks of crude oil and oil products in the OECD, for the period January 1995 to September 2007. The detailed results of this empirical research are presented in different sections of the paper; nevertheless, the general result that emerges from this study could be summarised as follows: (i) there is convincing evidence of the series being non-stationary and integrated of order one, I(1), with clear signs of cointegration relations between the series; (ii) the VAR system of the empirical study appears stable and restores its dynamics as usual following a shock to the rate of change of the different variables of the model, taking between five and eight periods (months in our case); (iii) we find a lag length of 2 to be optimal for the estimated VAR model; (iv) a significant impact of changes in the commercial crude and products' inventory level on the oil price and on demand for oil is highlighted in our empirical study and in different formulations of the VAR model, indicating the importance of changes in the stocks' level for oil market dynamics; and (v) income elasticity of demand for oil appears to be prominent and statistically significant in most estimated models of the VAR system in the long run, while price elasticity of demand for oil is found to be negligible and insignificant in the short run. However, while aggregate oil consumption does not appear to be very sensitive to changes in oil prices at the macro level (which is believed to be because of the so-called 'rebound effect' of oil (energy) efficiency), the declining trend of oil intensity (oil used for production of a unit value of goods and services), particularly when there is an upward trend in the oil price, clearly indicates the channels through which persistent changes in oil prices could affect the demand for oil in OECD countries. [source]
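
A minimal sketch of a VEC-model estimation of the kind described, using simulated monthly series and the statsmodels library (assumed); the variable names mirror the abstract, but the data, lag length, and deterministic terms are illustrative only:

    # Minimal VEC-model sketch on simulated data (not the paper's data or code).
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

    rng = np.random.default_rng(1)
    n = 153                                   # monthly obs, Jan 1995 - Sep 2007
    trend = np.cumsum(rng.normal(size=n))     # one common stochastic trend
    data = pd.DataFrame({
        "oil_demand": trend + rng.normal(scale=0.5, size=n),
        "real_price": 0.8 * trend + rng.normal(scale=0.7, size=n),
        "ind_prod":   0.5 * trend + rng.normal(scale=0.4, size=n),
        "stocks":    -0.3 * trend + rng.normal(scale=0.6, size=n),
    })

    rank = select_coint_rank(data, det_order=0, k_ar_diff=2)   # Johansen test
    model = VECM(data, k_ar_diff=2, coint_rank=rank.rank, deterministic="co")
    res = model.fit()
    print(res.summary())   # adjustment (alpha) and cointegration (beta) estimates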


Granularity in Relational Formalisms – With Application to Time and Space Representation

COMPUTATIONAL INTELLIGENCE, Issue 4 2001
Jérôme Euzenat
Temporal and spatial phenomena can be seen at a more or less precise granularity, depending on the kind of perceivable details. As a consequence, the relationship between two objects may differ depending on the granularity considered, which may raise problems when merging representations of different granularities. This paper presents general rules of granularity conversion in relation algebras. Granularity is considered independently of the specific relation algebra, by investigating operators for converting a representation from one granularity to another and presenting six constraints that they must satisfy. The constraints are shown to be independent and consistent, and general results about the existence of such operators are provided. The constraints are used to generate the unique pairs of operators for converting qualitative temporal relationships (upward and downward) from one granularity to another. Then two fundamental constructors (product and weakening) are presented: they permit the generation of new qualitative systems (e.g. space algebra) from existing ones. They are shown to preserve most of the properties of granularity conversion operators. [source]


Constraints from F and D supersymmetry breaking in general supergravity theories

FORTSCHRITTE DER PHYSIK/PROGRESS OF PHYSICS, Issue 7-9 2008
M. Gomez-Reino
Abstract We study the conditions under which a generic supergravity model involving chiral and vector multiplets can admit vacua with spontaneously broken supersymmetry and realistic cosmological constant. We find that the existence of such viable vacua implies some constraints involving the curvature tensor of the scalar geometry and the charge and mass matrices of the vector fields, and also that the vector of F and D auxiliary fields defining the Goldstino direction is constrained to lie within a certain domain. We illustrate the relevance of these results through some examples and also discuss the implications of our general results on the dynamics of moduli fields in string models. This contribution is based on [1,3]. [source]


Change in the Concentration of Employment in Computer Services: Spatial Estimation at the U.S. Metro County Level

GROWTH AND CHANGE, Issue 1 2007
DONALD GRIMES
ABSTRACT This article models the concentration of computer services activity across the U.S. with factors that incorporate spatial relationships. Specifically, we enhance the standard home-area study with an analysis that allows conditions in neighboring counties to affect the concentration of employment in the home county. We use county-level data for metropolitan areas between 1990 and 1997. To measure change in employment concentration, we use the change in location quotients for SIC 737, which captures employment concentration changes caused by both the number of firms and the scale of their activity relative to the national average. After controlling for local demand for computer services, our results support the importance of the presence of a qualified labor supply, interindustry linkages, proximity to a major airport, and spatial processes in explaining changes in computer services employment concentration, finding little support for the influence of cost factors. Our enhanced model reveals interjurisdictional relationships among these metro counties that could not be captured with standard estimates by state, metropolitan statistical area (MSA), or county. Using counties within MSAs, therefore, provides more general results than case studies but still allows measurement of local interactions. [source]
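
A small worked example of the location quotient underlying the dependent variable (standard definition; the employment figures below are made up):

    # Location quotient (LQ) for an industry in a county, with illustrative
    # numbers: LQ = (local industry share) / (national industry share).
    # The dependent variable in the study is the change in LQ between two years.
    def location_quotient(local_ind, local_total, nat_ind, nat_total):
        return (local_ind / local_total) / (nat_ind / nat_total)

    # hypothetical county, computer services (SIC 737) employment
    lq_1990 = location_quotient(local_ind=2_000, local_total=150_000,
                                nat_ind=800_000, nat_total=110_000_000)
    lq_1997 = location_quotient(local_ind=3_500, local_total=160_000,
                                nat_ind=1_300_000, nat_total=120_000_000)
    print(round(lq_1990, 2), round(lq_1997, 2), round(lq_1997 - lq_1990, 2))

An LQ above 1 indicates that the county is more specialized in computer services than the nation; a positive change in LQ captures increasing concentration from either more firms or larger scale of activity.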


On the stability and convergence of a Galerkin reduced order model (ROM) of compressible flow with solid wall and far-field boundary treatment

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 10 2010
I. Kalashnikova
Abstract A reduced order model (ROM) based on the proper orthogonal decomposition (POD)/Galerkin projection method is proposed as an alternative discretization of the linearized compressible Euler equations. It is shown that the numerical stability of the ROM is intimately tied to the choice of inner product used to define the Galerkin projection. For the linearized compressible Euler equations, a symmetry transformation motivates the construction of a weighted L2 inner product that guarantees certain stability bounds satisfied by the ROM. Sufficient conditions for well-posedness and stability of the present Galerkin projection method applied to a general linear hyperbolic initial boundary value problem (IBVP) are stated and proven. Well-posed and stable far-field and solid wall boundary conditions are formulated for the linearized compressible Euler ROM using these more general results. A convergence analysis employing a stable penalty-like formulation of the boundary conditions reveals that the ROM solution converges to the exact solution with refinement of both the numerical solution used to generate the ROM and of the POD basis. An a priori error estimate for the computed ROM solution is derived, and examined using a numerical test case. Published in 2010 by John Wiley & Sons, Ltd. [source]
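
A sketch of the generic POD/Galerkin mechanics with a weighted inner product, for a generic stable linear system du/dt = Au; the weight M, dimensions, and snapshot matrix below are placeholders, not the paper's Euler-equation construction:

    # POD/Galerkin projection with a weighted inner product <x, y>_M = x^T M y
    # (M symmetric positive definite). Illustrates the mechanics only.
    import numpy as np

    rng = np.random.default_rng(2)
    n, k = 200, 10
    A = -np.eye(n) + 0.01 * rng.normal(size=(n, n))   # stable full-order operator
    M = np.eye(n)                                     # weight matrix (identity here)
    X = rng.normal(size=(n, 50))                      # snapshot matrix (50 snapshots)

    # POD basis: leading left singular vectors of L^T X (L = Cholesky factor of M),
    # mapped back so that the columns of Phi satisfy Phi^T M Phi = I.
    L = np.linalg.cholesky(M)
    U, s, _ = np.linalg.svd(L.T @ X, full_matrices=False)
    Phi = np.linalg.solve(L.T, U[:, :k])

    # Galerkin projection in the M-inner product: reduced operator and state
    A_r = Phi.T @ M @ A @ Phi                         # k x k reduced operator
    u0 = rng.normal(size=n)
    a0 = Phi.T @ M @ u0                               # reduced initial condition
    print(A_r.shape, a0.shape)

The point emphasized in the abstract is that the choice of M (a symmetry-based weighting for the linearized Euler equations) is what delivers the stability bounds; with a poorly chosen inner product the same projection can be unstable.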


A generalized homogeneous domination approach for global stabilization of inherently nonlinear systems via output feedback

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 7 2007
Jason Polendo
Abstract In this paper, we introduce a generalized framework for global output feedback stabilization of a class of uncertain, inherently nonlinear systems of a particularly complex nature, since their linearization around the equilibrium is not guaranteed to be either controllable or observable. Based on a new observer/controller construction and a homogeneous domination design, this framework not only unifies the existing output feedback stabilization results, but also leads to more general results which have never been achieved before, establishing this methodology as a universal tool for the global output feedback stabilization of inherently nonlinear systems. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Approximation algorithms for combinatorial multicriteria optimization problems

INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 1 2000
M. Ehrgott
Abstract The computational complexity of combinatorial multiple objective programming problems is investigated. NP-completeness and #P-completeness results are presented. Using two definitions of approximability, general results are presented, which outline limits for approximation algorithms. The performance of the well-known tree and Christofides' heuristics for the traveling salesman problem is investigated in the multicriteria case with respect to the two definitions of approximability. [source]
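
As a concrete reference point, the single-criterion "tree" (double-MST) heuristic for the metric TSP can be sketched as follows; scipy is assumed, and this is only the classical heuristic whose multicriteria performance the paper analyzes:

    # Classical "tree" (double-MST) heuristic for the metric TSP: build an MST
    # and shortcut its Euler tour, which is equivalent to a preorder walk.
    # Tour length is at most twice the optimum on metric instances.
    import numpy as np
    from scipy.spatial.distance import squareform, pdist
    from scipy.sparse.csgraph import minimum_spanning_tree, depth_first_order

    rng = np.random.default_rng(3)
    pts = rng.random((12, 2))
    D = squareform(pdist(pts))                     # Euclidean (metric) distances

    mst = minimum_spanning_tree(D)                 # sparse MST
    order, _ = depth_first_order(mst, i_start=0, directed=False)
    tour = list(order) + [0]                       # shortcut tour, back to start
    length = sum(D[tour[i], tour[i + 1]] for i in range(len(tour) - 1))
    print(tour, round(length, 3))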


Least-squares Estimation of an Unknown Number of Shifts in a Time Series

JOURNAL OF TIME SERIES ANALYSIS, Issue 1 2000
Marc Lavielle
In this contribution, general results on the off-line least-squares estimate of changes in the mean of a random process are presented. First, a generalisation of the Hájek–Rényi inequality, dealing with the fluctuations of the normalized partial sums, is given. This preliminary result is then used to derive the consistency and the rate of convergence of the change-points estimate, in the situation where the number of changes is known. Strong consistency is obtained under some mixing conditions. The limiting distribution is also computed under an invariance principle. The case where the number of changes is unknown is then addressed. All these results apply to a large class of dependent processes, including strongly mixing and also long-range dependent processes. [source]
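
A minimal numerical illustration of the off-line least-squares estimate in the simplest setting (one shift in the mean, number of changes known, independent Gaussian noise); the paper's results cover multiple shifts and strongly dependent processes:

    # Least-squares estimate of a single change point in the mean.
    import numpy as np

    rng = np.random.default_rng(4)
    n, tau_true = 300, 120
    y = np.concatenate([rng.normal(0.0, 1.0, tau_true),
                        rng.normal(1.0, 1.0, n - tau_true)])

    def sse_split(y, k):
        # residual sum of squares when the mean is allowed to shift at index k
        left, right = y[:k], y[k:]
        return ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()

    candidates = range(10, n - 10)                  # keep both segments non-trivial
    tau_hat = min(candidates, key=lambda k: sse_split(y, k))
    print(tau_true, tau_hat)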


THE DEPENDENCE STRUCTURE OF RUNNING MAXIMA AND MINIMA: RESULTS AND OPTION PRICING APPLICATIONS

MATHEMATICAL FINANCE, Issue 1 2010
Umberto Cherubini
We provide general results for the dependence structure of running maxima (minima) of sets of variables in a model based on (i) Markov dynamics; (ii) no Granger causality; (iii) cross-section dependence. At the time series level, we derive recursive formulas for running minima and maxima. These formulas make it possible to use a "bootstrapping" technique to recursively recover the pricing kernels of barrier options from those of the corresponding European options. We also show that the dependence formulas for running maxima (minima) are completely defined from the copula function representing dependence among levels at the terminal date. The result is applied to multivariate discrete barrier digital products. Barrier Altiplanos are simply priced by (i) bootstrapping the price of univariate barrier products; (ii) evaluating a European Altiplano with these values. [source]


BEHAVIORAL PORTFOLIO SELECTION IN CONTINUOUS TIME

MATHEMATICAL FINANCE, Issue 3 2008
Hanqing Jin
This paper formulates and studies a general continuous-time behavioral portfolio selection model under Kahneman and Tversky's (cumulative) prospect theory, featuring S-shaped utility (value) functions and probability distortions. Unlike the conventional expected utility maximization model, such a behavioral model could be easily mis-formulated (a.k.a. ill-posed) if its different components do not coordinate well with each other. Certain classes of ill-posed models are identified. A systematic approach, which is fundamentally different from the ones employed for the utility model, is developed to solve a well-posed model, assuming a complete market and general Itô processes for asset prices. The optimal terminal wealth positions, derived in fairly explicit forms, possess a surprisingly simple structure reminiscent of a gambling policy betting on a good state of the world while accepting a fixed, known loss in case of a bad one. An example with a two-piece CRRA utility is presented to illustrate the general results obtained, and is solved completely for all admissible parameters. The effect of the behavioral criterion on the risky allocations is finally discussed. [source]
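
For reference, the standard cumulative-prospect-theory functional forms that the model's S-shaped value function and probability distortion generalize (the specific parametrization below is the usual Tversky–Kahneman one, given only as an illustration):

    % Standard CPT forms; the paper allows general S-shaped values and distortions.
    \[
      u(x) \;=\;
      \begin{cases}
         x^{\alpha}, & x \ge 0,\\[2pt]
         -\lambda\, (-x)^{\beta}, & x < 0,
      \end{cases}
      \qquad
      w(p) \;=\; \frac{p^{\gamma}}{\bigl(p^{\gamma} + (1-p)^{\gamma}\bigr)^{1/\gamma}},
    \]
    % with 0 < \alpha, \beta < 1 (concave over gains, convex over losses),
    % \lambda > 1 (loss aversion), and w an inverse-S probability distortion.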


STOCHASTIC HYPERBOLIC DYNAMICS FOR INFINITE-DIMENSIONAL FORWARD RATES AND OPTION PRICING

MATHEMATICAL FINANCE, Issue 1 2005
Shin Ichi Aihara
We model the term structure of interest rates by considering the forward rate as the solution of a stochastic hyperbolic partial differential equation. First, we study the arbitrage-free model of the term structure and explore the completeness of the market. We then derive results for the pricing of general contingent claims. Finally, we obtain an explicit formula for a forward rate cap in the Gaussian framework from the general results. [source]


The play operator on the rectifiable curves in a Hilbert space

MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 11 2008
Vincenzo Recupero
Abstract The vector play operator is the solution operator of a class of evolution variational inequalities arising in continuum mechanics. For regular data, the existence of solutions is easily obtained from general results on maximal monotone operators. If the datum is a continuous function of bounded variation, then the existence of a weak solution is usually proved by means of a time discretization procedure. In this paper we give a short proof of the existence of the play operator on rectifiable curves making use of basic facts of measure theory. We also drop the separability assumptions usually made by other authors. Copyright © 2007 John Wiley & Sons, Ltd. [source]


On first order congruences of lines in ℙ⁴ with irreducible fundamental surface

MATHEMATISCHE NACHRICHTEN, Issue 4 2005
Pietro De Poi
Abstract In this article we study congruences of lines in ℙⁿ, and in particular of order one. After giving general results, we obtain a complete classification in the case of ℙ⁴ in which the fundamental surface F is in fact a variety, i.e. it is integral, and the congruence is the irreducible set of the trisecant lines of F. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Exact boundary controllability of unsteady flows in a network of open canals

MATHEMATISCHE NACHRICHTEN, Issue 3 2005
Tatsien Li
Abstract By means of the general results on the exact boundary controllability for quasilinear hyperbolic systems, the author establishes the exact boundary controllability of unsteady flows in a single open canal and in a network of open canals with star configuration, respectively. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Generic cuts in models of arithmetic

MLQ- MATHEMATICAL LOGIC QUARTERLY, Issue 2 2008
Richard Kaye
Abstract We present some general results concerning the topological space of cuts of a countable model of arithmetic given by a particular indicator Y. The notion of "indicator" is defined in a novel way, without initially specifying what property is indicated, and is used to define a topological space of cuts of the model. Various familiar properties of cuts (strength, regularity, saturation, coding properties) are investigated in this sense, and several results are given stating whether or not the set of cuts having the property is comeagre. A new notion of "generic cut" is introduced and investigated, and it is shown in the case of countable arithmetically saturated models M ⊨ PA that generic cuts exist, indeed the set of generic cuts is comeagre in the sense of Baire, and furthermore that two generic cuts within the same "small interval" of the model are conjugate by an automorphism of the model. The paper concludes by outlining some applications to constructions of cuts satisfying properties incompatible with genericity, and discussing in model-theoretic terms those properties for which there is an indicator Y. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


CONSTANT EFFORT AND CONSTANT QUOTA FISHING POLICIES WITH CUT-OFFS IN A RANDOM ENVIRONMENT

NATURAL RESOURCE MODELING, Issue 2 2001
CARLOS A. BRAUMANN
ABSTRACT. Consider a population subjected to constant effort or constant quota fishing with a general density-dependence population growth function (because that function is poorly known). Consider environmental random fluctuations that either affect an intrinsic growth parameter or birth/death rates, thus resulting in two stochastic differential equations models. From previous results of ours, we obtain conditions for non-extinction and for existence of a population size stationary density. Constant quota (which always leads to extinction in random environments) and constant effort policies are studied; they are hard to implement for extreme population sizes. Introducing cut-offs circumvents these drawbacks. In a deterministic environment, for a wide range of values, cutting-off does not affect the steady-state yield. This is not so in a random environment and we will give expressions showing how steady-state average yield and population size distribution vary as functions of cut-off choices. We illustrate these general results with function plots for the particular case of logistic growth. [source]
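
A rough Euler–Maruyama sketch of the logistic special case with constant-effort harvesting and a lower cut-off (all parameter values are made up; the paper treats general density dependence and derives the stationary quantities analytically):

    # Logistic growth in a random environment with constant-effort harvesting
    # and a lower cut-off below which fishing stops.
    import numpy as np

    rng = np.random.default_rng(5)
    r, K, sigma = 1.0, 100.0, 0.2          # intrinsic rate, capacity, env. noise
    E, cutoff = 0.3, 20.0                  # harvesting effort, no fishing below cutoff
    dt, T = 0.01, 200.0
    n = int(T / dt)

    x = np.empty(n)
    x[0] = 50.0
    for i in range(1, n):
        xi = x[i - 1]
        effort = E if xi > cutoff else 0.0             # cut-off rule
        drift = xi * (r * (1.0 - xi / K) - effort)
        x[i] = max(xi + drift * dt
                   + sigma * xi * np.sqrt(dt) * rng.normal(), 0.0)

    tail = x[n // 2:]                                  # discard the transient
    harvested = tail[tail > cutoff]
    print(round(tail.mean(), 1), round(E * harvested.mean(), 2))  # avg size, avg yield rate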


The application of eigensymmetries of face forms to anomalous scattering and twinning by merohedry in X-ray diffraction

ACTA CRYSTALLOGRAPHICA SECTION A, Issue 3 2010
H. Klapper
The face form (crystal form) {hkl} which corresponds to an X-ray reflection hkl is considered. The eigensymmetry (inherent symmetry) of such a face form can be used to derive general results on the intensities of the corresponding X-ray reflections. Two cases are treated. (i) Non-centrosymmetric crystals exhibiting anomalous scattering: determination of reflections hkl for which Friedel's rule is strictly valid, i.e. I(hkl) = I(−h,−k,−l) (Friedel pair, centric reflection), or violated, i.e. I(hkl) ≠ I(−h,−k,−l) (Bijvoet pair, acentric reflection). It is shown that those reflections hkl for which the corresponding face form {hkl} is centrosymmetric strictly obey Friedel's rule. If the face form {hkl} is non-centrosymmetric, Friedel's rule is violated due to anomalous scattering. (ii) Crystals twinned by merohedry: determination of reflections hkl the intensities of which are affected (or not affected) by the twinning. It is shown that the intensity is affected if the twin element is not a symmetry element of the eigensymmetry of the corresponding face form {hkl}. The intensity is not affected if the twin element belongs to the eigensymmetry of {hkl} ('affected' means that the intensities of the twin-related reflections are different for different twin domain states owing to differences either in geometric structure factors or in anomalous scattering or in both). A simple procedure is presented for the determination of these types of reflections from Tables 10.1.2.2 and 10.1.2.3 of International Tables for Crystallography, Vol. A [Hahn & Klapper (2002). International Tables for Crystallography, Vol. A, Part 10, edited by Th. Hahn, 5th ed. Dordrecht: Kluwer]. The application to crystal-structure determination of crystals twinned by merohedry (reciprocal space) and to X-ray diffraction topographic mapping of twin domains (direct space) is discussed. Relevant data and twinning relations for the 63 possible twin laws by merohedry in the 26 merohedral point groups are presented in Appendices A to D. [source]


General temperature dependence of solar cell performance and implications for device modelling

PROGRESS IN PHOTOVOLTAICS: RESEARCH & APPLICATIONS, Issue 5 2003
Martin A. Green
Solar cell performance generally decreases with increasing temperature, fundamentally owing to increased internal carrier recombination rates, caused by increased carrier concentrations. The temperature dependence of a general solar cell is investigated on the basis of internal device physics, producing general results for the temperature dependence of open-circuit voltage and short-circuit current, as well as recommendations for generic modelling. Copyright © 2003 John Wiley & Sons, Ltd. [source]
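
For orientation, a commonly quoted approximation in the solar-cell literature that is consistent with this picture (standard notation assumed; not necessarily the exact expression derived in the paper):

    % Open-circuit-voltage temperature coefficient: E_{g0}/q is the bandgap
    % voltage extrapolated to T = 0 and \gamma collects the temperature
    % dependences of the saturation-current prefactor.
    \[
      \frac{dV_{oc}}{dT} \;\approx\;
      -\,\frac{E_{g0}/q \;-\; V_{oc} \;+\; \gamma\, kT/q}{T},
    \]
    % which is negative in practice, consistent with the performance loss at
    % higher temperature described above.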


Design of experiments with unknown parameters in variance

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 3 2002
Valerii V. Fedorov
Abstract Model fitting when the variance function depends on unknown parameters is a popular problem in many areas of research. Iterated estimators which are asymptotically equivalent to maximum likelihood estimators are proposed and their convergence is discussed. From a computational point of view, these estimators are very close to the iteratively reweighted least-squares methods. The additive structure of the corresponding information matrices allows us to apply convex design theory which leads to optimal design algorithms. We conclude with examples which illustrate how to bridge our general results with specific applied needs. In particular, a model with experimental costs is introduced and is studied within the normalized design paradigm. Copyright © 2002 John Wiley & Sons, Ltd. [source]
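
A toy sketch of the kind of iteratively reweighted least-squares iteration the abstract alludes to, on a linear model with a log-linear variance function; the model, parameter values, and update rule are illustrative assumptions, not the authors' estimator or design algorithm:

    # Toy IRWLS-type iteration for a model whose variance depends on unknown
    # parameters: y = b0 + b1*x + e, Var(e) = exp(c0 + c1*x).
    import numpy as np

    rng = np.random.default_rng(6)
    n = 400
    x = rng.uniform(0, 4, n)
    X = np.column_stack([np.ones(n), x])
    b_true, c_true = np.array([1.0, 2.0]), np.array([-1.0, 0.6])
    y = X @ b_true + rng.normal(scale=np.exp(0.5 * (X @ c_true)), size=n)

    b = np.linalg.lstsq(X, y, rcond=None)[0]        # start with OLS
    c = np.zeros(2)
    for _ in range(10):
        w = np.exp(-(X @ c))                        # weights = 1 / estimated variance
        WX = X * w[:, None]
        b = np.linalg.solve(X.T @ WX, WX.T @ y)     # weighted LS for the mean parameters
        r2 = (y - X @ b)**2
        # crude update of the variance parameters from log squared residuals;
        # its intercept has a known constant bias, which rescales all weights
        # equally and therefore does not affect b
        c = np.linalg.lstsq(X, np.log(r2 + 1e-12), rcond=None)[0]

    print(b.round(2))                               # close to b_true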