Functional Form
Selected Abstracts

Aggregation Over Firms and Flexible Functional Forms
THE ECONOMIC RECORD, Issue 252 2005
H. YOUN KIM

The present paper presents a generalised class of cost functions suitable for aggregation of firms and considers various flexible functional forms to assess whether they possess desirable aggregation properties. This set of cost functions includes two output functions and subsumes linear and non-linear cost functions employed in existing analyses. While the quadratic and the generalised Leontief and McFadden cost functions satisfy the generalised aggregation condition, the translog cost function and its variants are less capable of possessing desirable aggregation properties. A modified quasi-homothetic translog form is presented that is useful for aggregate analysis. Possible extensions of the generalised aggregation rule are discussed.

Spatial dependence in agricultural land prices: does it exist?
AGRICULTURAL ECONOMICS, Issue 3 2009
Philip Kostov
Keywords: Spatial dependence; Hedonic models; Functional form

Trade-offs arise between spatial dependence and choice of functional form in agricultural land price hedonic models. We discuss these trade-offs and how they can create spurious spatial dependence. Using a land sales dataset with apparent spatial dependence, we implement a semiparametric approach avoiding potential problems with the functional form. The results show that in addition to being nonlinear, the impacts are also characterized by significance thresholds that are difficult to capture in a parametric model. More importantly, we fail to detect any spatial dependence, demonstrating that an inappropriate functional form can indeed be responsible for finding spatial dependence in hedonic models.

Freshwater invasions: using historical data to analyse spread
DIVERSITY AND DISTRIBUTIONS, Issue 1 2007
Sarina E. Loo

Aquatic invasive species cause deleterious environmental and economic impacts, and are rapidly spreading through ecosystems worldwide. Despite this, very few data sets exist that describe both the presence and the absence of invaders over long time periods. We have used Geographical Information Systems (GIS) to analyse time-series data describing the spread of the freshwater invasive New Zealand mudsnail, Potamopyrgus antipodarum, in Victoria, Australia, over 110 years. We have mapped the snail's spread, estimated the percentage of stream length invaded through time, calculated the functional form of the spread rate, and investigated the role that the two proposed vectors, fish stocking and angling, have had in this invasion. Since it was first found in 1895, P. antipodarum has expanded its range in Victoria and now occurs throughout much of the southern and central areas of the state. The north of the state is relatively less invaded than the south, with the division corresponding approximately to the presence of the Great Dividing Range. We show that the snail's range has been increasing at an approximately exponential rate and estimate that 20% of total Victorian stream length is currently invaded. We also show that using long-term data can change the outcome of analyses of the relationship between vectors of spread and invasion status of separate catchments. When our time-series data were aggregated through time, the total numbers of fish stocking events and angling activity were both correlated with invasion. However, when the time-series data were used and the number of fish stocking events calculated up until the date of invasion, no relationships with stocking were found. These results underline the role that time-series data, based on both presences and absences, have to play when investigating the spread of invasive species.
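The approximately exponential range expansion reported above lends itself to a simple log-linear fit. The sketch below uses made-up numbers (the years and invaded percentages are illustrative stand-ins, not the paper's Victorian data) to show how a spread rate and doubling time could be estimated:

```python
import math

# Hypothetical observations: year vs. % of stream length invaded
# (illustrative values only, not the Victorian dataset).
years = [1900, 1920, 1940, 1960, 1980, 2000]
invaded_pct = [0.1, 0.4, 1.5, 4.8, 11.0, 19.5]

# Exponential growth N(t) = N0 * exp(r t) becomes linear after a log
# transform: log N = log N0 + r t, so fit r by ordinary least squares.
t = [y - years[0] for y in years]
logn = [math.log(n) for n in invaded_pct]
tbar = sum(t) / len(t)
lbar = sum(logn) / len(logn)
r = sum((ti - tbar) * (li - lbar) for ti, li in zip(t, logn)) / \
    sum((ti - tbar) ** 2 for ti in t)
n0 = math.exp(lbar - r * tbar)
print(f"estimated growth rate r = {r:.4f} per year")
print(f"doubling time = {math.log(2) / r:.1f} years")
```

The same log transform is how an "approximately exponential" spread rate is usually diagnosed in the first place: if the log of the invaded fraction is close to linear in time, the exponential description is adequate.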
Structural Equations, Treatment Effects, and Econometric Policy Evaluation
ECONOMETRICA, Issue 3 2005
James J. Heckman

This paper uses the marginal treatment effect (MTE) to unify the nonparametric literature on treatment effects with the econometric literature on structural estimation using a nonparametric analog of a policy invariant parameter; to generate a variety of treatment effects from a common semiparametric functional form; to organize the literature on alternative estimators; and to explore what policy questions commonly used estimators in the treatment effect literature answer. A fundamental asymmetry intrinsic to the method of instrumental variables (IV) is noted. Recent advances in IV estimation allow for heterogeneity in responses but not in choices, and the method breaks down when both choice and response equations are heterogeneous in a general way.

A Parametric Approach to Flexible Nonlinear Inference
ECONOMETRICA, Issue 3 2001
James D. Hamilton

This paper proposes a new framework for determining whether a given relationship is nonlinear, what the nonlinearity looks like, and whether it is adequately described by a particular parametric model. The paper studies a regression or forecasting model of the form y_t = μ(x_t) + ε_t, where the functional form of μ(·) is unknown. We propose viewing μ(·) itself as the outcome of a random process. The paper introduces a new stationary random field m(·) that generalizes finite-differenced Brownian motion to a vector field and whose realizations could represent a broad class of possible forms for μ(·). We view the parameters that characterize the relation between a given realization of m(·) and the particular value of μ(·) for a given sample as population parameters to be estimated by maximum likelihood or Bayesian methods. We show that the resulting inference about the functional relation also yields consistent estimates for a broad class of deterministic functions μ(·). The paper further develops a new test of the null hypothesis of linearity based on the Lagrange multiplier principle and small-sample confidence intervals based on numerical Bayesian methods. An empirical application suggests that properly accounting for the nonlinearity of the inflation-unemployment trade-off may explain the previously reported uneven empirical success of the Phillips Curve.

Evaluation of detergents for the soluble expression of α-helical and β-barrel-type integral membrane proteins by a preparative scale individual cell-free expression system
FEBS JOURNAL, Issue 23 2005
Christian Klammt

Cell-free expression has become a highly promising tool for the fast and efficient production of integral membrane proteins. The proteins can be produced as precipitates that solubilize in mild detergents, usually without any prior denaturation step. Alternatively, membrane proteins can be synthesized in a soluble form by adding detergents to the cell-free system. However, the effects of a representative variety of detergents on the production, solubility and activity of a wider range of membrane proteins upon cell-free expression are currently unknown. We therefore analyzed the cell-free expression of three structurally very different membrane proteins, namely the bacterial α-helical multidrug transporter, EmrE, the β-barrel nucleoside transporter, Tsx, and the porcine vasopressin receptor of the eukaryotic superfamily of G-protein coupled receptors. All three membrane proteins could be produced in amounts of several mg per one ml of reaction mixture. In general, the detergent 1-myristoyl-2-hydroxy-sn-glycero-3-[phospho-rac-(1-glycerol)] was found to be most effective for the resolubilization of membrane protein precipitates, while long chain polyoxyethylene-alkyl-ethers proved to be most suitable for the soluble expression of all three types of membrane proteins.
The yield of soluble expressed membrane protein remained relatively stable above a certain threshold concentration of the detergents. We report, for the first time, the high-level cell-free expression of a β-barrel-type membrane protein in a functional form. Structural and functional variations of the analyzed membrane proteins are evident that correspond with the mode of expression and that depend on the supplied detergent.

Autophosphorylation of Archaeoglobus fulgidus Rio2 and crystal structures of its nucleotide-metal ion complexes
FEBS JOURNAL, Issue 11 2005
Nicole LaRonde-LeBlanc

The highly conserved, atypical RIO serine protein kinases are found in all organisms, from archaea to man. In yeast, the kinase activity of Rio2 is necessary for the final processing step of maturing the 18S ribosomal rRNA. We have previously shown that the Rio2 protein from Archaeoglobus fulgidus contains both a small kinase domain and an N-terminal winged helix domain. Previously solved structures using crystals soaked in nucleotides and Mg2+ or Mn2+ showed bound nucleotide but no ordered metal ions, leading us to the conclusion that they did not represent an active conformation of the enzyme. To determine the functional form of Rio2, we crystallized it after incubation with ATP or ADP and Mn2+. Co-crystal structures of Rio2-ATP-Mn and Rio2-ADP-Mn were solved at 1.84 and 1.75 Å resolution, respectively. The γ-phosphate of ATP is firmly positioned in a manner clearly distinct from its location in canonical serine kinases. Comparison of the Rio2-ATP-Mn complex with the Rio2 structure with no added nucleotides and with the ADP complex indicates that a flexible portion of the Rio2 molecule becomes ordered through direct interaction between His126 and the γ-phosphate oxygen of ATP. Phosphopeptide mapping of the autophosphorylation site of Rio2 identified Ser128, within the flexible loop and directly adjacent to the part that becomes ordered in response to ATP, as the target. These results give us further information about the nature of the active site of Rio2 kinase and suggest a mechanism of regulation of its enzymatic activity.

Effects of ownership, subsidization and teaching activities on hospital costs in Switzerland
HEALTH ECONOMICS, Issue 3 2008
Mehdi Farsi

This paper explores the cost structure of Swiss hospitals, focusing on differences due to teaching activities and those related to ownership and subsidization types. A stochastic total cost frontier with a Cobb-Douglas functional form has been estimated for a panel of 148 general hospitals over the six-year period from 1998 to 2003. Inpatient cases adjusted by DRG cost weights and ambulatory revenues are considered as two separate outputs. The adopted econometric specification allows for unobserved heterogeneity across hospitals. The results suggest that teaching activities are an important cost-driving factor and that hospitals with a broader range of specialization are relatively more costly. The excess costs of university hospitals can be explained by more extensive teaching activities as well as the relative complexity of the offered medical treatments from a teaching point of view. However, even after controlling for such differences, university hospitals have shown a relatively low cost-efficiency, especially in the first two or three years of the sample period. The analysis does not provide any evidence of significant efficiency differences across ownership/subsidy categories. Copyright © 2007 John Wiley & Sons, Ltd.

How much confidence should we place in efficiency estimates?
HEALTH ECONOMICS, Issue 11 2003
Andrew Street
Article first published online: 3 DEC 200

Ordinary least squares (OLS) and stochastic frontier (SF) analyses are commonly used to estimate industry-level and firm-specific efficiency.
Using cross-sectional data for English public hospitals, a total cost function based on a specification developed by the English Department of Health is estimated. Confidence intervals are calculated around the OLS residuals and around the inefficiency component of the SF residuals. Sensitivity analysis is conducted to assess whether conclusions about relative performance are robust to choices of error distribution, functional form and model specification. It is concluded that estimates of relative hospital efficiency are sensitive to estimation decisions and that little confidence can be placed in the point estimates for individual hospitals. The use of these techniques to set annual performance targets should be avoided. Copyright © 2002 John Wiley & Sons, Ltd.

Does Prospective Payment Really Contain Nursing Home Costs?
HEALTH SERVICES RESEARCH, Issue 2 2002
Li-Wu Chen

Objective. To examine whether nursing homes would behave more efficiently, without compromising their quality of care, under prospective payment.
Data Sources. Four data sets for 1994: the Skilled Nursing Facility Minimum Data Set, the Online Survey Certification and Reporting System file, the Area Resource File, and the Hospital Wage Indices File. A national sample of 4,635 nursing homes is included in the analysis.
Study Design. Using a modified hybrid functional form to estimate nursing home costs, we distinguish our study from previous research by controlling for quality differences (related to both care and life) and addressing the issues of output and quality endogeneity, as well as using more recent national data. Factor analysis was used to operationalize quality variables. To address the endogeneity problems, instrumental measures were created for nursing home output and quality variables.
Principal Findings. Nursing homes in states using prospective payment systems do not have lower costs than their counterpart facilities under retrospective cost-based payment systems, after quality differences among facilities are controlled for and the endogeneity problem of quality variables is addressed.
Conclusions. The effects of prospective payment on nursing home cost reduction may be through quality cuts, rather than cost efficiency. If nursing home payments under prospective payment systems are not adjusted for quality, nursing homes may respond by cutting their quality levels, rather than controlling costs. Future outcomes research may provide useful insights into the adjustment of quality in the design of prospective payment for nursing home care.

Numerical modelling of fluid flow in microscopic images of granular materials
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 1 2002
E. Masad

A program for the simulation of two-dimensional (2-D) fluid flow at the microstructural level of a saturated anisotropic granular medium is presented. The program provides a numerical solution to the complete set of Navier-Stokes equations without a priori assumptions on the viscous or convection components. This is especially suited for the simulation of the flow of fluids with different density and viscosity values and for a wide range of granular material porosity. The analytical solution for fluid flow in a simple microstructure of porous medium is used to verify the computer program. Subsequently, the flow field is computed within microscopic images of granular material that differ in porosity, particle size and particle shape. The computed flow fields are shown to follow certain paths depending on air void size and connectivity. The permeability tensor coefficients are derived from the flow fields, and their values are shown to compare well with laboratory experimental data on glass beads, Ottawa sand and silica sands.
The directional distribution of permeability is expressed in a functional form and its anisotropy is quantified. Permeability anisotropy is found to be more pronounced in the silica sand medium that consists of elongated particles. Copyright © 2001 John Wiley & Sons, Ltd.

A dynamic approach for evaluating parameters in a numerical method
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 1 2005
A. A. Oberai

A new methodology for evaluating unknown parameters in a numerical method for solving a partial differential equation is developed. The main result is the identification of a functional form for the parameters which is derived by requiring the numerical method to yield 'optimal' solutions over a set of finite-dimensional function spaces. The functional depends upon the numerical solution, the forcing function, the set of function spaces, and the definition of the optimal solution. It does not require exact or approximate analytical solutions of the continuous problem, and is derived from an extension of the variational Germano identity. This methodology is applied to the one-dimensional, linear advection-diffusion problem to yield a non-linear dynamic diffusivity method. It is found that this method yields results that are commensurate with those of the SUPG method. The same methodology is then used to evaluate the Smagorinsky eddy viscosity for the large eddy simulation of the decay of homogeneous isotropic turbulence in three dimensions. In this case the resulting method is found to be more accurate than the constant-coefficient and the traditional dynamic versions of the Smagorinsky model. Copyright © 2004 John Wiley & Sons, Ltd.

A novel compact piecewise-linear representation
INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 1 2005
Chengtao Wen

A new compact MAX representation for 2-D continuous piecewise-linear (PWL) functions is developed in this paper. The representation is promising since it can be easily generalized into higher dimensions. We also establish the explicit functional form of the basis function and demonstrate that the proposed basis function is the elementary 'building block' from which a fully general 2-D PWL function can be constructed. In addition, we reveal the relationship of the basis function with the minimal degenerate intersection and the Hinging Hyperplane, which shows that the MAX model can unify Chua's canonical expression, Li's representation, the lattice PWL function and Breiman's hinge finding algorithm into one common theoretical framework. Copyright © 2005 John Wiley & Sons, Ltd.

Simple radiative models for surface warming and upper-troposphere cooling
INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 11 2009
P. N. Keating

A simple model of greenhouse-gas radiative processes, intended to make the surface-warming effect of water-vapour and CO2 absorption more readily understandable, leads to a conclusion that the greenhouse gases also cool the upper troposphere. The results from the simple model are compared with experimental observations, and a functional form for the decline of vertical convection and water-vapour radiation near the tropopause is derived from previously unexplained high-altitude cooling-trend data. A possible reason why global climate models do not show the observed upper-troposphere cooling trend is tentatively suggested. Copyright © 2008 Royal Meteorological Society

A novel blind super-resolution technique based on the improved Poisson maximum a posteriori algorithm
INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 6 2002
Min-Cheng Pan

Image restoration has received considerable attention. In many practical situations, unfortunately, the blur is often unknown, and little information is available about the true image. Therefore, the true image is identified directly from the corrupted image by using partial or no information about the blurring process and the true image. In addition, noise will be amplified to induce severe ringing artifacts in the process of restoration. This article proposes a novel technique for blind super-resolution, whose mechanism alternates between deconvolution of the image and the point spread function based on the improved Poisson maximum a posteriori (MAP) super-resolution algorithm. This improved Poisson MAP super-resolution algorithm incorporates the functional form of a Wiener filter into the Poisson MAP algorithm operating on the edge image, further reducing noise effects and speeding restoration. Compared with that based on the Poisson MAP, the novel blind super-resolution technique presents experimental results from 1-D signals and 2-D images corrupted by Gaussian point spread functions and additive noise with significant improvements in quality. © 2003 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 12, 239-246, 2002; published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10032

Complete basis set extrapolations of dispersion, exchange, and coupled-clusters contributions to the interaction energy: a helium dimer study
INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 12 2008
Małgorzata Jeziorska

Effectiveness of various extrapolation schemes in predicting complete basis set (CBS) values of interaction energies has been investigated for the helium dimer as a function of interatomic separation R. The investigations were performed separately for the leading dispersion and exchange contributions to the interaction energy and for the interaction energy computed using the coupled cluster method with single and double excitations (CCSD). For all these contributions, practically exact reference values were obtained from Gaussian-type geminal calculations.
Sequences of orbital basis sets augmented with diffuse and bond functions, or augmented with two sets of diffuse functions, have been employed, with cardinal numbers up to X = 7. The functional form E_X = E_CBS + A(X − k)^(−α) was applied for the extrapolations, where E_X is the contribution to the interaction energy computed with a basis set of cardinal number X. The main conclusion of this work is that CBS extrapolations of an appropriate functional form generally improve the accuracy of the interaction energies at a very small additional computational cost (of the order of 10%) and should be recommended in calculations of interatomic and intermolecular potentials. The effectiveness of the extrapolations significantly depends, however, on the interatomic separation R and on the composition of the basis set. Basis sets with midbond functions, well known to provide at a given size much more accurate nonextrapolated results than bases lacking such functions, have been found to perform best also in extrapolations. The X^(−1) extrapolations of dispersion energies computed with midbond functions turned out to be very efficient (except at large R), reducing the errors by an order of magnitude for small X and by a factor of two for large X (where the errors of nonextrapolated results are already very small). If midbond functions are not used, the X^(−3) formula is most appropriate for the dispersion energies. For the exchange component of the interaction energy, the best results are obtained, in both types of basis sets, with the X^(−4) extrapolation, which leads (in both cases) to almost an order of magnitude reduction of the error. The X^(−3) and (X − 1)^(−3) extrapolations also work well, but give smaller improvements. The correlation component of the CCSD interaction energy extrapolates best with α between 2 and 3 for bases with midbond functions and between 3 and 4 for bases without such functions. © 2008 Wiley Periodicals, Inc.
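The quoted functional form E_X = E_CBS + A(X − k)^(−α) turns into a two-point extrapolation once α and k are fixed: two finite-basis energies determine the two unknowns E_CBS and A. A minimal sketch, assuming the common k = 0, α = 3 special case and hypothetical correlation energies (the numbers are illustrative, not the paper's helium results):

```python
def cbs_two_point(e_small, e_large, x_small, x_large, alpha=3.0):
    """Two-point extrapolation assuming E_X = E_CBS + A * X**(-alpha),
    the k = 0 special case of the form quoted in the abstract."""
    w_s = x_small ** (-alpha)
    w_l = x_large ** (-alpha)
    # Solve the 2x2 linear system for A and then E_CBS.
    a = (e_small - e_large) / (w_s - w_l)
    e_cbs = e_small - a * w_s
    return e_cbs, a

# Hypothetical correlation energies (hartree) for cardinal numbers 3 and 4.
e_cbs, a = cbs_two_point(-0.27300, -0.27650, 3, 4)
print(f"E_CBS = {e_cbs:.5f} Eh")
```

The k = 0, α = 3 case is the familiar X^(−3) two-point formula for correlation energies; other (α, k) choices discussed in the abstract slot into the same two-equation solve.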
Estimating Long-term Trends in Tropospheric Ozone Levels
INTERNATIONAL STATISTICAL REVIEW, Issue 1 2002
Michael Smith

This paper develops Bayesian methodology for estimating long-term trends in the daily maxima of tropospheric ozone. The methods are then applied to study long-term trends in ozone at six monitoring sites in the state of Texas. The methodology controls for the effects of meteorological variables because it is known that variables such as temperature, wind speed and humidity substantially affect the formation of tropospheric ozone. A semiparametric regression model is estimated in which a nonparametric trivariate surface is used to model the relationship between ozone and these meteorological variables because, while it is known that the relationship is a complex nonlinear one, its functional form is unknown. The model also allows for the effects of wind direction and seasonality. The errors are modeled as an autoregression, which is methodologically challenging because the observations are unequally spaced over time. Each function in the model is represented as a linear combination of basis functions located at all of the design points. We also estimate an appropriate data transformation simultaneously with the functions. The functions are estimated nonparametrically by a Bayesian hierarchical model that uses indicator variables to allow a non-zero probability that the coefficient of each basis term is zero. The entire model, including the nonparametric surfaces, data transformation and autoregression for the unequally spaced errors, is estimated using a Markov chain Monte Carlo sampling scheme with a computationally efficient transition kernel for generating the indicator variables. The empirical results indicate that key meteorological variables explain most of the variation in daily ozone maxima through a nonlinear interaction and that their effects are consistent across the six sites.
However, the estimated trends vary considerably from site to site, even within the same city.

Do Stock Prices Fully Reflect the Implications of Special Items for Future Earnings?
JOURNAL OF ACCOUNTING RESEARCH, Issue 3 2002
David Burgstahler

Previous research (Rendleman, Jones, and Latane [1987]; Freeman and Tse [1989]; Bernard and Thomas [1990]; and Ball and Bartov [1996]) indicates that security prices do not fully reflect predictable elements of the relation between current and future quarterly earnings. We investigate whether this finding also holds for the special items component of earnings. Given that special items are prominent in financial analysis and are assumed to have relatively straightforward implications for future earnings (special items are assumed to be largely transitory), one might expect that prices would fully impound the implications of special items for future earnings. Based on the "two-equation" approach used in Ball and Bartov [1996] and other studies (e.g., Abarbanell and Bernard [1992]; Sloan [1996]; Rangan and Sloan [1998]; and Soffer and Lys [1999]), we find that while prices reflect relatively more of the effects of special items compared to other earnings components, we still reject the null hypothesis that prices fully impound the implications of special items for future earnings. The "two-equation" approach assesses the consistency of coefficients in a pair of prediction and pricing equations, and thus depends on an assumed functional form. However, a less structured abnormal returns methodology like that used in Bernard and Thomas [1990] also supports the conclusion that the implications of special items are not fully impounded in prices. Specifically, a trading strategy based only on the sign of special items earns small but statistically significant abnormal returns during a 3-day window four quarters subsequent to the original announcement of special items.

Land of addicts? An empirical investigation of habit-based asset pricing models
JOURNAL OF APPLIED ECONOMETRICS, Issue 7 2009
Xiaohong Chen

This paper studies the ability of a general class of habit-based asset pricing models to match the conditional moment restrictions implied by asset pricing theory. We treat the functional form of the habit as unknown, and estimate it along with the rest of the model's finite dimensional parameters. Using quarterly data on consumption growth, asset returns and instruments, our empirical results indicate that the estimated habit function is nonlinear, that habit formation is better described as internal rather than external, and that the estimated time-preference parameter and the power utility parameter are sensible. In addition, the estimated habit function generates a positive stochastic discount factor (SDF) proxy and performs well in explaining cross-sectional stock return data. We find that an internal habit SDF proxy can explain a cross-section of size and book-market sorted portfolio equity returns better than (i) the Fama and French (1993) three-factor model, (ii) the Lettau and Ludvigson (2001b) scaled consumption CAPM model, (iii) an external habit SDF proxy, (iv) the classic CAPM, and (v) the classic consumption CAPM. Copyright © 2009 John Wiley & Sons, Ltd.

Food expenditure patterns of the Hispanic population in the United States
AGRIBUSINESS: AN INTERNATIONAL JOURNAL, Issue 2 2002
Bruno A. Lanfranco

Food expenditure patterns were analyzed for Hispanic households in the United States. Engel curves for three food categories, total food (TF), food eaten at home (FAH), and food eaten away from home (FAFH), were estimated using a semilogarithmic functional form. The models for TF and FAH were estimated by OLS, using heteroscedasticity-consistent estimators. The equation for FAFH was estimated using a two-part model, with the level equation estimated by least squares with corrections for heteroscedasticity, using only the observations for which a positive amount of expenditures on FAFH was reported. The estimated income elasticities of demand for food for Hispanic households were 0.29 for TF, 0.21 for FAH, and 0.49 for FAFH. Household size elasticities were 0.32, 0.40, and 0.07, respectively. Our analysis indicates that Hispanic households devoted a higher proportion of their budget to FAH, 25.8%, than the average American household, while the proportion spent on FAFH was only 3.6%. [EconLit citations: L610.] © 2002 Wiley Periodicals, Inc.

An assessment of the EU growth forecasts under asymmetric preferences
JOURNAL OF FORECASTING, Issue 6 2008
George A. Christodoulakis

EU Commission forecasts are used as a benchmark within the framework of the Stability and Growth Pact, aimed at providing a prudential view of the economic outlook, especially for member states in an Excessive Deficit Procedure. Following Elliott et al. (2005), we assess whether there exist asymmetries in the loss preference of the Commission's GDP growth forecasts from 1969 to 2004. Our empirical evidence is robust across information sets and reveals that the loss preferences tend to show some variation in terms of asymmetry across member states. Given certain conditions concerning the time horizon of forecasts and the functional form of the loss preferences, the evidence further reveals that the Commission forecasting exercise could be subject to caveats. Copyright © 2008 John Wiley & Sons, Ltd.

Mitochondrial localization of DJ-1 leads to enhanced neuroprotection
JOURNAL OF NEUROSCIENCE RESEARCH, Issue 1 2009
Eunsung Junn

Mutations in DJ-1 (PARK7) cause recessively inherited Parkinson's disease. DJ-1 is a multifunctional protein with antioxidant and transcription modulatory activity. Its localization in cytoplasm, mitochondria, and nucleus is recognized, but the relevance of this subcellular compartmentalization to its cytoprotective activity is not fully understood. Here we report that under basal conditions DJ-1 is present mostly in the cytoplasm and to a lesser extent in the mitochondria and nucleus of dopaminergic neuroblastoma SK-N-BE(2)C cells. Upon oxidant challenge, more DJ-1 translocates to mitochondria within 3 hr and subsequently to the nucleus by 12 hr. The predominant DJ-1 species in both mitochondria and nucleus is a dimer, believed to be the functional form. Mutating cysteine 106, 53, or 46 had no impact on the translocation of DJ-1 to mitochondria. To study the relative neuroprotective activity of DJ-1 in mitochondria and nucleus, DJ-1 cDNA constructs fused to the appropriate localization signal were transfected into cells. Compared with 30% protection against oxidant-induced cell death in wild-type DJ-1-transfected cells, mitochondrial targeting of DJ-1 provided a significantly stronger (55%) cytoprotection based on lactate dehydrogenase release.
Nuclear targeting of DJ-1 preserved cells as effectively as the wild-type protein. These observations suggest that the time frames for the translocation of DJ-1 from the cytoplasm to mitochondria and to the nucleus following oxidative stress are quite different, and that dimerized DJ-1 in mitochondria is functional as an antioxidant in a manner not related to cysteine modification. These findings further highlight the multifaceted functions of DJ-1 as a cytoprotector in different cellular compartments. © 2008 Wiley-Liss, Inc. [source] General linearized biexponential model for QSAR data showing bilinear-type distribution JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 11 2005 Peter Buchwald Abstract A major impediment in many QSAR-type analyses is that the data show a maximum or minimum and can no longer be adequately described by linear functions, which provide unrivaled simplicity and usually give a good description over more restricted ranges. Here, a general linearized biexponential (LinBiExp) model is proposed that can adequately describe data showing a bilinear-type distribution as a function not just of the often-employed lipophilicity descriptors (e.g., log P) but of any descriptor (e.g., molecular volume). Contrary to Hansch-type parabolic models, LinBiExp allows the natural extension of linear models and the fitting of asymmetrical data. It is also more general and intuitive than Kubinyi's model, as it has a more natural functional form. It was obtained by a differential equation-based approach starting from very general assumptions that cover both static equilibria and first-order kinetic processes, and that involve abstract processes through which the concentration of the compound of interest in an assumed "effect" compartment is connected to its "external" concentration.
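The bilinear-type behaviour described above can be illustrated numerically: writing the curve as the negative logarithm of a sum of two exponentials yields a smooth function that follows one straight-line asymptote on each side of the optimum. The sketch below is a minimal NumPy illustration of that general idea; the parameter names and values are invented and are not the published LinBiExp parameterization.

```python
import numpy as np

def linbiexp(x, a1, b1, a2, b2):
    """Smooth bilinear curve: the negative log of a sum of two exponentials.

    Far to the left it approaches the line a1*x + b1; far to the right it
    approaches a2*x + b2 (with a1 > 0 > a2), so the two linear regimes are
    joined by a smooth maximum rather than a sharp kink.
    """
    # -log(exp(-line1) + exp(-line2)) is a smooth minimum of the two lines;
    # logaddexp keeps the evaluation numerically stable.
    return -np.logaddexp(-(a1 * x + b1), -(a2 * x + b2))

x = np.linspace(-10.0, 10.0, 201)
y = linbiexp(x, 1.0, 0.0, -0.5, 2.0)  # rises with slope 1, falls with slope -0.5
```

Fitting such a function to observed activity data with a nonlinear least-squares routine then recovers the two asymptotic slopes and the location of the optimum directly.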
Physicochemical aspects placing LinBiExp within the framework of linear free energy relationship (LFER) approaches are presented, together with illustrative applications in various fields such as toxicity, antimicrobial activity, anticholinergic activity, and glucocorticoid receptor binding. © 2005 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 94:2355-2379, 2005 [source] Another look into the effect of premarital cohabitation on duration of marriage: an approach based on matching JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2009 Stefano Mazzuco Summary. The paper proposes an alternative approach to studying the effect of premarital cohabitation on subsequent duration of marriage, on the basis of a strong ignorability assumption. The approach is called propensity score matching and consists of computing survival functions conditional on a function of observed variables (the propensity score), thus eliminating any selection derived from these variables. In this way, it is possible to identify a time-varying effect of cohabitation without making any assumption regarding either its shape or the functional form of covariate effects. The output of the matching method is the difference between the survival functions of treated and untreated individuals at each time point. Results show that the cohabitation effect on duration of marriage is indeed time-varying, being close to zero for the first 2-3 years and rising considerably in the following years. [source] Testing for Neglected Nonlinearity in Cointegrating Relationships JOURNAL OF TIME SERIES ANALYSIS, Issue 6 2007 Andrew P. Blake C32; C45 Abstract. This article proposes pure significance tests for the absence of nonlinearity in cointegrating relationships. No assumption about the functional form of the nonlinearity is made.
It is envisaged that the application of such tests could form the first step towards specifying a nonlinear cointegrating relationship for empirical modelling. The asymptotic and small-sample properties of our tests are investigated, with special attention paid to the role of nuisance parameters and a potential resolution using the bootstrap. [source] Fundamental Molecular Weight Distribution of RAFT Polymers MACROMOLECULAR REACTION ENGINEERING, Issue 5 2008 Hidetaka Tobita Abstract The molecular weight distribution formed in an ideal reversible addition-fragmentation chain transfer (RAFT)-mediated radical polymerization is considered theoretically. In this polymerization, the addition to the RAFT agent is reversible, and the active period on the same chain can be repeated, via the two-armed intermediate, with probability 1/2. This possible repetition is accounted for by introducing a new concept, the overall active/dormant periods. With this method, the apparent functional form of the molecular weight distribution (MWD) reduces to that proposed for ideal living radical polymers (Tobita, Macromol. Theory Simul. 2006, 15, 12). The repetition results in a broader MWD than without the repetition. Formulae for the average molecular weights formed in a batch reactor and in a continuous stirred tank reactor are also presented. [source] Semiparametric Estimation of a Duration Model OXFORD BULLETIN OF ECONOMICS & STATISTICS, Issue 5 2001 A. Alonso Anton Within the framework of the proportional hazard model proposed in Cox (1972), Han and Hausman (1990) treat the logarithm of the integrated baseline hazard function as constant in each time period. We, however, propose an alternative semiparametric estimator of the parameters of the covariate part. The estimator is considered semiparametric since no prespecified functional form for the error terms (or certain convolution) is needed.
This estimator, proposed in Lewbel (2000) in another context, offers at least four advantages: the distribution of the latent-variable error is unknown and may be related to the regressors; it takes censored observations into account; it allows for heterogeneity of unknown form; and it is quite easy to implement, since the estimator does not require numerical searches. Using the Spanish Labour Force Survey, we empirically compare the results of estimating several alternative models, based mainly on the estimator proposed in Han and Hausman (1990) and on our semiparametric estimator. [source] Principles for modeling propensity scores in medical research: a systematic literature review PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, Issue 12 2004 Sherry Weitzen PhD Abstract Purpose To document which established criteria for logistic regression modeling researchers consider when using propensity scores in observational studies. Methods We performed a systematic review, searching Medline and the Science Citation Index to identify observational studies published in 2001 that addressed clinical questions using propensity score methods to adjust for treatment assignment. We abstracted aspects of propensity score model development (e.g., variable selection criteria, continuous variables included in the correct functional form, interaction inclusion criteria), model discrimination, and goodness of fit for the 47 studies meeting the inclusion criteria. Results We found few studies reporting on propensity score model development or evaluation of model fit. Conclusions Reporting of aspects related to propensity score model development is limited and raises questions about the value of these principles in developing propensity scores from which unbiased treatment effects are estimated. Copyright © 2004 John Wiley & Sons, Ltd.
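As a toy numerical companion to the modeling criteria discussed above (every covariate, coefficient, and sample size below is invented for illustration), the following sketch fits a logistic propensity model by Newton-Raphson and then checks covariate balance, via standardized mean differences, before and after inverse-probability weighting:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(50.0, 10.0, n)          # hypothetical confounders
severity = rng.normal(0.0, 1.0, n)
true_logit = 0.05 * (age - 50.0) + 0.8 * severity
treated = rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))

# Propensity model: logistic regression of treatment on both confounders,
# fitted by Newton-Raphson (no external estimation library needed).
X = np.column_stack([np.ones(n), age, severity])
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (treated - p))
ps = 1.0 / (1.0 + np.exp(-X @ beta))     # estimated propensity scores

def smd(cov, t, w=None):
    """Standardized mean difference of a covariate between the two groups."""
    w = np.ones_like(cov) if w is None else w
    m1 = np.average(cov[t], weights=w[t])
    m0 = np.average(cov[~t], weights=w[~t])
    pooled_sd = np.sqrt((cov[t].var() + cov[~t].var()) / 2.0)
    return (m1 - m0) / pooled_sd

ipw = np.where(treated, 1.0 / ps, 1.0 / (1.0 - ps))  # inverse-probability weights
bal_before = smd(severity, treated)
bal_after = smd(severity, treated, ipw)
```

With a correctly specified model the weighted standardized difference collapses toward zero; a residual imbalance would flag exactly the kind of functional-form problem the review asks authors to report.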
[source] The quantification of carbon dioxide in humid air and exhaled breath by selected ion flow tube mass spectrometry RAPID COMMUNICATIONS IN MASS SPECTROMETRY, Issue 10 2009 David Smith The reactions of carbon dioxide, CO2, with the precursor ions used for selected ion flow tube mass spectrometry, SIFT-MS, analyses, viz. H3O+, NO+ and O2+, are so slow that the presence of CO2 in exhaled breath has, until recently, not had to be accounted for in SIFT-MS analyses of breath. It does, however, have to be accounted for in the analysis of acetaldehyde in breath, because an overlap occurs between the monohydrate of protonated acetaldehyde and the weakly bound adduct ion, H3O+CO2, formed by the slow association reaction of the precursor ion H3O+ with CO2 molecules. The understanding of the kinetics of formation and the loss rates of the relevant ions, gained from experimentation using the new generation of more sensitive SIFT-MS instruments, now allows accurate quantification of CO2 in breath using the level of the H3O+CO2 adduct ion. However, this is complicated by the rapid reaction of H3O+CO2 with water vapour molecules, H2O, which are abundant in exhaled breath. Thus, a study has been carried out of the formation of this adduct ion by the slow three-body association reaction of H3O+ with CO2 and of its rapid loss in the two-body reaction with H2O molecules. It is seen that the signal level of the H3O+CO2 adduct ion is sensitively dependent on the humidity (H2O concentration) of the sample to be analysed, and a functional form of this dependence has been obtained. This has resulted in an appropriate extension of the SIFT-MS software and kinetics library that allows accurate measurement of CO2 levels in air samples, ranging from very low percentage levels (0.03%, typical of tropospheric air) to the 6% level that is about the upper limit in exhaled breath.
Thus, the level of CO2 can be traced through single exhalation cycles in real time, along with that of water vapour, also close to the 6% level, and of trace gas metabolites that are present at only a few parts-per-billion. This has added a further dimension to the analysis of major and trace compounds in breath using SIFT-MS. Copyright © 2009 John Wiley & Sons, Ltd. [source]
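In a simple steady-state picture, the humidity dependence described above amounts to the H3O+CO2 signal growing in proportion to the CO2 number density and falling inversely with the H2O number density. The sketch below illustrates that balance; the rate coefficients, carrier-gas density, and sample density are invented placeholders for the illustration, not the calibrated SIFT-MS values.

```python
# Steady-state sketch of the H3O+.CO2 adduct level: slow three-body
# formation from H3O+ + CO2 balanced against fast two-body loss to H2O.
# All numbers here are illustrative placeholders, not measured values.
K_FORM = 1e-28   # cm^6 s^-1, effective three-body association coefficient
K_LOSS = 2e-9    # cm^3 s^-1, two-body loss of H3O+.CO2 to H2O
HELIUM = 2.5e16  # cm^-3, carrier-gas number density

def adduct_ratio(co2_frac, h2o_frac, sample_density=1e15):
    """Steady-state [H3O+.CO2]/[H3O+] for given CO2 and H2O sample fractions."""
    co2 = co2_frac * sample_density
    h2o = h2o_frac * sample_density
    # formation rate K_FORM*HELIUM*[H3O+][CO2] balances loss K_LOSS*[adduct][H2O]
    return (K_FORM * HELIUM * co2) / (K_LOSS * h2o)

r_dry = adduct_ratio(0.04, 0.01)   # 4% CO2, low humidity
r_wet = adduct_ratio(0.04, 0.06)   # same CO2, breath-like 6% humidity
```

Because the ratio scales as [CO2]/[H2O], the raw adduct signal over-reads CO2 in dry samples and under-reads it in humid breath, hence the need for the humidity-dependent correction in the software described in the abstract.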