Information Content
Selected Abstracts

The Notching Rule for Subordinated Debt and the Information Content of Debt Rating
FINANCIAL MANAGEMENT, Issue 2 2010
Kose John
This paper provides new evidence regarding the information content of debt ratings. We show that noninvestment grade subordinated issues are consistently priced too high (the yield is too low), and the reverse is true for some investment grade bonds. We relate this empirical bias to a notching rule of thumb that is used in order to rate subordinated debt without expending additional resources for information production. We propose an explanation for these findings based upon a balance between an attempt to please the companies that pay the raters versus a concern for lawsuits and regulatory investigations should ratings be too optimistic. [source]

On the Information Content of Bank Loan-loss Disclosures: A Theory and Evidence from Japan
INTERNATIONAL REVIEW OF FINANCE, Issue 1 2000
Scott Gibson
We develop a model in which banks use loan-loss disclosures to signal private information about the credit quality of their loan portfolios. The cross-sectional predictions generated by the model are shown to help to explain previously documented counterintuitive empirical regularities for US banks. We also take advantage of a recent Japanese regulatory policy shift, which first forbade the reporting of restructured loan balances and then forced full disclosure. This policy shift allows us to address a common difficulty in testing signalling theories, in that we are able to construct a timely proxy for the private information that we allege is being signalled. Consistent with our signalling model, we find that banks taking the largest write-offs turn out later to be the strongest banks, with the fewest restructured loans. [source]

Assessing the Information Content of Mark-to-Market Accounting with Mixed Attributes: The Case of Cash Flow Hedges
JOURNAL OF ACCOUNTING RESEARCH, Issue 2 2007
FRANK GIGLER
We examine how outsiders rationally interpret a reported loss on derivatives when the application of mark-to-market accounting to cash flow hedges creates a mixed attribute problem. We find that because of the mixed attribute problem, the information content of mark-to-market accounting is related to the information content of historical cost accounting in a very specific way. This relationship allows us to identify the circumstances under which mark-to-market accounting facilitates and when it detracts from the objective of providing an early warning of potential financial distress. We show that the reporting of an impending derivative loss by a distressed firm can actually lead outsiders to infer that the firm is in a better financial position than what they would have inferred under the silence associated with historical cost accounting. Without the mixed attribute problem, mark-to-market accounting would always yield more accurate assessments of the firm's financial position. [source]
Discussion of Assessing the Information Content of Mark-to-Market Accounting with Mixed Attributes: The Case of Cash Flow Hedges and Market Transparency and the Accounting Regime
JOURNAL OF ACCOUNTING RESEARCH, Issue 2 2007
HYUN SONG SHIN
First page of article [source]

Managerial Ownership, Information Content of Earnings, and Discretionary Accruals in a Non-US Setting
JOURNAL OF BUSINESS FINANCE & ACCOUNTING, Issue 7-8 2002
Gorm Gabrielsen
This study employs Danish data to examine the empirical relationship between the proportion of managerial ownership and two characteristics of accounting earnings: the information content of earnings and the magnitude of discretionary accruals. In previous research concerning American firms, Warfield et al. (1995) document a positive relationship between managerial ownership and the information content of earnings, and a negative relationship between managerial ownership and discretionary accruals. We question the generality of the Warfield et al. result, as the ownership structure found in most other countries, including Denmark, deviates from the US ownership configuration. In fact, Danish data indicate that the information content of earnings is inversely related to managerial ownership. [source]

International Variation in Bank Accounting Information Content
JOURNAL OF INTERNATIONAL FINANCIAL MANAGEMENT & ACCOUNTING, Issue 3 2008
Ronald Zhao
This study explores the cross-country impact of financial system and banking regulations on the information content of bank earnings and book value. Test results provide empirical evidence that financial system and banking regulations have a joint effect on the association of equity price with earnings and book value components in Germany, France, the United Kingdom and United States. This effect is explainable by the objective bank function, which shows that earnings of the period determine the terminal book value, thus consistent with the clean surplus accounting approach. Cross-country variation in bank accounting information content calls for caution in interpreting international bank financial and operating ratios. [source]

Ownership Structure and Accounting Information Content: Evidence from France
JOURNAL OF INTERNATIONAL FINANCIAL MANAGEMENT & ACCOUNTING, Issue 3 2007
Ronald Zhao
This paper investigates how family and bank ownership affect the accounting information content of French firms. In Continental Europe, the existence of block-holders triggers specific corporate governance issues, including the transparency of financial reporting. Our test results for the clean surplus model show that book value carries a significantly greater weight for family-controlled firms. This finding is attributed to their lack of incentive to report timely and relevant earnings to outside (minority) investors. In contrast, bank owners are under more market pressure to achieve earnings persistence through the use of accounting accruals. Bank ownership is also associated with higher levels of debt. These results are consistent with findings that in code law countries, insiders dominate as a source of finance, and financial reporting is aimed at creditor protection. [source]

Measuring the Information Content of the Beige Book: A Mixed Data Sampling Approach
JOURNAL OF MONEY, CREDIT AND BANKING, Issue 1 2009
MICHELLE T. ARMESTO
Keywords: data sampling frequency; textual analysis; DICTION; Beige Book
Studies of the predictive ability of the Federal Reserve's Beige Book for aggregate output and employment have proven inconclusive. This might be attributed, in part, to its irregular release schedule. We use a model that allows for data sampling at mixed frequencies to analyze the predictive power of the Beige Book. We find that the Beige Book's national summary and District reports predict GDP and aggregate employment and that most District reports provide information content for regional employment. In addition, there appears to be an asymmetry in the predictive content of the Beige Book language. [source]

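For reference, mixed data sampling (MIDAS) regressions of the kind named in the title above are typically written as a low-frequency outcome regressed on a parsimoniously weighted sum of higher-frequency regressors. The form below is a generic illustration with exponential Almon weights, not the authors' exact specification; the variable names (quarterly growth y_t, a Beige Book text score x sampled m times per quarter) are assumptions for the example.

    y_t = \beta_0 + \beta_1 \sum_{j=0}^{J} w_j(\theta)\, x_{t-j/m} + \varepsilon_t,
    \qquad
    w_j(\theta) = \frac{\exp(\theta_1 j + \theta_2 j^2)}{\sum_{k=0}^{J} \exp(\theta_1 k + \theta_2 k^2)}

Because the weights w_j(θ) depend on only a few parameters, the regression can absorb irregularly timed releases without estimating one coefficient per lag.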
Variation, Natural Selection, and Information Content – A Simulation
CHEMISTRY & BIODIVERSITY, Issue 10 2007
Bernard Testa
In Neo-Darwinism, variation and natural selection are the two evolutionary mechanisms that propel biological evolution. Variation implies changes in the gene pool of a population, enlarging the genetic variability from which natural selection can choose. But in the absence of natural selection, variation causes dissipation and randomization. Natural selection, in contrast, constrains this variability by decreasing the survival and fertility of the less-adapted organisms. The objective of this study is to propose a highly simplified simulation of variation and natural selection, and to relate the observed evolutionary changes in a population to its information content. The model involves an imaginary population of individuals. A quantifiable character allows the individuals to be categorized into bins. The distribution of bins (a histogram) was assumed to be Gaussian. The content of each bin was calculated after one to twelve cycles, each cycle spanning N generations (N being undefined). In a first study, selection was simulated in the absence of variation. This was modeled by assuming a differential fertility factor F that increased linearly from the lower bins (F<1.00) to the higher bins (F>1.00). The fertility factor was applied as a multiplication factor during each cycle. Several ranges of fertility were investigated. The resulting histograms became skewed to the right. In a second study, variation was simulated in the absence of selection. This was modeled by assuming that during each cycle each bin lost a fixed percentage of its content (variation factor Y) to its two adjacent bins. The resulting histograms became broader and flatter, while retaining their bilateral symmetry. Different values of Y were monitored. In a third study, various values of F and Y were combined. Our model allows the straightforward application of Shannon's equation and the calculation of a Shannon entropy (SE) value for each histogram. Natural selection was, thus, shown to result in a progressive decrease in SE as a function of F. In other words, natural selection, when acting alone, progressively increased the information content of the population. In contrast, variation resulted in a progressive increase in SE as a function of Y. In other words, variation acting alone progressively decreased the information content of a population. When both factors, F and Y, were applied simultaneously, their relative weight determined the progressive change in SE. [source]

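The simulation described in this abstract can be reproduced in a few lines. The sketch below is a minimal reconstruction under stated assumptions: the number of bins, the fertility range, the variation percentage, and the treatment of the edge bins are illustrative choices, not the paper's parameters.

    import numpy as np

    def shannon_entropy(counts):
        """Shannon entropy of a histogram, in bits (empty bins contribute zero)."""
        p = counts / counts.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # Gaussian starting histogram of a quantifiable character over 21 bins.
    bins = np.arange(21)
    counts = np.exp(-0.5 * ((bins - 10) / 3.0) ** 2) * 1000.0

    # Fertility factor F rising linearly across bins (selection), and
    # variation factor Y: fraction of each bin leaking to its two neighbours.
    F = np.linspace(0.8, 1.2, bins.size)
    Y = 0.10

    for cycle in range(12):
        counts = counts * F                    # selection: differential fertility
        leak = counts * Y                      # variation: diffusion to neighbours
        counts = counts - leak
        counts[1:] += leak[:-1] / 2.0          # half of each leak moves up one bin
        counts[:-1] += leak[1:] / 2.0          # half moves down one bin
        # (mass leaking past the first or last bin is simply lost in this sketch)
        print(cycle + 1, round(shannon_entropy(counts), 4))

Running the loop with Y = 0 reproduces the selection-only case (entropy falls as the histogram skews), and F = 1 everywhere reproduces the variation-only case (entropy rises as the histogram flattens).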
Comparison of single-nucleotide polymorphisms and microsatellite markers for linkage analysis in the COGA and simulated data sets for Genetic Analysis Workshop 14: Presentation Groups 1, 2, and 3
GENETIC EPIDEMIOLOGY, Issue S1 2005
Marsha A. Wilcox
The papers in presentation groups 1–3 of Genetic Analysis Workshop 14 (GAW14) compared microsatellite (MS) markers and single-nucleotide polymorphism (SNP) markers for a variety of factors, using multiple methods in both data sets provided to GAW participants. Group 1 focused on data provided from the Collaborative Study on the Genetics of Alcoholism (COGA). Group 2 focused on data simulated for the workshop. Group 3 contained analyses of both data sets. Issues examined included: information content, signal strength, localization of the signal, use of haplotype blocks, population structure, power, type I error, control of type I error, the effect of linkage disequilibrium, and computational challenges. There were several broad resulting observations. 1) Information content was higher for dense SNP marker panels than for MS panels, and dense SNP marker sets appeared to provide slightly higher linkage scores and slightly higher power to detect linkage than MS markers. 2) Dense SNP panels also gave higher type I errors, suggesting that increased test thresholds may be needed to maintain the correct error rate. 3) Dense SNP panels provided better trait localization, but only in the COGA data, in which the MS markers were relatively loosely spaced. 4) The strength of linkage signals did not vary with the density of SNP panels, once the marker density was ≥1 SNP/cM. 5) Analyses with SNPs were computationally challenging, and identified areas where improvements in analysis tools will be necessary to make analysis practical for widespread use. Genet. Epidemiol. 29(Suppl. 1):S7–S28, 2005. © 2005 Wiley-Liss, Inc. [source]

Information content of extended trading for index futures
THE JOURNAL OF FUTURES MARKETS, Issue 9 2004
Louis T. W. Cheng
The recent extension of trading hours for Hang Seng Index Futures provides an opportunity to examine whether extended futures trading contains useful information about spot returns. Using the weighted price contribution measure, we find that pre-open futures trades are associated with significant price discovery. We extend the model from T. Hiraki, E. D. Maberly, and N. Takezawa (1995) and adjust for the existence of a pre-open trading session and the overnight trading of cross-listed shares in London. Our results indicate that extended trading for index futures contains useful information in explaining subsequent spot returns during the trading day. © 2004 Wiley Periodicals, Inc. Jrl Fut Mark 24:861–886, 2004 [source]

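The weighted price contribution (WPC) measure cited in the futures abstract above is commonly computed as follows; this is the generic form of the statistic, and it omits the paper's specific adjustments for the pre-open session and cross-listed overnight trading.

    WPC_{pre} \;=\; \sum_{t=1}^{T} \left( \frac{|r_t|}{\sum_{s=1}^{T} |r_s|} \right)
                     \left( \frac{r_{pre,t}}{r_t} \right)

Here r_t is the total return over day t and r_{pre,t} is the portion of that return realized during the pre-open (extended) session, so the statistic measures the share of daily price discovery attributable to extended trading, with each day weighted by the magnitude of its total return.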
Parameter identification of framed structures using an improved finite element model-updating method – Part I: formulation and verification
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 5 2007
Eunjong Yu
In this study, we formulate an improved finite element model-updating method to address the numerical difficulties associated with ill conditioning and rank deficiency. These complications are frequently encountered in model-updating problems, and occur when the identification of a larger number of physical parameters is attempted than that warranted by the information content of the experimental data. Based on the standard bounded variables least-squares (BVLS) method, which incorporates the usual upper/lower-bound constraints, the proposed method (henceforth referred to as BVLSrc) is equipped with novel sensitivity-based relative constraints. The relative constraints are automatically constructed using the correlation coefficients between the sensitivity vectors of updating parameters. The veracity and effectiveness of BVLSrc is investigated through the simulated, yet realistic, forced-vibration testing of a simple framed structure using its frequency response function as input data. By comparing the results of BVLSrc with those obtained via (the competing) pure BVLS and regularization methods, we show that BVLSrc and regularization methods yield approximate solutions with similar and sufficiently high accuracy, while the pure BVLS method yields physically inadmissible solutions. We further demonstrate that BVLSrc is computationally more efficient, because, unlike regularization methods, it does not require the laborious a priori calculations to determine an optimal penalty parameter, and its results are far less sensitive to the initial estimates of the updating parameters. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Methods to account for spatial autocorrelation in the analysis of species distributional data: a review
ECOGRAPHY, Issue 5 2007
Carsten F. Dormann
Species distributional or trait data based on range map (extent-of-occurrence) or atlas survey data often display spatial autocorrelation, i.e. locations close to each other exhibit more similar values than those further apart. If this pattern remains present in the residuals of a statistical model based on such data, one of the key assumptions of standard statistical analyses, that residuals are independent and identically distributed (i.i.d.), is violated. The violation of the assumption of i.i.d. residuals may bias parameter estimates and can increase type I error rates (falsely rejecting the null hypothesis of no effect). While this is increasingly recognised by researchers analysing species distribution data, there is, to our knowledge, no comprehensive overview of the many available spatial statistical methods to take spatial autocorrelation into account in tests of statistical significance. Here, we describe six different statistical approaches to infer correlates of species' distributions, for both presence/absence (binary response) and species abundance data (Poisson or normally distributed response), while accounting for spatial autocorrelation in model residuals: autocovariate regression; spatial eigenvector mapping; generalised least squares; (conditional and simultaneous) autoregressive models and generalised estimating equations. A comprehensive comparison of the relative merits of these methods is beyond the scope of this paper. To demonstrate each method's implementation, however, we undertook preliminary tests based on simulated data. These preliminary tests verified that most of the spatial modeling techniques we examined showed good type I error control and precise parameter estimates, at least when confronted with simplistic simulated data containing spatial autocorrelation in the errors. However, we found that for presence/absence data the results and conclusions were very variable between the different methods. This is likely due to the low information content of binary maps. Also, in contrast with previous studies, we found that autocovariate methods consistently underestimated the effects of environmental controls of species distributions. Given their widespread use, in particular for the modelling of species presence/absence data (e.g. climate envelope models), we argue that this warrants further study and caution in their use. To aid other ecologists in making use of the methods described, code to implement them in freely available software is provided in an electronic appendix. [source]

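A quick way to check whether model residuals still carry the spatial autocorrelation discussed in the review above is Moran's I computed against an inverse-distance weight matrix. The snippet below is an illustrative, self-contained check; the coordinates and residuals are synthetic placeholders, not data from the review, and real analyses would use the dedicated methods the authors describe.

    import numpy as np

    def morans_i(values, coords):
        """Moran's I of `values` observed at point `coords`, inverse-distance weights."""
        n = len(values)
        z = values - values.mean()
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        w = np.zeros((n, n))
        nonzero = d > 0
        w[nonzero] = 1.0 / d[nonzero]          # zero on the diagonal by construction
        num = (w * np.outer(z, z)).sum()
        return (n / w.sum()) * (num / (z ** 2).sum())

    # Synthetic example: residuals of some distribution model at 200 survey sites.
    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 100, size=(200, 2))
    residuals = rng.normal(size=200)           # i.i.d. residuals -> I close to -1/(n-1)
    print(morans_i(residuals, coords))

Values of I well above the null expectation of -1/(n-1) indicate positive spatial autocorrelation in the residuals and hence a need for one of the spatial methods listed in the abstract.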
Refining the results of a whole-genome screen based on 4666 microsatellite markers for defining predisposition factors for multiple sclerosis
ELECTROPHORESIS, Issue 14 2004
René Gödde
Multiple sclerosis (MS) is a demyelinating disease of the central nervous system with a complex genetic background. In order to identify loci associated with the disease, we had performed a genome screen initially using 6000 microsatellite markers in pooled DNA samples of 198 MS patients and 198 controls. Here, we report on the detailed reanalysis of this set of data. Distinctive features of microsatellites genotyped in pooled DNA causing false-positive association or masking existing association were met by improved evaluation and refined correction factors in the statistical analyses. In order to assess potential errors introduced by DNA pooling and genotyping, we resurveyed the experiment in a subset of microsatellite markers using de novo-composed DNA pools. True MS associations of markers were verified via genotyping all individual DNA samples comprised in the pools. Microsatellites share characteristically superb information content but they do not lend themselves to automation in very large scale formats. Especially after DNA pooling many artifacts of individual marker systems require special attention and treatment. Therefore, in the near future comprehensive whole-genome screens may rather be performed by typing single nucleotide polymorphisms on chip-based platforms. [source]

Responses of Redfronted Lemurs to Experimentally Modified Alarm Calls: Evidence for Urgency-Based Changes in Call Structure
ETHOLOGY, Issue 9 2002
Claudia Fichtel
Alarm calls can serve as model systems with which to study this general question. Therefore, we examined the information content of terrestrial predator alarm calls of redfronted lemurs (Eulemur fulvus rufus), group-living Malagasy primates. Redfronted lemurs give specific alarm calls only towards raptors, whereas calls given in response to terrestrial predators (woofs) are also used in other situations characterized by high arousal. Woofs may therefore have the potential to express the perceived risk of a given threat. In order to examine whether different levels of arousal are expressed in call structure, we analysed woofs given during inter-group encounters or in response to playbacks of a barking dog, assuming that animals engaged in inter-group encounters experience higher arousal than during the playbacks of dog barks. A multivariate acoustic analysis revealed that calls given during group encounters were characterized by higher frequencies than calls given in response to playbacks of dog barks. In order to examine whether this change in call structure is salient to conspecifics, we conducted playback experiments with woofs, modified in either amplitude or frequencies. Playbacks of calls with increased frequency or amplitude elicited a longer orienting response, suggesting that different levels of arousal are expressed in call structure and provide meaningful information for listeners. In conclusion, the results of our study indicate that the information about the sender's affective state is expressed in the structure of vocalizations. [source]

The optimization of protein secondary structure determination with infrared and circular dichroism spectra
FEBS JOURNAL, Issue 14 2004
Keith A. Oberg
We have used the circular dichroism and infrared spectra of a specially designed 50 protein database [Oberg, K.A., Ruysschaert, J.M. & Goormaghtigh, E. (2003) Protein Sci. 12, 2015–2031] in order to optimize the accuracy of spectroscopic protein secondary structure determination using multivariate statistical analysis methods. The results demonstrate that when the proteins are carefully selected for the diversity in their structure, no smaller subset of the database contains the necessary information to describe the entire set. One conclusion of the paper is therefore that large protein databases, observing stringent selection criteria, are necessary for the prediction of unknown proteins. A second important conclusion is that only the comparison of analyses run on circular dichroism and infrared spectra independently is able to identify failed solutions in the absence of known structure. Interestingly, it was also found in the course of this study that the amide II band has high information content and could be used alone for secondary structure prediction in place of amide I. [source]

Taxonomy of Late Jurassic diplodocid sauropods from Tendaguru (Tanzania)
FOSSIL RECORD – MITTEILUNGEN AUS DEM MUSEUM FUER NATURKUNDE, Issue 1 2009
Kristian Remes
The Late Jurassic (Tithonian) Tendaguru Beds of Tanzania yielded one of the richest sauropod faunas known, including the diplodocines Tornieria africana (Fraas, 1908) and Australodocus bohetii Remes, 2007, the only known representatives of their group on the southern continents. Historically, the holotypes and referred material of both taxa plus dozens of additional specimens had been subsumed under the term "Barosaurus africanus" (Fraas, 1908). Here, the taxonomic status of the referred elements is reviewed by evaluating the phylogenetic information content of their anatomical characters, in order to provide a firm footing for further studies (e.g. of morphometrics, histology, and phylogeny of the Tendaguru sauropods). Some of the material shows diplodocine synapomorphies and may belong to either Tornieria or Australodocus, while other specimens are diagnostic only on higher taxonomic levels (Diplodocidae, Flagellicaudata, or Diplodocoidea indet.). The isolated limb elements in most cases lack phylogenetically diagnostic characters. Generally, the "Barosaurus africanus" sample shows a substantial grade of morphological variation, and it cannot be ruled out that there are more flagellicaudatans represented in the Tendaguru material than the diplodocines and dicraeosaurids already known. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Cash flow disaggregation and the prediction of future earnings
ACCOUNTING & FINANCE, Issue 1 2010
Neal Arthur
G11; G23
We examine the incremental information content of the components of cash flows from operations (CFO). Specifically, the research question examined in this paper is whether models incorporating components of CFO to predict future earnings provide lower prediction errors than models incorporating simply net CFO. We use Australian data in this setting as all companies were required to provide information using the direct method during the sample period. We find that the cash flow components model is superior to an aggregate cash flow model in terms of explanatory power and predictive ability for future earnings; and that disclosure of non-core (core) cash flow components is (not) useful in both respects. Our results are of relevance to investors and analysts in estimating earnings forecasts, managers of firms in regulators' domains where choice is provided with respect to the disclosure of CFO and also to regulators' deliberations on disclosure requirements and recommendations. [source]

Market's perception of deferred tax accruals
ACCOUNTING & FINANCE, Issue 4 2009
Cheryl Chang
G14; M41
This study investigates the value relevance and incremental information content of deferred tax accruals reported under the 'income statement method' (AASB 1020 Accounting for Income Taxes) over the period 2001–2004. Our findings suggest that deferred tax accruals are viewed as assets and liabilities. We document a positive relation between recognized deferred tax assets and firm value using the levels model, while the results from the returns model suggest that deferred tax liabilities reflect future tax payments. The balance of unrecognized deferred tax assets provides a negative signal to the market about future profitability, particularly for companies from the materials and energy sectors and loss-makers. [source]

Microsatellites versus single-nucleotide polymorphisms in confidence interval estimation of disease loci
GENETIC EPIDEMIOLOGY, Issue 1 2006
Charalampos Papachristou
With cost-effective high-throughput Single Nucleotide Polymorphism (SNP) arrays now becoming widely available, it is highly anticipated that SNPs will soon become the choice of markers in whole genome screens. This optimism raises a great deal of interest in assessing whether dense SNP maps offer at least as much information as their microsatellite (MS) counterparts. Factors considered to date include information content, strength of linkage signals, and effect of linkage disequilibrium. In the current report, we focus on investigating the relative merits of SNPs vs. MS markers for disease gene localization. For our comparisons, we consider three novel confidence interval estimation procedures based on confidence set inference (CSI) using affected sib-pair data. Two of these procedures are multipoint in nature, enabling them to capitalize on dense SNPs with limited heterozygosity. The other procedure makes use of markers one at a time (two-point), but is much more computationally efficient. In addition to marker type, we also assess the effects of a number of other factors, including map density and marker heterozygosity, on disease gene localization through an extensive simulation study. Our results clearly show that confidence intervals derived based on the CSI multipoint procedures can place the trait locus in much shorter chromosomal segments using densely saturated SNP maps as opposed to using sparse MS maps. Finally, it is interesting (although not surprising) to note that, should one wish to perform a quick preliminary genome screening, then the two-point CSI procedure would be a preferred, computationally cost-effective choice. Genet. Epidemiol. 30:3–17, 2006. © 2005 Wiley-Liss, Inc. [source]

Linkage mapping methods applied to the COGA data set: Presentation Group 4 of Genetic Analysis Workshop 14
GENETIC EPIDEMIOLOGY, Issue S1 2005
E. Warwick Daw
Presentation Group 4 participants analyzed the Collaborative Study on the Genetics of Alcoholism data provided for Genetic Analysis Workshop 14. This group examined various aspects of linkage analysis and related issues. Seven papers included linkage analyses, while the eighth calculated identity-by-descent (IBD) probabilities. Six papers analyzed linkage to an alcoholism phenotype: ALDX1 (four papers), ALDX2 (one paper), or a combination of both (one paper). Methods used included Bayesian variable selection coupled with Haseman-Elston regression, recursive partitioning to identify phenotype and covariate groupings that interact with evidence for linkage, nonparametric linkage regression modeling, affected sib-pair linkage analysis with discordant sib-pair controls, simulation-based homozygosity mapping in a single pedigree, and application of a propensity score to collapse covariates in a general conditional logistic model. Alcoholism linkage was found with ≥2 of these approaches on chromosomes 2, 4, 6, 7, 9, 14, and 21. The remaining linkage paper compared the utility of several single-nucleotide polymorphism (SNP) and microsatellite marker maps for Monte Carlo Markov chain combined oligogenic segregation and linkage analysis, and analyzed one of the electrophysiological endophenotypes, ttth1, on chromosome 7. Linkage was found with all marker sets. The last paper compared the multipoint IBD information content of several SNP sets and the microsatellite set, and found that while all SNP sets examined contained more information than the microsatellite set, most of the information contained in the SNP sets was captured by a subset of the SNP markers with ~1-cM marker spacing. From these papers, we highlight three points: a 1-cM SNP map seems to capture most of the linkage information, so denser maps do not appear necessary; careful and appropriate use of covariates can aid linkage analysis; and sources of increased gene-sharing between relatives should be accounted for in analyses. Genet. Epidemiol. 29(Suppl. 1):S29–S34, 2005. © 2005 Wiley-Liss, Inc. [source]

Evolutionary-based grouping of haplotypes in association analysis
GENETIC EPIDEMIOLOGY, Issue 3 2005
Jung-Ying Tzeng
Haplotypes incorporate more information about the underlying polymorphisms than do genotypes for individual SNPs, and are considered as a more informative format of data in association analysis. To model haplotypes requires high degrees of freedom, which could decrease power and limit a model's capacity to incorporate other complex effects, such as gene-gene interactions. Even within haplotype blocks, high degrees of freedom are still a concern unless one chooses to discard rare haplotypes. To increase the efficiency and power of haplotype analysis, we adapt the evolutionary concepts of cladistic analyses and propose a grouping algorithm to cluster rare haplotypes to the corresponding ancestral haplotypes. The algorithm determines the cluster bases by preserving common haplotypes using a criterion built on the Shannon information content. Each haplotype is then assigned to its appropriate clusters probabilistically according to the cladistic relationship. Through this algorithm, we perform association analysis based on groups of haplotypes. Simulation results indicate power increases for performing tests on the haplotype clusters when compared to tests using original haplotypes or the truncated haplotype distribution. Genet. Epidemiol. © 2005 Wiley-Liss, Inc. [source]

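The idea of choosing cluster bases by "preserving common haplotypes using a criterion built on the Shannon information content" can be illustrated with a toy sketch: keep the most frequent haplotypes until a target share of the distribution's entropy is retained. This is only an illustration of the concept, not the authors' actual algorithm; the 0.95 threshold and the example frequencies are arbitrary.

    import math

    def shannon_info(freqs):
        """Shannon information content (entropy, in nats) of a list of frequencies."""
        return -sum(p * math.log(p) for p in freqs if p > 0)

    def choose_cluster_bases(hap_freqs, coverage=0.95):
        """Keep the most common haplotypes until `coverage` of the total entropy is retained."""
        total = shannon_info(hap_freqs.values())
        kept, kept_freqs = [], []
        for hap, p in sorted(hap_freqs.items(), key=lambda kv: -kv[1]):
            kept.append(hap)
            kept_freqs.append(p)
            if total == 0 or shannon_info(kept_freqs) / total >= coverage:
                break
        return kept

    # Toy haplotype distribution: two common haplotypes and several rare ones.
    freqs = {"ACGT": 0.40, "ACGA": 0.35, "TCGT": 0.10, "TCGA": 0.08, "ACTT": 0.04, "TTGA": 0.03}
    print(choose_cluster_bases(freqs))

Rare haplotypes left outside the kept set would then be assigned probabilistically to these bases, as the abstract describes.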
Joint full-waveform analysis of off-ground zero-offset ground penetrating radar and electromagnetic induction synthetic data for estimating soil electrical properties
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2010
D. Moghadas
A joint analysis of full-waveform information content in ground penetrating radar (GPR) and electromagnetic induction (EMI) synthetic data was investigated to reconstruct the electrical properties of multilayered media. The GPR and EMI systems operate in zero-offset, off-ground mode and are designed using vector network analyser technology. The inverse problem is formulated in the least-squares sense. We compared four approaches for GPR and EMI data fusion. The two first techniques consisted of defining a single objective function, applying different weighting methods. As a first approach, we weighted the EMI and GPR data using the inverse of the data variance. The ideal point method was also employed as a second weighting scenario. The third approach is the naive Bayesian method and the fourth technique corresponds to GPR–EMI and EMI–GPR sequential inversions. Synthetic GPR and EMI data were generated for the particular case of a two-layered medium. Analysis of the objective function response surfaces from the two first approaches demonstrated the benefit of combining the two sources of information. However, due to the variations of the GPR and EMI model sensitivities with respect to the medium electrical properties, the formulation of an optimal objective function based on the weighting methods is not straightforward. While the Bayesian method relies on assumptions with respect to the statistical distribution of the parameters, it may constitute a relevant alternative for GPR and EMI data fusion. Sequential inversions of different configurations for a two-layered medium show that in the case of high conductivity or permittivity for the first layer, the inversion scheme cannot fully retrieve the soil hydrogeophysical parameters. But in the case of low permittivity and conductivity for the first layer, GPR–EMI inversion provides proper estimation of values compared to the EMI–GPR inversion. [source]

A semi-parametric gap-filling model for eddy covariance CO2 flux time series data
GLOBAL CHANGE BIOLOGY, Issue 9 2006
VANESSA J. STAUCH
This paper introduces a method for modelling the deterministic component of eddy covariance CO2 flux time series in order to supplement missing data in these important data sets. The method is based on combining multidimensional semi-parametric spline interpolation with an assumed but unstated dependence of net CO2 flux on light, temperature and time. We test the model using a range of synthetic canopy data sets generated using several canopy simulation models realized for different micrometeorological and vegetation conditions. The method appears promising for filling large systematic gaps providing the associated missing data do not over-erode critical information content in the conditioning data used for the model optimization. [source]

Volatility linkages of the equity, bond and money markets: an implied volatility approach
ACCOUNTING & FINANCE, Issue 1 2009
Kent Wang
G12; G14
This study proposes an alternative approach for examining volatility linkages between Standard & Poor's 500, Eurodollar futures and 30 year Treasury Bond futures markets using implied volatility from the three markets. Simple correlation analysis between implied volatilities in the three markets is used to assess market correlations. Spurious correlation effects are considered and controlled for. I find that correlations between implied volatilities in the equity, money and bond markets are positive, strong and robust. Furthermore, I replicate the approach of Fleming, Kirby and Ostdiek (1998) to check the substitutability of the implied volatility approach and find that the results are nearly identical; I conclude that my approach is simple, robust and preferable in practice. I also argue that the results from this paper provide supportive evidence on the information content of implied volatilities in the equity, bond and money markets. [source]

Toward a Total Synthesis of Macrocyclic Jatrophane Diterpenes – Concise Route to a Highly Functionalized Cyclopentane Key Intermediate
HELVETICA CHIMICA ACTA, Issue 6 2005
Johann Mulzer
A total synthesis of the biologically potent jatrophane diterpenes pepluanin A (1) and euphosalicin A (2) is being aimed at. En route to these targets, a concise synthesis of the nonracemic cyclopentane building block 74 was developed. Key steps were a Claisen–Eschenmoser rearrangement of the enantiomerically enriched allylic alcohol 14 to amide 34 (Scheme 7), a hydroxy-lactonization of 40 to 43 (Scheme 9), followed by trans-lactonization to 72, which was subjected to a Davis hydroxylation to 69 (Scheme 17). Eventually, compound 69 was converted into the enol triflate 74. This material should prove suitable for an annulation of the macrocyclic ring characteristic of the desired jatrophanes 1 and 2. Less-successful approaches are also discussed due to their intrinsically valuable information content. [source]

Spatial firing properties of lateral septal neurons
HIPPOCAMPUS, Issue 8 2006
Yusaku Takamura
The present study describes the spatial firing properties of neurons in the lateral septum (LS). LS neuronal activity was recorded in rats as they performed a spatial navigation task in an open field. In this task, the rat acquired an intracranial self-stimulation reward when it entered a certain place, a location that varied randomly from trial to trial. Of 193 neurons recorded in the LS, 81 showed place-related activity. The majority of the tested neurons changed place-related activity when spatial relations between environmental cues were altered by rotating intrafield (proximal) cues. The comparison of place activities between LS place-related neurons recorded in the present study and hippocampal place cells recorded in our previous study, using identical behavioral and recording procedures, revealed that spatial parameters (spatial information content, coherence, and cluster size) were smaller in the LS than in the hippocampus. Of the 193 LS neurons, 86 were influenced by intracranial self-stimulation rewards; 31 of these 86 were also place-related. These results, together with previous anatomical and behavioral observations, suggest that the spatial information sent from the hippocampus to the LS is modulated by and interacts with signals related to reward in the LS. © 2006 Wiley-Liss, Inc. [source]

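The "spatial information content" compared in the abstract above is conventionally computed from a cell's place-field rate map with the Skaggs et al. measure; the formula below is that standard definition, given for reference rather than taken from this particular study.

    I \;=\; \sum_{i} p_i \, \frac{\lambda_i}{\bar{\lambda}} \log_2 \frac{\lambda_i}{\bar{\lambda}},
    \qquad
    \bar{\lambda} \;=\; \sum_{i} p_i \lambda_i

Here p_i is the occupancy probability of spatial bin i, λ_i is the cell's mean firing rate in that bin, and λ̄ is its overall mean rate; the result is expressed in bits per spike, so higher values indicate firing that is more concentrated in particular places.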
Computational constraints between retrieving the past and predicting the future, and the CA3-CA1 differentiation
HIPPOCAMPUS, Issue 5 2004
Alessandro Treves
The differentiation between the CA3 and CA1 fields of the mammalian hippocampus is one of the salient traits that set it apart from the organization of the homologue medial wall in reptiles and birds. CA3 is widely thought to function as an autoassociator, but what do we need CA1 for? Based on evidence for a specific role of CA1 in temporal processing, I have explored the hypothesis that the differentiation between CA3 and CA1 may help solve a computational conflict. The conflict is between pattern completion, or integrating current sensory information on the basis of memory, and prediction, or moving from one pattern to the next in a stored sequence. CA3 would take care of the former, while CA1 would concentrate on the latter. I have found the hypothesis to be only weakly supported by neural network simulations. The conflict indeed exists, but two mechanisms that would relate more directly to a functional CA3-CA1 differentiation were found unable to produce genuine prediction. Instead, a simple mechanism based on firing frequency adaptation in pyramidal cells was found to be sufficient for prediction, with the degree of adaptation as the crucial parameter balancing retrieval with prediction. The differentiation between the architectures of CA3 and CA1 has a minor but significant, and positive, effect on this balance. In particular, for a fixed anticipatory interval in the model, it increases significantly the information content of hippocampal outputs. There may therefore be just a simple quantitative advantage in differentiating the connectivity of the two fields. Moreover, different degrees of adaptation in CA3 and CA1 cells were not found to lead to better performance, further undermining the notion of a functional dissociation. © 2004 Wiley-Liss, Inc. [source]

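Firing-frequency adaptation of the kind invoked in the abstract above is often modelled by coupling each unit's activation to a slow, self-inhibiting adaptation variable. The equations below are a generic textbook form of such dynamics, offered only to illustrate how adaptation can push a recurrent network off a retrieved pattern toward the next one in a stored sequence; they are not the specific model used in the study, and the symbols are assumptions for the sketch.

    \tau \,\frac{dh_i}{dt} = -h_i + \sum_j J_{ij}\, r_j - a_i,
    \qquad
    \tau_a \,\frac{da_i}{dt} = -a_i + g\, r_i,
    \qquad
    r_i = f(h_i)

Here h_i is the input current of unit i, r_i its firing rate through a gain function f, J_{ij} are the recurrent weights storing the patterns, and a_i is an adaptation current that builds up while the unit is active, with strength g and a slow time constant τ_a > τ. As a_i grows it suppresses the currently active pattern, so the recurrent weights carry the network state toward the next stored pattern; the strength g plays the role of the "degree of adaptation" that balances retrieval against prediction.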