Maximum Entropy (maximum + entropy)

Terms modified by Maximum Entropy

  • maximum entropy method

Selected Abstracts


    Spatial prediction of categorical variables with the Bayesian Maximum Entropy approach: the Ooypolder case study

    EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 4 2004
    D. D'Or
    Summary: Categorical variables such as water table status are often predicted using the indicator kriging (IK) formalism. However, this method is known to suffer from important limitations, which are most frequently patched with ad hoc solutions and approximations. Recently, the Bayesian Maximum Entropy (BME) approach has proved its ability to predict categorical variables efficiently and flexibly. In this paper, we apply this approach to the Ooypolder data set for the prediction of water table classes from a sample data set. BME is compared with IK using global as well as local criteria. The inconsistencies of the IK predictor are emphasized, and it is shown how BME avoids them. [source]


    GEOSTATISTICAL ESTIMATION OF HORIZONTAL HYDRAULIC CONDUCTIVITY FOR THE KIRKWOOD-COHANSEY AQUIFER

    JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 1 2004
    Vikram M. Vyas
    ABSTRACT: The Kirkwood-Cohansey aquifer has been identified as a critical source for meeting existing and expected water supply needs for southern New Jersey. Several contaminated sites exist in the region; their impact on the aquifer has to be evaluated using ground water flow and transport models. Ground water modeling depends on the availability of measured hydrogeologic data (e.g., hydraulic conductivity) for parameterization of the modeling runs. However, field measurements of such critical data have inadequate spatial density, and their locations are often clustered. The goal of this study was to research, compile, and geocode existing data, then use geostatistics and advanced mapping methods to develop a map of horizontal hydraulic conductivity for the Kirkwood-Cohansey aquifer. Spatial interpolation of horizontal hydraulic conductivity measurements was performed using the Bayesian Maximum Entropy (BME) method implemented in the BMELib code library. This involved the integration of actual measurements with soft information on likely ranges of hydraulic conductivity at a given location to obtain estimate maps. The estimation error variance maps provide insight into the uncertainty associated with the estimates, and indicate areas where more information on hydraulic conductivity is required. [source]


    Gene movement and genetic association with regional climate gradients in California valley oak (Quercus lobata Née) in the face of climate change

    MOLECULAR ECOLOGY, Issue 17 2010
    VICTORIA L. SORK
    Abstract Rapid climate change jeopardizes tree populations by shifting current climate zones. To avoid extinction, tree populations must tolerate, adapt, or migrate. Here we investigate geographic patterns of genetic variation in valley oak, Quercus lobata Née, to assess how underlying genetic structure of populations might influence this species' ability to survive climate change. First, to understand how genetic lineages shape spatial genetic patterns, we examine historical patterns of colonization. Second, we examine the correlation between multivariate nuclear genetic variation and climatic variation. Third, to illustrate how geographic genetic variation could interact with regional patterns of 21st Century climate change, we produce region-specific bioclimatic distributions of valley oak using Maximum Entropy (MAXENT) models based on downscaled historical (1971–2000) and future (2070–2100) climate grids. Future climatologies are based on a moderate-high (A2) carbon emission scenario and two different global climate models. Chloroplast markers indicate historical range-wide connectivity via colonization, especially in the north. Multivariate nuclear genotypes show a strong association with climate variation that provides opportunity for local adaptation to the conditions within their climatic envelope. Comparison of regional current and projected patterns of climate suitability indicates that valley oaks grow in distinctly different climate conditions in different parts of their range. Our models predict widely different regional outcomes, from local displacement of a few kilometres to hundreds of kilometres. We conclude that the relative importance of migration, adaptation, and tolerance is likely to vary widely for populations among regions, and that late 21st Century conditions could lead to regional extinctions. [source]
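
    A common practical stand-in for MAXENT in studies like this is logistic regression of presence localities against random background points, then projecting the fitted suitability onto current and future climate grids. The sketch below illustrates that workflow only; all arrays, covariates and the future-climate offset are hypothetical, not the study's data.

    ```python
    # Presence/background sketch of a MaxEnt-style species distribution model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical climate covariates at presence localities and background points
    X_presence = rng.normal(loc=1.0, size=(120, 4))     # e.g. temperature, precipitation
    X_background = rng.normal(loc=0.0, size=(5000, 4))

    X = np.vstack([X_presence, X_background])
    y = np.concatenate([np.ones(len(X_presence)), np.zeros(len(X_background))])

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Project suitability onto hypothetical historical and future climate grids
    grid_1971_2000 = rng.normal(loc=1.0, size=(10000, 4))
    grid_2070_2100 = grid_1971_2000 + np.array([1.5, -0.3, 0.2, 0.0])  # warmer, drier

    suit_now = model.predict_proba(grid_1971_2000)[:, 1]
    suit_future = model.predict_proba(grid_2070_2100)[:, 1]
    print("mean suitability now/future:", suit_now.mean(), suit_future.mean())
    ```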


    Doppler broadening of annihilation radiation spectroscopy study using Richardson-Lucy, Maximum Entropy and Huber methods

    PHYSICA STATUS SOLIDI (C) - CURRENT TOPICS IN SOLID STATE PHYSICS, Issue 10 2007
    D. P. Yu
    Abstract The Richardson-Lucy, Maximum Entropy and Huber regularization methods are popularly used in solving ill-posed inverse problems. This paper considers the use of these three methods in deconvoluting DBARS (Doppler Broadening of Annihilation Radiation Spectroscopy) data. As DBARS data have a constant background on the high-energy side and a long exponential tail on the low-energy side, we check the different deconvolution schemes, paying specific attention to the quality of the deconvolution at the peak and tail positions. Comparison of the three methods is made by testing on Monte Carlo simulated data, both in terms of deconvolution quality and the computational resources required. Finally, we apply these methods to experimental DBARS data taken on polycrystalline metal samples. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
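
    Of the three methods compared, Richardson-Lucy is the simplest to sketch: it iteratively multiplies the current estimate by the back-projected ratio of the observed spectrum to the re-blurred estimate. The Gaussian resolution function and simulated spectrum below are hypothetical, not DBARS data.

    ```python
    # Minimal Richardson-Lucy deconvolution on a synthetic peak-plus-background.
    import numpy as np

    def richardson_lucy(observed, kernel, n_iter=200):
        """Deconvolve `observed` with `kernel` (both 1-D; kernel sums to 1)."""
        estimate = np.full_like(observed, observed.mean())
        kernel_flipped = kernel[::-1]
        for _ in range(n_iter):
            blurred = np.convolve(estimate, kernel, mode="same")
            ratio = observed / np.maximum(blurred, 1e-12)   # guard against /0
            estimate *= np.convolve(ratio, kernel_flipped, mode="same")
        return estimate

    # Hypothetical test: a peak on a constant background, blurred by a Gaussian
    x = np.linspace(-10, 10, 401)
    truth = np.exp(-x**2) + 0.05                      # peak + flat background
    kernel = np.exp(-x**2 / (2 * 1.5**2)); kernel /= kernel.sum()
    observed = np.convolve(truth, kernel, mode="same")

    restored = richardson_lucy(observed, kernel)
    print("peak height: truth %.3f observed %.3f restored %.3f"
          % (truth.max(), observed.max(), restored.max()))
    ```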


    Ecological niche modelling as a technique for assessing threats and setting conservation priorities for Asian slow lorises (Primates: Nycticebus)

    DIVERSITY AND DISTRIBUTIONS, Issue 2 2009
    J. S. Thorn
    ABSTRACT Aim: Data on geographical ranges are essential when defining the conservation status of a species, and in evaluating levels of human disturbance. Where locality data are deficient, presence-only ecological niche modelling (ENM) can provide insights into a species' potential distribution, and can aid in conservation planning. Presence-only ENM is especially important for rare, cryptic and nocturnal species, where absence is difficult to define. Here we applied ENM to carry out an anthropogenic risk assessment and set conservation priorities for three threatened species of Asian slow loris (Primates: Nycticebus). Location: Borneo, Java and Sumatra, Southeast Asia. Methods: Distribution models were built using maximum entropy (MaxEnt) ENM. We input 20 environmental variables comprising temperature, precipitation and altitude, along with species locality data. We clipped predicted distributions to forest cover and altitudinal data to generate remnant distributions. These were then applied to protected area (PA) and human land-use data, using specific criteria to define low-, medium- or high-risk areas. These data were analysed to pinpoint priority study sites, suitable reintroduction zones and protected area extensions. Results: A jackknife validation method indicated highly significant models for all three species with small sample sizes (n = 10 to 23 occurrences). The distribution models represented high habitat suitability within each species' geographical range. High-risk areas were most prevalent for the Javan slow loris (Nycticebus javanicus) on Java, with the highest proportion of low-risk areas for the Bornean slow loris (N. menagensis) on Borneo. Eighteen PA extensions and 23 priority survey sites were identified across the study region. Main conclusions: Discriminating areas of high habitat suitability lays the foundations for planning field studies and conservation initiatives. This study highlights potential reintroduction zones that will minimize anthropogenic threats to animals that are released. These data reiterate the conclusion of previous research, showing MaxEnt is a viable technique for modelling species distributions with small sample sizes. [source]
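
    The clip-and-classify step in the Methods can be illustrated as a raster overlay: suitability is clipped to remaining forest, then cells are labelled by protected-area status and land-use intensity. Every array, threshold and criterion below is a hypothetical stand-in, not the study's rules.

    ```python
    # Sketch of clipping a suitability raster and assigning risk classes.
    import numpy as np

    rng = np.random.default_rng(1)
    shape = (200, 200)

    suitability = rng.random(shape)          # MaxEnt-style output in [0, 1]
    forest = rng.random(shape) > 0.4         # True where forest remains
    protected = rng.random(shape) > 0.8      # True inside protected areas
    human_use = rng.random(shape) > 0.7      # True where land use is intensive

    remnant = np.where(forest, suitability, 0.0)   # remnant distribution
    occupied = remnant > 0.5                       # hypothetical suitability cutoff

    risk = np.full(shape, "none", dtype=object)
    risk[occupied & protected] = "low"
    risk[occupied & ~protected & ~human_use] = "medium"
    risk[occupied & ~protected & human_use] = "high"

    for level in ("low", "medium", "high"):
        print(level, int((risk == level).sum()), "cells")
    ```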


    Using species distribution models to identify suitable areas for biofuel feedstock production

    GCB BIOENERGY, Issue 2 2010
    JASON M. EVANS
    Abstract The 2007 Energy Independence and Security Act mandates a five-fold increase in US biofuel production by 2022. Given this ambitious policy target, there is a need for spatially explicit estimates of landscape suitability for growing biofuel feedstocks. We developed a suitability modeling approach for two major US biofuel crops, corn (Zea mays) and switchgrass (Panicum virgatum), based upon the use of two presence-only species distribution models (SDMs): maximum entropy (Maxent) and support vector machines (SVM). SDMs are commonly used for modeling animal and plant distributions in natural environments, but have rarely been used to develop landscape models for cultivated crops. AUC, Kappa, and correlation measures derived from test data indicate that SVM slightly outperformed Maxent in modeling US corn production, although both models produced significantly accurate results. When compared with results from a mechanistic switchgrass model recently developed by Oak Ridge National Laboratory (ORNL), SVM results showed higher correlation than Maxent results with models fit using county-scale point inputs of switchgrass production derived from expert opinion estimates. However, Maxent results for an alternative switchgrass model developed with point inputs from research trial sites showed higher correlation to the ORNL model than the corresponding results obtained from SVM. Further analysis indicates that both modeling approaches were effective in predicting county-scale increases in corn production from 2006 to 2007, a time period in which US corn production increased by 24%. We conclude that presence-only methods are a powerful first-cut tool for estimating relative land suitability across geographic regions in which candidate biofuel feedstocks can be grown, and may also provide important insight into potential land-use change patterns likely to be associated with increased biofuel demand. [source]
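
    One presence-only formulation of the SVM side of this comparison is scikit-learn's one-class SVM, trained on covariates at presence locations and used to score the rest of the landscape. The covariates below are hypothetical stand-ins for the soil and climate layers such a model would use.

    ```python
    # One-class SVM sketch for presence-only suitability scoring.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(2)
    X_presence = rng.normal(loc=1.0, scale=0.5, size=(300, 5))  # county covariates

    scaler = StandardScaler().fit(X_presence)
    svm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale")
    svm.fit(scaler.transform(X_presence))

    X_grid = rng.normal(loc=0.5, scale=1.0, size=(2000, 5))
    scores = svm.decision_function(scaler.transform(X_grid))   # higher = more suitable
    print("fraction of grid cells scored suitable:", float((scores > 0).mean()))
    ```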


    Maximum entropy inference for mixed continuous-discrete variables

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 4 2010
    Hermann Singer
    We represent knowledge by probability distributions of mixed continuous and discrete variables. From the joint distribution of all items, one can compute arbitrary conditional distributions, which may be used for prediction. However, in many cases only some marginal distributions, inverse probabilities, or moments are known. Under these conditions, a principle is needed to determine the full joint distribution of all variables. The principle of maximum entropy (Jaynes, Phys Rev 1957;106:620–630 and 1957;108:171–190; Jaynes, Probability Theory: The Logic of Science, Cambridge, UK: Cambridge University Press, 2003; Haken, Synergetics, Berlin: Springer-Verlag, 1977; Guiasu and Shenitzer, Math Intell 1985;117:83–106) ensures an unbiased estimation of the full multivariate relationships by using only known facts. For the case of discrete variables, the expert shell SPIRIT implements this approach (cf. Rödder, Artif Intell 2000;117:83–106; Rödder and Meyer, in Proceedings of the 12th Conference on Uncertainty in Artificial Intelligence, San Francisco, CA, 2006; Rödder et al., Logical J IGPL 2006;14(3):483–500). In this paper, the approach is generalized to continuous and mixed continuous-discrete distributions and applied to the problem of credit scoring. © 2010 Wiley Periodicals, Inc. [source]
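
    The discrete case of the principle invoked here can be made concrete in a few lines: under a moment constraint, the maximum-entropy distribution has Gibbs form p_i ∝ exp(λ x_i), with λ fixed by minimizing the convex dual. The support and target mean below are Jaynes' classic die example, not the paper's credit-scoring data.

    ```python
    # Maximum-entropy distribution on a known support with a prescribed mean.
    import numpy as np
    from scipy.optimize import minimize_scalar

    x = np.arange(1, 7)          # support, e.g. faces of a die
    m = 4.5                      # prescribed mean E[x]

    def dual(lam):
        # Convex dual: log-partition minus the constraint term
        return np.log(np.sum(np.exp(lam * x))) - lam * m

    lam = minimize_scalar(dual).x
    p = np.exp(lam * x); p /= p.sum()

    print("maxent probabilities:", np.round(p, 4))
    print("achieved mean:", float(p @ x))   # matches m up to optimizer tolerance
    ```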


    An argument-dependent approach to determining OWA operator weights based on the rule of maximum entropy

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 2 2007
    Jian Wu
    Methods for determining ordered weighted averaging (OWA) operator weights have attracted wide attention. We first review the main existing methods for determining OWA operator weights. We next introduce the principle of maximum entropy for setting up probability distributions on the basis of partial knowledge and prove that Xu's normal distribution-based method obeys the principle of maximum entropy. Finally, we propose an argument-dependent approach based on the normal distribution, which assigns very low weights to "false" or "biased" opinions and thus reduces the influence of these unfair arguments. A numerical example is provided to illustrate the application of the proposed approach. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 209–221, 2007. [source]
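
    For reference, the maximum-entropy OWA weights themselves (entropy maximized subject to a fixed orness level) take an exponential form in the orness coefficients, with a single multiplier found by root finding. The sketch below uses illustrative n and α, not values from the paper.

    ```python
    # Maximum-entropy OWA weights for a given orness level alpha in (0, 1).
    import numpy as np
    from scipy.optimize import brentq

    def meowa_weights(n, alpha):
        c = (n - np.arange(1, n + 1)) / (n - 1)      # orness coefficients
        def orness_gap(lam):
            w = np.exp(lam * c); w /= w.sum()
            return w @ c - alpha
        lam = brentq(orness_gap, -50, 50)            # orness is monotone in lam
        w = np.exp(lam * c)
        return w / w.sum()

    w = meowa_weights(n=5, alpha=0.7)
    c = (5 - np.arange(1, 6)) / 4
    print("weights:", np.round(w, 4), " orness:", round(float(w @ c), 3))
    ```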


    The influence of spatial errors in species occurrence data used in distribution models

    JOURNAL OF APPLIED ECOLOGY, Issue 1 2008
    Catherine H Graham
    Summary
    1. Species distribution modelling is used increasingly in both applied and theoretical research to predict how species are distributed and to understand attributes of species' environmental requirements. In species distribution modelling, various statistical methods are used that combine species occurrence data with environmental spatial data layers to predict the suitability of any site for that species. While the number of data-sharing initiatives involving species' occurrences in the scientific community has increased dramatically over the past few years, various data quality and methodological concerns related to using these data for species distribution modelling have not been addressed adequately.
    2. We evaluated how uncertainty in georeferences and associated locational error in occurrences influence species distribution modelling using two treatments: (1) a control treatment where models were calibrated with original, accurate data and (2) an error treatment where data were first degraded spatially to simulate locational error. To incorporate error into the coordinates, we moved each coordinate with a random number drawn from the normal distribution with a mean of zero and a standard deviation of 5 km. We evaluated the influence of error on the performance of 10 commonly used distributional modelling techniques applied to 40 species in four distinct geographical regions.
    3. Locational error in occurrences reduced model performance in three of these regions; relatively accurate predictions of species distributions were possible for most species, even with degraded occurrences. Two species distribution modelling techniques, boosted regression trees and maximum entropy, were the best performing models in the face of locational errors. The results obtained with boosted regression trees were only slightly degraded by errors in location, and the results obtained with the maximum entropy approach were not affected by such errors.
    4. Synthesis and applications. To use the vast array of occurrence data that exists currently for research and management relating to the geographical ranges of species, modellers need to know the influence of locational error on model quality and whether some modelling techniques are particularly robust to error. We show that certain modelling techniques are particularly robust to a moderate level of locational error and that useful predictions of species distributions can be made even when occurrence data include some error. [source]
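
    The degradation step in point 2 is straightforward to reproduce: shift each coordinate by Gaussian noise with zero mean and a 5 km standard deviation. The projected (metre-based) coordinates below are synthetic.

    ```python
    # Simulating locational error in occurrence coordinates.
    import numpy as np

    rng = np.random.default_rng(3)
    coords = rng.uniform(0, 500_000, size=(40, 2))    # x, y in metres

    error = rng.normal(loc=0.0, scale=5_000.0, size=coords.shape)  # sd = 5 km
    coords_degraded = coords + error

    shift = np.linalg.norm(coords_degraded - coords, axis=1)
    print("mean locational shift: %.0f m" % shift.mean())
    ```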


    Predicting species distributions from small numbers of occurrence records: a test case using cryptic geckos in Madagascar

    JOURNAL OF BIOGEOGRAPHY, Issue 1 2007
    Richard G. Pearson
    Abstract Aim: Techniques that predict species potential distributions by combining observed occurrence records with environmental variables show much potential for application across a range of biogeographical analyses. Some of the most promising applications relate to species for which occurrence records are scarce, due to cryptic habits, locally restricted distributions or low sampling effort. However, the minimum sample sizes required to yield useful predictions remain difficult to determine. Here we developed and tested a novel jackknife validation approach to assess the ability to predict species occurrence when fewer than 25 occurrence records are available. Location: Madagascar. Methods: Models were developed and evaluated for 13 species of secretive leaf-tailed geckos (Uroplatus spp.) that are endemic to Madagascar, for which available sample sizes range from 4 to 23 occurrence localities (at 1 km2 grid resolution). Predictions were based on 20 environmental data layers and were generated using two modelling approaches: a method based on the principle of maximum entropy (Maxent) and a genetic algorithm (GARP). Results: We found high success rates and statistical significance in jackknife tests with sample sizes as low as five when the Maxent model was applied. Results for GARP at very low sample sizes (less than c. 10) were poorer. When sample sizes were experimentally reduced for those species with the most records, variability among predictions using different combinations of localities demonstrated that models were greatly influenced by exactly which observations were included. Main conclusions: We emphasize that models developed using this approach with small sample sizes should be interpreted as identifying regions that have similar environmental conditions to where the species is known to occur, and not as predicting actual limits to the range of a species. The jackknife validation approach proposed here enables assessment of the predictive ability of models built using very small sample sizes, although use of this test with larger sample sizes may lead to overoptimistic estimates of predictive power. Our analyses demonstrate that geographical predictions developed from small numbers of occurrence records may be of great value, for example in targeting field surveys to accelerate the discovery of unknown populations and species. [source]
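
    The jackknife idea itself is generic: with n records, fit the model n times on n − 1 records and test whether the withheld locality is predicted suitable. In the sketch below, fit_suitability is a hypothetical stand-in for a Maxent or GARP run (here a trivial distance-based score), not the paper's models or significance test.

    ```python
    # Leave-one-out (jackknife) validation for very small occurrence samples.
    import numpy as np

    def fit_suitability(train_pts, query_pt):
        """Toy model: suitability decays with distance to the nearest training point."""
        d = np.linalg.norm(train_pts - query_pt, axis=1).min()
        return np.exp(-d)

    def jackknife_success_rate(points, threshold=0.5):
        successes = 0
        for i in range(len(points)):
            train = np.delete(points, i, axis=0)     # withhold record i
            if fit_suitability(train, points[i]) >= threshold:
                successes += 1
        return successes / len(points)

    rng = np.random.default_rng(4)
    occurrences = rng.normal(size=(8, 2))            # as few as 4-23 records
    print("jackknife success rate:", jackknife_success_rate(occurrences))
    ```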


    A new maximum entropy-based method for deconvolution of spectra with heteroscedastic noise

    JOURNAL OF CHEMOMETRICS, Issue 12 2004
    Bård Buttingsrud
    Abstract Broadening of spectral lines combined with large and heteroscedastic noise contributions constitutes an important problem in analytical chemistry. Reduced interpretability and artefacts in further data analysis make deconvolution methods necessary. A new robust deconvolution method (RHEMEM) based on the principle of maximum entropy is proposed in order to effectively handle the presence of heteroscedastic noise. Other deconvolution methods such as Jansson's method, Fourier self-deconvolution and LOMEP are also studied with respect to their ability to handle heteroscedastic noise. A systematic simulation study is used to compare the performance of the new method with the reference methods. They are evaluated according to reconstruction performance, robustness and the ability to work without manual input. Copyright © 2005 John Wiley & Sons, Ltd. [source]
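
    A generic way to account for heteroscedastic noise in maximum-entropy deconvolution is to weight the data-misfit term by the per-channel variances and subtract an entropy penalty. The sketch below shows that generic objective, not the RHEMEM algorithm itself; the kernel, noise model and regularization weight are hypothetical.

    ```python
    # Maximum-entropy deconvolution with a variance-weighted chi-square misfit.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    n = 120
    x_axis = np.linspace(-6, 6, n)
    truth = np.exp(-(x_axis - 1) ** 2) + 0.5 * np.exp(-(x_axis + 2) ** 2 / 0.5)

    kernel = np.exp(-x_axis**2 / 0.8); kernel /= kernel.sum()
    H = np.array([np.roll(kernel, k - n // 2) for k in range(n)]).T  # blur matrix

    sigma = 0.01 + 0.05 * truth          # heteroscedastic: noise scales with signal
    y = H @ truth + rng.normal(scale=sigma)

    alpha, prior = 0.01, np.full(n, truth.mean())

    def objective(f):
        chi2 = 0.5 * np.sum(((y - H @ f) / sigma) ** 2)          # weighted misfit
        entropy = np.sum(f - prior - f * np.log(f / prior))      # Skilling entropy
        return chi2 - alpha * entropy

    res = minimize(objective, np.full(n, truth.mean()),
                   bounds=[(1e-8, None)] * n, method="L-BFGS-B")
    print("restored peak height: %.3f (truth %.3f)" % (res.x.max(), truth.max()))
    ```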


    The common patterns of nature

    JOURNAL OF EVOLUTIONARY BIOLOGY, Issue 8 2009
    S. A. FRANK
    Abstract We typically observe large-scale outcomes that arise from the interactions of many hidden, small-scale processes. Examples include age of disease onset, rates of amino acid substitutions and composition of ecological communities. The macroscopic patterns in each problem often vary around a characteristic shape that can be generated by neutral processes. A neutral generative model assumes that each microscopic process follows unbiased or random stochastic fluctuations: random connections of network nodes; amino acid substitutions with no effect on fitness; species that arise or disappear from communities randomly. These neutral generative models often match common patterns of nature. In this paper, I present the theoretical background by which we can understand why these neutral generative models are so successful. I show where the classic patterns come from, such as the Poisson pattern, the normal or Gaussian pattern and many others. Each classic pattern was often discovered by a simple neutral generative model. The neutral patterns share a special characteristic: they describe the patterns of nature that follow from simple constraints on information. For example, any aggregation of processes that preserves information only about the mean and variance attracts to the Gaussian pattern; any aggregation that preserves information only about the mean attracts to the exponential pattern; any aggregation that preserves information only about the geometric mean attracts to the power law pattern. I present a simple and consistent informational framework of the common patterns of nature based on the method of maximum entropy. This framework shows that each neutral generative model is a special case that helps to discover a particular set of informational constraints; those informational constraints define a much wider domain of non-neutral generative processes that attract to the same neutral pattern. [source]
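
    The attractor claims in this abstract follow from the standard maximum-entropy calculation; a sketch in conventional notation (not the author's) is given below.

    ```latex
    % Maximizing entropy subject to normalization and a mean constraint,
    \max_{p}\; -\int p(x)\log p(x)\,dx
    \quad\text{s.t.}\quad \int p(x)\,dx = 1,\qquad \int x\,p(x)\,dx = \mu,
    % yields the exponential pattern on x >= 0:
    p(x) = \frac{1}{\mu}\, e^{-x/\mu},
    % while adding a variance constraint \int (x-\mu)^2 p(x)\,dx = \sigma^2
    % yields the Gaussian pattern:
    p(x) = \frac{1}{\sqrt{2\pi\sigma^{2}}}\,
           \exp\!\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right).
    ```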


    Dialogue act recognition using maximum entropy

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 6 2008
    Kwok Cheung Lan
    A dialogue-based interface for information systems is considered a potentially very useful approach to information access. A key step in computer processing of natural-language dialogues is dialogue-act (DA) recognition. In this paper, we apply a feature-based classification approach for DA recognition, by using the maximum entropy (ME) method to build a classifier for labeling utterances with DA tags. The ME method has the advantage that a large number of heterogeneous features can be flexibly combined in one classifier, which can facilitate feature selection. A unique characteristic of our approach is that it does not need to model the prior probability of DAs directly, and thus avoids the use of a discourse grammar. This simplifies the implementation of the classifier and improves the efficiency of DA recognition, without sacrificing the classification accuracy. We evaluate the classifier using a large data set based on the Switchboard corpus. Encouraging performance is observed; the highest classification accuracy achieved is 75.03%. We also propose a heuristic to address the problem of sparseness of the data set. This problem has resulted in poor classification accuracies of some DA types that have very low occurrence frequencies in the data set. Preliminary evaluation shows that the method is effective in improving the macroaverage classification accuracy of the ME classifier. [source]
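
    Maximum-entropy classification is equivalent to multinomial logistic regression, so a minimal DA tagger of the kind described can be sketched with scikit-learn. The utterances and tag set below are toy examples, not the Switchboard corpus or the paper's feature set.

    ```python
    # Maximum-entropy (logistic regression) dialogue-act tagger on toy data.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    utterances = ["what time is it", "it is five o'clock", "uh-huh", "really",
                  "can you repeat that", "yes I agree", "no way", "see you later"]
    tags = ["question", "statement", "backchannel", "backchannel",
            "question", "agreement", "disagreement", "closing"]

    # Word n-grams here; other heterogeneous features could be appended freely,
    # which is the main attraction of the ME formulation.
    clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                        LogisticRegression(max_iter=1000))
    clf.fit(utterances, tags)

    print(clf.predict(["what is that", "yeah okay"]))
    ```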


    Spatial ecology of the European wildcat in a Mediterranean ecosystem: dealing with small radio-tracking datasets in species conservation

    JOURNAL OF ZOOLOGY, Issue 1 2009
    P. Monterroso
    Abstract Although some populations of the European wildcat Felis silvestris in central Europe are stable or increasing, the Iberian subpopulation is in decline and is listed as 'vulnerable'. In Portugal, little is known about wildcat populations, making conservation policies extremely difficult to define. Furthermore, the secretive behaviour of these mammals, along with low population densities, makes data collection complicated. Thus, it is crucial to develop efficient analytical tools to interpret existing data for this species. In this study, we determine the home-range size and environmental factors related to wildcat spatial ecology in a Mediterranean ecosystem using a combined analysis of habitat selection and maximum entropy (Maxent) modelling. Simultaneously, we test the feasibility of using radio-tracking locations to construct an ecologically meaningful distribution model. Six wildcats were captured and tracked. The average home-range size (MCP95) was 2.28 km2 for females and 13.71 km2 for one male. The Maxent model built from radio-tracking locations indicated that the abundance of the European rabbit Oryctolagus cuniculus and limited human disturbance were the most important correlates of wildcat presence. Habitat selection analysis revealed that wildcats tend to use scrubland areas significantly more than expected by chance. A mosaic of scrublands and agricultural areas, with a higher proportion of the former, benefits wildcat presence in the study area; however, species distribution is mainly constrained by the availability of prey and resting sites. Validation of the Maxent model with camera-trapping data indicated highly adequate model performance. This technique may prove useful for making the most of small radio-tracking datasets, as it provides a new alternative for handling data and maximizing the ecological information on a target population, which can then be used for conservation planning. [source]
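
    The MCP95 estimate quoted above is conventionally computed by discarding the 5% of radio-fixes farthest from the centroid and taking the area of the convex hull of the remainder. The fixes below are simulated, not the study's wildcat locations.

    ```python
    # 95% minimum convex polygon (MCP95) home-range estimate.
    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(6)
    fixes = rng.normal(scale=800.0, size=(150, 2))   # radio-fixes in metres

    centroid = fixes.mean(axis=0)
    dist = np.linalg.norm(fixes - centroid, axis=1)
    keep = fixes[dist <= np.quantile(dist, 0.95)]    # retain central 95% of fixes

    hull = ConvexHull(keep)
    print("MCP95 home range: %.2f km2" % (hull.volume / 1e6))  # .volume is area in 2-D
    ```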


    Deterministic and statistical methods for reconstructing multidimensional NMR spectra

    MAGNETIC RESONANCE IN CHEMISTRY, Issue 3 2006
    Ji Won Yoon
    Abstract Reconstruction of an image from a set of projections is a well-established science, successfully exploited in X-ray tomography and magnetic resonance imaging. This principle has been adapted to generate multidimensional NMR spectra, with the key difference that, instead of continuous density functions, high-resolution NMR spectra comprise discrete features, relatively sparsely distributed in space. For this reason, a reliable reconstruction can be made from a small number of projections. This speeds the measurements by orders of magnitude compared with the traditional methodology, which explores all evolution space on a Cartesian grid, one step at a time. Speed is of crucial importance for structural investigations of biomolecules such as proteins and for the investigation of time-dependent phenomena. Whereas the recording of a suitable set of projections is a straightforward process, the reconstruction stage can be more problematic. Several practical reconstruction schemes are explored. The deterministic methods (additive back-projection and the lowest-value algorithm) derive the multidimensional spectrum directly from the experimental projections. The statistical search methods include iterative least-squares fitting, maximum entropy, and model-fitting schemes based on Bayesian analysis, particularly the reversible-jump Markov chain Monte Carlo procedure. These competing reconstruction schemes are tested on a set of six projections derived from the three-dimensional 700-MHz HNCO spectrum of a 187-residue protein (HasA) and compared in terms of reliability, absence of artifacts, sensitivity to noise, and speed of computation. Copyright © 2006 John Wiley & Sons, Ltd. [source]
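
    The lowest-value algorithm mentioned above can be sketched in its simplest setting: two orthogonal projections of a sparse 2-D "spectrum", with each reconstruction point taking the minimum of the back-projected values. With only two projections, ghost peaks can survive at crossings; additional tilted projections remove them. The peak list is hypothetical.

    ```python
    # Lowest-value reconstruction from two orthogonal projections.
    import numpy as np

    spectrum = np.zeros((64, 64))
    for i, j, a in [(10, 50, 1.0), (30, 20, 0.8), (45, 45, 0.6)]:  # sparse peaks
        spectrum[i, j] = a

    proj_rows = spectrum.sum(axis=1)   # projection onto the first axis
    proj_cols = spectrum.sum(axis=0)   # projection onto the second axis

    # Lowest-value rule: take the minimum over the back-projections
    reconstruction = np.minimum.outer(proj_rows, proj_cols)

    true_peaks = set(zip(*np.nonzero(spectrum)))
    found_peaks = set(zip(*np.nonzero(reconstruction)))
    print("true peaks recovered:", true_peaks <= found_peaks)
    print("ghost peaks from two projections:", len(found_peaks - true_peaks))
    ```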


    On testing predictions of species relative abundance from maximum entropy optimisation

    OIKOS, Issue 4 2010
    Stephen H. Roxburgh
    A randomisation test is described for assessing relative abundance predictions from the maximum entropy approach to biodiversity. The null model underlying the test randomly allocates observed abundances to species, but retains key aspects of the structure of the observed communities: site richness, species composition, and trait covariance. Three test statistics are used to explore different characteristics of the predictions. Two are based on pairwise comparisons between observed and predicted species abundances (RMSE, RMSESqrt). The third statistic is novel and is based on community-level abundance patterns, using an index calculated from the observed and predicted community entropies (EDiff). Validation of the test to quantify type I and type II error rates showed no evidence of bias or circularity, confirming that the dependencies quantified by Roxburgh and Mokany (2007) and Shipley (2007) have been fully accounted for within the null model. Application of the test to the vineyard data of Shipley et al. (2006) and to an Australian grassland dataset indicated significant departures from the null model, suggesting that the integration of species trait information within the maximum entropy framework can successfully predict species abundance patterns. The paper concludes with some general comments on the use of maximum entropy in ecology, including a discussion of the mathematics underlying the Maxent optimisation algorithm and its implementation, the role of absent species in generating biased predictions, and some comments on determining the most appropriate level of data aggregation for Maxent analysis. [source]
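
    The spirit of such a test is easy to sketch for a single site and the RMSE statistic: compare the observed fit against a null distribution obtained by randomly reallocating the observed abundances among species. The abundance vectors below are hypothetical, and the sketch omits the multi-site structure and the other two statistics.

    ```python
    # Randomisation test of maxent abundance predictions (single site, RMSE).
    import numpy as np

    rng = np.random.default_rng(7)

    observed = np.array([0.40, 0.25, 0.15, 0.10, 0.06, 0.04])   # 6 species
    predicted = np.array([0.35, 0.28, 0.17, 0.09, 0.07, 0.04])  # maxent prediction

    def rmse(a, b):
        return np.sqrt(np.mean((a - b) ** 2))

    stat_obs = rmse(observed, predicted)

    n_rand = 9999
    null = np.array([rmse(rng.permutation(observed), predicted)
                     for _ in range(n_rand)])

    # One-tailed p: how often does a random allocation fit as well as the prediction?
    p = (1 + np.sum(null <= stat_obs)) / (n_rand + 1)
    print("observed RMSE %.4f, p = %.4f" % (stat_obs, p))
    ```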


    Mechanisms in macroecology: AWOL or purloined letter?

    OIKOS, Issue 4 2010
    Towards a pragmatic view of mechanism
    Ecologists often believe the discovery of mechanism to be the central goal of scientific research. While many macroecologists have inherited this view, to date they have been much more efficient at producing patterns than at identifying the underlying processes. We discuss several possible attitudes for macroecologists to adopt in this context, while also arguing that macroecology in fact already has many mechanisms that are ignored. We briefly describe six of these: the central limit theorem, fractals, random sampling and placement, neutral theory (and its descendants), concordance of forces, and maximum entropy. We explore why these mechanisms are overlooked and discuss whether they should be. We conclude that macroecology needs to take a more pragmatic, less ideological approach to mechanism, and we apply this viewpoint to the recent controversy over maximum entropy. [source]


    High-resolution reconstruction of a tracer dispersion event: application to ETEX

    THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 625 2007
    Marc Bocquet
    Abstract In a previous two-part paper, new methods for reconstructing the source of an atmospheric tracer at regional scale were developed. Specifically, the 'maximum entropy on the mean' (MEM) method was extended to large (though linear) data assimilation problems. Tests using twin experiments and a limited subset of the data from the European Tracer Experiment (ETEX) were performed. Although temporal reconstruction knowing the location of the source was satisfactory, a full three-dimensional reconstruction with real data was still out of reach. In this paper, using the MEM method and some of its refinements, a reconstruction using all ETEX-I measurements at a resolution of 1.125° × 1.125° × 1 h is shown to be possible. This allows for a reconstruction of the full dispersion event. The MEM retrieval of the tracer plume using what is believed to be a good prior is then compared with retrievals using other priors, including Gaussian priors. Finally, a reconstruction using all data sequentially in time (rather than all together) is obtained. This helps define what a maximum-entropy filter applied to sequential data assimilation of a linear tracer should be able to do, with a view to an efficient emergency response in case of an accidental release of pollutant. Copyright © 2007 Royal Meteorological Society [source]


    Reconstruction of an atmospheric tracer source using the principle of maximum entropy.

    THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 610 2005
    I: Theory
    Abstract Over recent years, tracing back sources of chemical species dispersed through the atmosphere has been of considerable importance, with an emphasis on increasing the precision of the source resolution. This need stems from many problems: being able to estimate the emissions of pollutants; spotting the source of radionuclides; evaluating diffuse gas fluxes; etc. We study the high-resolution retrieval on a continental scale of the source of a passive atmospheric tracer, given a set of concentration measurements. In the first of this two-part paper, we lay out and develop theoretical grounds for the reconstruction. Our approach is based on the principle of maximum entropy on the mean. It offers a general framework in which the information input prior to the inversion is used in a flexible and controlled way. The inversion is shown to be equivalent to the minimization of an optimal cost function, expressed in the dual space of observations. Examples of such cost functions are given for different priors of interest to the retrieval of an atmospheric tracer. In this respect, variational assimilation (4D-Var), as well as projection techniques, are obtained as by-products of the method. The framework is enlarged to incorporate noisy data in the inversion scheme. Part II of this paper is devoted to the application and testing of these methods. Copyright © 2005 Royal Meteorological Society [source]
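
    In the standard maximum-entropy-on-the-mean formalism, the dual cost function mentioned in the abstract has the schematic form below. The notation is generic (not taken verbatim from the paper): σ is the source field, H the observation operator, μ the measurements and ν the prior.

    ```latex
    % Convex dual cost on the observation space, minimized over \beta:
    \widehat{J}(\beta) \;=\; \log \int e^{\,\beta^{\top} H \sigma}\, d\nu(\sigma)
    \;-\; \beta^{\top} \mu ,
    % with the retrieved source given by the mean of the tilted prior:
    \widehat{\sigma} \;=\; \mathbb{E}_{\nu_{\widehat{\beta}}}[\sigma],
    \qquad
    d\nu_{\beta}(\sigma) \;\propto\; e^{\,\beta^{\top} H \sigma}\, d\nu(\sigma).
    ```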


    Reconstruction of an atmospheric tracer source using the principle of maximum entropy.

    THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 610 2005
    II: Applications
    Abstract A new method of performing the source inversion of a passive tracer at continental scale was proposed in Part I of this two-part paper. The method makes use of prior information, general or specific depending on the situation, to perform a better reconstruction using only the prior information and the field measurements of the tracer. In this paper the method is tested in its first applications. It is applied to several test examples using the meteorological conditions of the European Joint Research Centre ETEX-I campaign. The retrieval of a temporal profile of emission from a source whose location is known is studied before testing the method on a full reconstruction of the space–time profile of the source. Both synthetic and real-measurement inversions are tested, thanks to the extension of the formalism to noisy data. Copyright © 2005 Royal Meteorological Society [source]


    Zeeman-Doppler imaging of late-type stars: The surface magnetic field of II Peg

    ASTRONOMISCHE NACHRICHTEN, Issue 10 2007
    T. A. Carroll
    Abstract Late-type stars in general possess complicated magnetic surface fields, which makes their detection, and in particular their modeling and reconstruction, challenging. In this work we present a new Zeeman-Doppler imaging code which is especially designed for application to late-type stars. This code uses a new multi-line cross-correlation technique by means of a principal component analysis to extract and enhance the quality of individual polarized line profiles. It implements the full polarized radiative transfer equation and uses an inversion strategy that can incorporate prior knowledge based on solar analogies. Moreover, our code utilizes a new regularization scheme based on local maximum entropy to allow a more appropriate reproduction of complex surface fields such as those expected for late-type stars. In a first application we present Zeeman-Doppler images of II Pegasi which reveal a surprisingly large-scale surface structure with one predominant (unipolar) magnetic longitude which is mainly radially oriented. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    Direct methods and protein crystallography at low resolution

    ACTA CRYSTALLOGRAPHICA SECTION D, Issue 10 2000
    Christopher J. Gilmore
    The tools of modern direct methods are examined and their limitations for solving protein structures discussed. Direct methods need atomic resolution data (1.1–1.2 Å) for structures of around 1000 atoms if no heavy atom is present. For low-resolution data, alternative approaches are necessary; these include maximum entropy, symbolic addition, Sayre's equation, group scattering factors and electron microscopy. [source]