Satisfactory Solution
Selected Abstracts

Retrospective selection bias (or the benefit of hindsight)
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2001
Francesco Mulargia

SUMMARY The complexity of geophysical systems makes modelling them a formidable task, and in many cases research studies are still in the phenomenological stage. In earthquake physics, long timescales and the lack of any natural laboratory restrict research to retrospective analysis of data. Such 'fishing expedition' approaches lead to optimal selection of data, albeit not always consciously. This introduces significant biases, which are capable of falsely representing simple statistical fluctuations as significant anomalies requiring fundamental explanations. This paper identifies three different strategies for discriminating real issues from artefacts generated retrospectively. The first attempts to identify ab initio each optimal choice and account for it. Unfortunately, a satisfactory solution can only be achieved in particular cases. The second strategy acknowledges this difficulty as well as the unavoidable existence of bias, and classifies all 'anomalous' observations as artefacts unless their retrospective probability of occurrence is exceedingly low (for instance, beyond six standard deviations). However, such a strategy is also likely to reject some scientifically important anomalies. The third strategy relies on two separate steps, with learning and validation performed on effectively independent sets of data. This approach appears to be preferable in the case of small samples, such as are frequently encountered in geophysics, but the requirement for forward validation implies long waiting times before credible conclusions can be reached. A practical application to pattern recognition, which is the prototype of retrospective 'fishing expeditions', is presented, illustrating that valid conclusions are hard to find.
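The bias the abstract describes is easy to reproduce numerically: scan many purely random records for the most "anomalous" one, then test that record as if it had been selected in advance, and an apparently significant result appears by construction. The following is a minimal illustrative simulation (not from the paper; the record counts, sample sizes and seed are arbitrary assumptions):

```python
import random
import statistics

random.seed(1)

N_SERIES = 200   # number of independent records scanned (arbitrary)
N_OBS = 30       # observations per record (arbitrary)

# Pure noise: every record is N(0, 1), so there is nothing real to find.
records = [[random.gauss(0.0, 1.0) for _ in range(N_OBS)]
           for _ in range(N_SERIES)]

# Retrospective selection: keep only the record with the most extreme mean.
means = [statistics.fmean(r) for r in records]
best = max(means, key=abs)

# Naive z-score, pretending the record was chosen before looking at the data.
z = abs(best) / (1.0 / N_OBS ** 0.5)

# Selection-aware check: how often does the maximum over N_SERIES noise
# records look at least this extreme? Estimate by re-simulation.
TRIALS = 100
hits = 0
for _ in range(TRIALS):
    sims = [statistics.fmean([random.gauss(0.0, 1.0) for _ in range(N_OBS)])
            for _ in range(N_SERIES)]
    if max(abs(m) for m in sims) >= abs(best):
        hits += 1

print(f"naive z-score of selected record: {z:.2f}")   # looks anomalous
print(f"selection-corrected p-value: {hits / TRIALS:.2f}")  # unremarkable
```

The naive z-score typically lands near 3, which under a pre-registered test would seem highly significant, while the selection-corrected probability shows the "anomaly" is exactly what the maximum of a few hundred noise records should look like.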
[source]

Multicriteria group decision making under incomplete preference judgments: Using fuzzy logic with a linguistic quantifier
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 6 2007
Duke Hyun Choi

In the face of increasing global competition and the complexity of the socioeconomic environment, many organizations employ groups in decision making. Inexact or vague preferences have been discussed in the decision-making literature with a view to relaxing the burden of preference specification imposed on the decision makers and thus taking into account the vagueness of human judgment. In this article, we present a multiperson decision-making method using fuzzy logic with a linguistic quantifier when each group member specifies incomplete judgments, possibly both in the evaluation of the performance of different alternatives with respect to multiple criteria and on the criteria themselves. Allowing for incomplete judgment in the model, however, makes a clear selection of the best alternative by the group more difficult, so further interaction with the decision makers may be needed to compensate for the initial ease of preference specification. Even these interactions, however, may not guarantee the selection of a single best alternative to implement. To circumvent this deadlock, we present a procedure for obtaining a satisfactory solution by means of a linguistic-quantifier-guided aggregation that implements the notion of fuzzy majority. This approach combines a prescriptive decision method based on mathematical programming with a well-established approximate solution method for aggregating multiple objects. © 2007 Wiley Periodicals, Inc. Int J Intell Syst 22: 641–660, 2007.

[source]

Psychometric reevaluation of the Women in Science Scale (WiSS)
JOURNAL OF RESEARCH IN SCIENCE TEACHING, Issue 10 2007
Steven V. Owen

The Women in Science Scale (WiSS) was first developed in 1984 and is still used in contemporary studies, yet its psychometric properties have not been evaluated with current statistical methods. In this study, the WiSS was administered in its original 27-item form to 1,439 middle and high school students. Confirmatory factor analysis based on the original description of the WiSS was modestly supportive of the proposed three-factor structure, but the claimed dimensions showed substantial redundancy. We therefore split our sample and performed exploratory factor analyses on one half. The most satisfactory solution, a two-factor model, was then applied to the cross-validation sample with a confirmatory factor analysis. This two-factor structure was supported with a total of 14 items: factor 1, Equality, contains eight items, and factor 2, Sexism, six. Although our data are limited to adolescents, the WiSS, with improved psychometric properties, may be used descriptively to assess attitudes toward women in science and, with additional stability and repeatability testing, may be used in evaluation research. The shortened WiSS should result in shorter administration time, fewer missing data, and increased acceptance among survey administrators in classroom settings. © 2007 Wiley Periodicals, Inc. J Res Sci Teach 44: 1461–1478, 2007.

[source]

The Logic of Good Social Relations
ANNALS OF PUBLIC AND COOPERATIVE ECONOMICS, Issue 2 2000
Serge-Christophe Kolm

Good social relations have more or less an aspect of gift-giving which, by nature, can be neither bought nor imposed. Interaction in this respect will lead purely selfish people to an irremediably inferior state, while pure altruism and unconditional morality are very demanding on the grounds of motivation. However, a satisfactory solution requires only that an actor reciprocate the others' attitude, a much less demanding behaviour.
Such reciprocity also fosters standard economic efficiency, and can be elicited by a number of widespread psychological features.

[source]

Crystallization and preliminary crystallographic analysis of manganese(II)-dependent 2,3-dihydroxybiphenyl 1,2-dioxygenase from Bacillus sp.
ACTA CRYSTALLOGRAPHICA SECTION F (ELECTRONIC), Issue 3 2010

A thermostable manganese(II)-dependent 2,3-dihydroxybiphenyl 1,2-dioxygenase derived from Bacillus sp. JF8 was crystallized. Initial screening for crystallization was performed by the sitting-drop vapour-diffusion method using a crystallization robot, resulting in the growth of two crystal forms. The first crystal belonged to space group P1, with unit-cell parameters a = 62.7, b = 71.4, c = 93.6 Å, α = 71.2, β = 81.0, γ = 64.0°, and diffracted to 1.3 Å resolution. The second crystal belonged to space group I222, with unit-cell parameters a = 74.2, b = 90.8, c = 104.3 Å, and diffracted to 1.3 Å resolution. Molecular-replacement trials using homoprotocatechuate 2,3-dioxygenase from Arthrobacter globiformis (28% amino-acid sequence identity) as a search model provided a satisfactory solution for both crystal forms.

[source]

Crystallization and preliminary crystallographic analysis of gallate dioxygenase DesB from Sphingobium sp. SYK-6
ACTA CRYSTALLOGRAPHICA SECTION F (ELECTRONIC), Issue 11 2009

Gallate dioxygenase (DesB) from Sphingobium sp. SYK-6, which belongs to the type II extradiol dioxygenase family, was purified and crystallized using the hanging-drop vapour-diffusion method. Two crystal forms were obtained. The form I crystal belonged to space group C2, with unit-cell parameters a = 136.2, b = 53.6, c = 55.1 Å, β = 112.8°, and diffracted to 1.6 Å resolution. The form II crystal belonged to space group P21, with unit-cell parameters a = 56.2, b = 64.7, c = 116.1 Å, β = 95.1°, and diffracted to 1.9 Å resolution. A molecular-replacement calculation using LigAB as a search model yielded a satisfactory solution for both crystal forms.

[source]

Finding starting points for Markov chain Monte Carlo analysis of genetic data from large and complex pedigrees
GENETIC EPIDEMIOLOGY, Issue 1 2003
Yuqun Luo

Abstract Genetic data from founder populations are advantageous for studies of complex traits, which are often plagued by the problem of genetic heterogeneity. However, the desire to analyze large and complex pedigrees that often arise from such populations, coupled with the need to handle many linked and highly polymorphic loci simultaneously, poses challenges to current standard approaches. A viable alternative for such problems is Markov chain Monte Carlo (MCMC) procedures, in which a Markov chain, defined on the state space of a latent variable (e.g., genotypic configuration or inheritance vector), is constructed. However, finding starting points for the Markov chains is a difficult problem when the pedigree is not single-locus peelable; methods proposed in the literature have not yielded completely satisfactory solutions. We propose a generalization of the heated Gibbs sampler with relaxed penetrances (HGRP) of Lin et al. ([1993] IMA J. Math. Appl. Med. Biol. 10:1–17) to search for starting points. HGRP guarantees that a starting point will be found if there is no error in the data, but the chain usually needs to be run for a long time if the pedigree is extremely large and complex. By introducing a forcing step, the current algorithm substantially reduces the state space and hence effectively speeds up the process of finding a starting point. Our algorithm also has a built-in preprocessing procedure for Mendelian error detection. The algorithm has been applied to both simulated and real data on two large and complex Hutterite pedigrees under many settings, and good results have been obtained. The algorithm has been implemented in a user-friendly package called START.
Genet Epidemiol 25:14–24, 2003. © 2003 Wiley-Liss, Inc.

[source]

Multispecies conservation planning: identifying landscapes for the conservation of viable populations using local and continental species priorities
JOURNAL OF APPLIED ECOLOGY, Issue 2 2007
Regan Early

Summary
1. Faced with unpredictable environmental change, conservation managers face the dual challenges of protecting species throughout their ranges and protecting areas where populations are most likely to persist in the long term. The former can be achieved by protecting locally rare species, to the potential detriment of protecting species where they are least endangered and most likely to survive in the long term.
2. Using British butterflies as a model system, we compared the efficacy of two methods of identifying persistent areas of species' distributions: a single-species approach and a new multispecies prioritization tool called ZONATION. This tool identifies priority areas using population dynamic principles (prioritizing areas that contain concentrations of populations of each species) and the reserve selection principle of complementarity.
3. ZONATION was generally able to identify the best landscapes for target (i.e. conservation priority) species. This ability was improved by assigning higher numerical weights to target species and implementing a clustering procedure to identify coherent biological management units.
4. Weighting British species according to their European rather than UK status substantially increased the protection offered to species at risk throughout Europe. The representation of species that are rare or at risk in the UK, but not in Europe, was not greatly reduced when European weights were used, although some species of UK-only concern were no longer assigned protection inside their best landscapes. The analysis highlights potential consequences of implementing parochial vs. wider-world priorities within a region.
5. Synthesis and applications. Wherever possible, reserve planning should incorporate an understanding of population processes to identify areas that are likely to support persistent populations. While the multispecies prioritization tool ZONATION compared favourably with the selection of 'best' areas for individual species, a user-defined input of species weights was required to produce satisfactory solutions for long-term conservation. Weighting species can allow international conservation priorities to be incorporated into regional action plans, but the potential consequences of any putative solution should always be assessed to ensure that no individual species of local concern will be threatened.

[source]

H-methods in applied sciences
JOURNAL OF CHEMOMETRICS, Issue 3-4 2008
Agnar Höskuldsson

Abstract The author has developed a framework for mathematical modelling within applied sciences. It is characteristic of data from 'nature and industry' that they have reduced rank for inference, meaning that full-rank solutions normally do not give satisfactory solutions. The basic idea of H-methods is to build up the mathematical model in steps by using weighing schemes. Each weighing scheme produces a score and/or a loading vector that is expected to perform a certain task. Optimisation procedures are used to obtain 'the best' solution at each step; at each step, the optimisation is concerned with finding a balance between the estimation task and the prediction task. The name H-methods has been chosen because of the close analogy with the Heisenberg uncertainty inequality; a similar trade-off is present in modelling data. The mathematical modelling stops when the prediction aspect of the model cannot be improved. H-methods have been applied to a wide range of fields within applied sciences, and in each case they provide superior solutions compared to the traditional ones. A background for the H-methods is presented, and the H-principle of mathematical modelling is explained.
It is shown how the principle leads to well-defined optimisation procedures; this is illustrated in the case of linear regression. The H-methods have been applied in different areas: general linear models, nonlinear models, multi-block methods, path modelling, multi-way data analysis, growth models, dynamic models and pattern recognition. Copyright © 2008 John Wiley & Sons, Ltd.

[source]
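The abstract above does not specify the H-methods algorithmically, but the stepwise idea it describes (a weighing scheme producing a score vector at each step, with a stop when prediction no longer improves) is familiar from PLS-type regression. As a generic, illustrative sketch only, under the assumption of a PLS1-style covariance weighing scheme, one might write (all names and data below are made up; this is not the author's algorithm):

```python
# Illustrative stepwise modelling with weighing schemes (PLS1-style sketch).
# Each step derives a weight vector w from the data, forms a score t = Xw,
# deflates X and y, and stops when the residual no longer shrinks.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def stepwise_fit(X, y, max_steps=3):
    n, p = len(X), len(X[0])
    X = [row[:] for row in X]          # work on copies; we deflate in place
    y = y[:]
    components = []
    prev_rss = dot(y, y)               # residual sum of squares before step 1
    for _ in range(max_steps):
        # Weighing scheme: w_j proportional to the (uncentred) covariance of
        # column j with y, balancing explanation of X against prediction of y.
        w = [dot([X[i][j] for i in range(n)], y) for j in range(p)]
        norm = dot(w, w) ** 0.5
        if norm < 1e-12:               # nothing left in X that predicts y
            break
        w = [wj / norm for wj in w]
        t = [dot(X[i], w) for i in range(n)]                 # score vector
        tt = dot(t, t)
        loadings = [dot([X[i][j] for i in range(n)], t) / tt for j in range(p)]
        c = dot(y, t) / tt                                   # y-loading
        for i in range(n):             # deflate: remove what this step explained
            for j in range(p):
                X[i][j] -= t[i] * loadings[j]
            y[i] -= c * t[i]
        rss = dot(y, y)
        if rss >= prev_rss:            # prediction no longer improves: stop
            break
        components.append((w, loadings, c))
        prev_rss = rss
    return components, prev_rss

# Tiny synthetic example: y depends mainly on the first two columns of X.
X = [[1.0, 0.0, 0.5], [0.0, 1.0, -0.5], [1.0, 1.0, 0.0], [-1.0, 0.5, 1.0]]
y = [1.0, 0.7, 1.7, -0.6]
comps, rss = stepwise_fit(X, y)
print(f"{len(comps)} component(s), residual sum of squares {rss:.4f}")
```

The stopping rule is the point of the sketch: each component is accepted only while it improves the prediction residual, mirroring the abstract's statement that modelling stops when the prediction aspect cannot be improved.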