Goodness



Selected Abstracts


THE BLANK FACE OF LOVE: THE POSSIBILITY OF GOODNESS IN THE LITERARY AND PHILOSOPHICAL WORK OF IRIS MURDOCH

MODERN THEOLOGY, Issue 2 2009
JENNIFER SPENCER GOODYER
This article explores the value of Iris Murdoch's metaphysical ethics for the theologian. Although, in many ways, Murdoch does appeal to the theologian, a subtle form of nihilism underlies her thought insofar as human goodness, in the form of loving attention, is only possible once the individual has overcome his/her ego by staring into the void and accepting the ultimate meaninglessness of reality. As this article demonstrates, Murdoch's replacement of transcendence with void rules out any form of real love or human goodness: only a dualistic exchange of gazes remains possible. Real, selfless love is only possible when the ego understands itself in the context of theological transcendence. [source]


THE NEUTRALITY OF RIGHTNESS AND THE INDEXICALITY OF GOODNESS: BEYOND OBJECTIVITY AND BACK AGAIN

RATIO, Issue 3 2008
Iskra Fileva
My purpose in the present paper is two-fold: to provide a theoretical framework for understanding the difference between rightness and virtue; and to systematically account for the role of objective rightness in an individual person's decision making. I argue that a decision to do something virtuous differs from a decision to do what's right not simply, as is often supposed, in being motivated differently but, rather, in being taken from a different point of view. My argument to that effect is the following. The 'objectively right' course of action must be right 'neutrally' speaking, that is, right for each of the participants in a given situation: if it is right for you to do A, then it cannot, at the same time, be right for me to prevent you from doing A. But the latter is precisely how things work with virtuous action: for instance, it may be virtuous of you to assume responsibility for my blunder, but it isn't virtuous of me to let you do so. I maintain, on this basis, that, while objectivity does have normative force in moral decision-making, the objective viewpoint is not, typically, the viewpoint from which decisions to act virtuously are taken. I then offer an account of objectivity's constraining power. [source]


PERFECT GOODNESS AND DIVINE FREEDOM

ANALYTIC PHILOSOPHY, Issue 3 2007
Edward Wierenga
First page of article [source]


Methodological rigour within a qualitative framework

JOURNAL OF ADVANCED NURSING, Issue 4 2004
Gerard A. Tobin BSc MSc RGN RMN RCNT RNT
Aim. This paper discusses the literature on establishing rigour in research studies. It describes the methodological trinity of reliability, validity and generalization and explores some of the issues relating to establishing rigour in naturalistic inquiry. Background. Those working within the naturalistic paradigm have questioned the use of validity, reliability and generalizability to demonstrate the robustness of qualitative research. Triangulation has been used to demonstrate confirmability and completeness and has been one means of ensuring acceptability across paradigms. Emerging criteria such as goodness and trustworthiness can be used to evaluate the robustness of naturalistic inquiry. Discussion. It is argued that the transference of terms across paradigms is inappropriate; however, if we reject the concepts of validity and reliability, we reject the concept of rigour. Rejection of rigour undermines acceptance of qualitative research as a systematic process that can contribute to the advancement of knowledge. Emerging criteria for demonstrating robustness in qualitative inquiry, such as authenticity, trustworthiness and goodness, need to be considered. Goodness, when not seen as a separate construct but as an integral and embedded component of the research process, should be useful in assuring the quality of the entire study. Triangulation is a tried and tested means of offering completeness, particularly in mixed-method research. When multiple types of triangulation are used appropriately as the 'triangulation state of mind', they approach the concept of crystallization, which allows for an infinite variety of angles of approach. Conclusion. Qualitative researchers need to be explicit about how and why they choose specific legitimizing criteria in ensuring the robustness of their inquiries. A shift from a position of fundamentalism to a more pluralistic approach as a means of legitimizing naturalistic inquiry is advocated. [source]


Assessing Goodness of Fit of Item Response Theory Models: A Comparison of Traditional and Alternative Procedures

JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 4 2003
Clement A. Stone
Testing the goodness of fit of item response theory (IRT) models is relevant to validating IRT models, and new procedures have been proposed. These alternatives compare observed and expected response frequencies conditional on observed total scores, and use posterior probabilities for responses across θ levels rather than cross-classifying examinees using point estimates of θ and score responses. This research compared these alternatives with regard to their methods, properties (Type I error rates and empirical power), available research, and practical issues (computational demands, treatment of missing data, effects of sample size and sparse data, and available computer programs). Different advantages and disadvantages related to these characteristics are discussed. A simulation study provided additional information about empirical power and Type I error rates. [source]
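As a concrete illustration of the traditional procedure the abstract contrasts with the conditional alternatives, the sketch below cross-classifies examinees by point estimates of θ and compares observed with model-expected response frequencies via a Pearson chi-square statistic. The two-parameter logistic item model, the grouping into ten intervals, and the simulated data are all assumptions made for the example, not the paper's actual setup.

```python
import numpy as np
from scipy.stats import chi2

def p_2pl(theta, a, b):
    """Two-parameter logistic IRT model: P(correct | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_fit_chi2(theta_hat, responses, a, b, n_groups=10):
    """Pearson chi-square item-fit test that cross-classifies examinees
    into groups by point estimates of theta (the 'traditional' approach
    the abstract contrasts with the conditional alternatives)."""
    order = np.argsort(theta_hat)
    groups = np.array_split(order, n_groups)
    stat, df = 0.0, 0
    for g in groups:
        n = len(g)
        obs = responses[g].sum()                # observed number correct
        exp = p_2pl(theta_hat[g], a, b).sum()   # expected number correct
        if 0 < exp < n:                         # guard degenerate cells
            stat += (obs - exp) ** 2 / exp + (obs - exp) ** 2 / (n - exp)
            df += 1
    df -= 2  # two item parameters estimated (a common convention)
    return stat, df, chi2.sf(stat, df)

# Simulated example: 1000 examinees, one item with a = 1.2, b = 0.3;
# true thetas stand in for point estimates for brevity.
rng = np.random.default_rng(0)
theta = rng.normal(size=1000)
resp = (rng.uniform(size=1000) < p_2pl(theta, 1.2, 0.3)).astype(int)
print(item_fit_chi2(theta, resp, 1.2, 0.3))
```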


Foot's Natural Goodness and the Good of Nature

NEW BLACKFRIARS, Issue 973 2002
David Matzko McCarthy
First page of article [source]


Descartes on Freedom, Truth, and Goodness

NOUS, Issue 4 2009
Andrea Christofidou
First page of article [source]


Teaching & Learning Guide for: Moral Realism and Moral Nonnaturalism

PHILOSOPHY COMPASS (ELECTRONIC), Issue 3 2008
Stephen Finlay
Authors' Introduction. Metaethics is a perennially popular subject, but one that can be challenging to study and teach. As it consists in an array of questions about ethics, it is really a mix of (at least) applied metaphysics, epistemology, philosophy of language, and mind. The seminal texts therefore arise out of, and often assume competence with, a variety of different literatures. It can be taught thematically, but this sample syllabus offers a dialectical approach, focused on the metaphysical debate over moral realism, which spans the century of debate launched and framed by G. E. Moore's Principia Ethica. The territory and literature are, however, vast, so this syllabus is highly selective. A thorough metaethics course might also include more topical examination of moral supervenience, moral motivation, moral epistemology, and the rational authority of morality.
Authors Recommend:
Alexander Miller, An Introduction to Contemporary Metaethics (Cambridge: Polity Press, 2003). This is one of the few clear, accessible, and comprehensive surveys of the subject, written by someone sympathetic with moral naturalism.
David Brink, Moral Realism and the Foundations of Ethics (Cambridge: Cambridge University Press, 1989). Brink rehabilitates naturalism about moral facts by employing a causal semantics and natural kinds model of moral thought and discourse.
Michael Smith, The Moral Problem (Oxford: Blackwell, 1994). Smith's book frames the debate as driven by a tension between the objectivity of morality and its practical role, offering a solution in terms of a response-dependent account of practical rationality.
Gilbert Harman and Judith Jarvis Thomson, Moral Relativism & Moral Objectivity (Cambridge, MA: Blackwell, 1996). Harman argues against the objectivity of moral value, while Thomson defends it. Each then responds to the other.
Frank Jackson, From Metaphysics to Ethics (Oxford: Clarendon Press, 1998). Jackson argues that reductive conceptual analysis is possible in ethics, offering a unique naturalistic account of moral properties and facts.
Mark Timmons, Morality without Foundations (Oxford: Oxford University Press, 1999). Timmons distinguishes moral cognitivism from moral realism, interpreting moral judgments as beliefs that have cognitive content but do not describe moral reality. He also provides a particularly illuminating discussion of nonanalytic naturalism.
Philippa Foot, Natural Goodness (New York, NY: Oxford University Press, 2001). A Neo-Aristotelian perspective: moral facts are natural facts about the proper functioning of human beings.
Russ Shafer-Landau, Moral Realism: A Defence (New York, NY: Oxford University Press, 2003). In this recent defense of a Moorean, nonnaturalist position, Shafer-Landau engages rival positions in a remarkably thorough manner.
Terence Cuneo, The Normative Web (New York, NY: Oxford University Press, 2007). Cuneo argues for a robust version of moral realism, developing a parity argument based on the similarities between epistemic and moral facts.
Mark Schroeder, Slaves of the Passions (New York, NY: Oxford University Press, 2007). Schroeder defends a reductive form of naturalism in the tradition of Hume, identifying moral and normative facts with natural facts about agents' desires.
Online Materials:
PEA Soup: http://peasoup.typepad.com A blog devoted to philosophy, ethics, and academia. Its contributors include many active and prominent metaethicists, who regularly post about the moral realism and naturalism debates.
Metaethics Bibliography: http://www.lenmanethicsbibliography.group.shef.ac.uk/Bib.htm Maintained by James Lenman, professor of philosophy at the University of Sheffield, this online resource provides a selective list of published research in metaethics.
Stanford Encyclopedia of Philosophy: http://plato.stanford.edu See especially the entries under 'metaethics'.
Sample Syllabus: Topics for Lecture & Discussion. Note: unless indicated otherwise, all the readings are found in R. Shafer-Landau and T. Cuneo, eds., Foundations of Ethics: An Anthology (Malden: Blackwell, 2007) (FE).
Week 1: Realism I (Classic Nonnaturalism). G. E. Moore, Principia Ethica, 2nd ed. (FE ch. 35). W. K. Frankena, 'The Naturalistic Fallacy', Mind 48 (1939): 464–77. S. Finlay, 'Four Faces of Moral Realism', Philosophy Compass 2/6 (2007): 820–49 [DOI: 10.1111/j.1747-9991.2007.00100.x].
Week 2: Antirealism I (Classic Expressivism). A. J. Ayer, 'Critique of Ethics and Theology' (1952) (FE ch. 3). C. Stevenson, 'The Nature of Ethical Disagreement' (1963) (FE ch. 28).
Week 3: Antirealism II (Error Theory). J. L. Mackie, 'The Subjectivity of Values' (1977) (FE ch. 1). R. Joyce, excerpt from The Myth of Morality (2001) (FE ch. 2).
Week 4: Realism II (Nonanalytic Naturalism). R. Boyd, 'How to be a Moral Realist' (1988) (FE ch. 13). P. Railton, 'Moral Realism' (1986) (FE ch. 14). T. Horgan and M. Timmons, 'New Wave Moral Realism Meets Moral Twin Earth' (1991) (FE ch. 38).
Week 5: Antirealism III (Contemporary Expressivism). A. Gibbard, 'The Reasons of a Living Being' (2002) (FE ch. 6). S. Blackburn, 'How To Be an Ethical Anti-Realist' (1993) (FE ch. 4). T. Horgan and M. Timmons, 'Nondescriptivist Cognitivism' (2000) (FE ch. 5). W. Sinnott-Armstrong, 'Expressivism and Embedding' (2000) (FE ch. 37).
Week 6: Realism III (Sensibility Theory). J. McDowell, 'Values and Secondary Qualities' (1985) (FE ch. 11). D. Wiggins, 'A Sensible Subjectivism' (1991) (FE ch. 12).
Week 7: Realism IV (Subjectivism) & Antirealism IV (Constructivism). R. Firth, 'Ethical Absolutism and the Ideal Observer' (1952) (FE ch. 9). G. Harman, 'Moral Relativism Defended' (1975) (FE ch. 7). C. Korsgaard, 'The Authority of Reflection' (1996) (FE ch. 8).
Week 8: Realism V (Contemporary Nonnaturalism). R. Shafer-Landau, 'Ethics as Philosophy' (2006) (FE ch. 16). T. M. Scanlon, What We Owe to Each Other (Cambridge, MA: Harvard University Press, 1998), ch. 1. T. Cuneo, 'Recent Faces of Moral Nonnaturalism', Philosophy Compass 2/6 (2007): 850–79 [DOI: 10.1111/j.1747-9991.2007.00102.x]. [source]


WHAT IS NATURAL ABOUT FOOT'S ETHICAL NATURALISM?

RATIO, Issue 3 2009
John Hacker-Wright
Philippa Foot's Natural Goodness is in the midst of a cool reception. This appears to be because Foot's naturalism draws on a picture of the biological world at odds with the view embraced by most scientists and philosophers. Foot's readers commonly assume that the account of the biological world that she must want to adhere to, and that she nevertheless mistakenly departs from, is the account offered by contemporary neo-Darwinian biological sciences. But as is evident in her notion of function, Foot does not employ an evolutionary view of the biological world. I will attempt to show, first, that it is for good reason that Foot is not operating with an evolutionary view of function; her views do not aim to unseat evolutionary views of function, but instead simply have quite different theoretical goals. Second, I aim to underline the importance to Foot's naturalism of the fact that we are practically reasoning creatures. The profundity of Foot's ethical naturalism rests in how she approaches our nature as practically reasoning creatures. In this aspect of Foot's thought, there is a significant Kantian strain that is surprising to find in someone who calls herself an ethical naturalist. [source]


'Thank Goodness That's Over': The Evolutionary Story

RATIO, Issue 3 2002
James Maclaurin
If, as the new tenseless theory of time maintains, there are no tensed facts, then why do our emotional lives seem to suggest that there are? This question originates with Prior's 'Thank Goodness That's Over' problem, and still presents a significant challenge to the new B-theory of time. We argue that this challenge has more dimensions to it than has been appreciated by those involved in the debate so far. We present an analysis of the challenge, showing the different questions that a B-theorist must answer in order to meet it. The debate has focused on the question of what is the object of my relief when an unpleasant experience is past. We outline the prevailing response to this question. The additional, and neglected, questions are, firstly, 'Why does the same event elicit different emotional responses from us depending on whether it is in the past, present, or future?' and, secondly, 'Why do we care more about proximate future pain than about distant future pain?' We give B-theory answers to these questions, which appeal to evolutionary considerations. [source]


Does Integrity Require Moral Goodness?

RATIO, Issue 3 2001
Jody L. Graham
Most accounts of integrity agree that the person of integrity must have a relatively stable sense of who he is, what is important to him, and the ability to stand by what is most important to him in the face of pressure to do otherwise. But does integrity place any constraints on the kind of principles that the person of integrity stands for? In response to several recent accounts of integrity, I argue that it is not enough that a person stand for what he believes in, nor even that he is committed to and stands for what, in his best judgement, is morally right. In our web of moral concepts integrity is internally related to a host of virtues which exclude weakness of will and dogmatism, and presuppose trustworthiness. Integrity requires that the principles stood for must be those that a morally good, morally trustworthy agent would stand for, and that the agent himself is morally trustworthy. [source]


Augustine's Christian-Platonist Account of Goodness: A Reconsideration

THE HEYTHROP JOURNAL, Issue 3 2002
F.B.A. Asiedu
Augustine's metaphysics is a subject little studied, but often much criticized. Among the recent studies of Augustine's metaphysics, Scott MacDonald's interpretation of Augustine's notion of goodness claims that Augustine's account is incoherent. This suggests a reading of Augustine that is somewhat problematic. This article argues that much of the difficulty that MacDonald alleges rests on a misunderstanding of Augustine's views about the goodness of creation and existence and the corruptibility of created things. Augustine's position takes for granted an understanding of existence (or being) as a good, and the participation of all things in the pre-eminent good, that is, God. [source]


The Attractions and Delights of Goodness

THE PHILOSOPHICAL QUARTERLY, Issue 216 2004
Jyl Gentzler
What makes something good for me? Most contemporary philosophers argue that something cannot count as good for me unless I am in some way attracted to it, or take delight in it. However, subjectivist theories of prudential value face difficulties, and there is no consensus about how these difficulties should be resolved. Whether one opts for a hedonist or a desire-satisfaction account of prudential value, certain fundamental assumptions about human well-being must be abandoned. I argue that we should reconsider Plato's objectivist theory of goodness as unity, or the One. This view is consistent with, and explains, our most basic views about both goodness in general and human well-being in particular. [source]


Forecasting Models of Emergency Department Crowding

ACADEMIC EMERGENCY MEDICINE, Issue 4 2009
Lisa M. Schweigler MD
Abstract Objectives: The authors investigated whether models using time series methods can generate accurate short-term forecasts of emergency department (ED) bed occupancy, using traditional historical averages models as comparison. Methods: From July 2005 through June 2006, retrospective hourly ED bed occupancy values were collected from three tertiary care hospitals. Three models of ED bed occupancy were developed for each site: 1) hourly historical average, 2) seasonal autoregressive integrated moving average (ARIMA), and 3) sinusoidal with an autoregression (AR)-structured error term. Goodness of fit was compared using log likelihood and Akaike's Information Criterion (AIC). The accuracies of 4- and 12-hour forecasts were evaluated by comparing model forecasts to actual observed bed occupancy with root mean square (RMS) error. Sensitivity of prediction errors to model training time was evaluated as well. Results: The seasonal ARIMA outperformed the historical average in complexity-adjusted goodness of fit (AIC). Both AR-based models had significantly better forecast accuracy for the 4- and the 12-hour forecasts of ED bed occupancy (analysis of variance [ANOVA] p < 0.01), compared to the historical average. The AR-based models did not differ significantly from each other in their performance. Model prediction errors did not show appreciable sensitivity to model training times greater than 7 days. Conclusions: Both a sinusoidal model with AR-structured error term and a seasonal ARIMA model were found to robustly forecast ED bed occupancy 4 and 12 hours in advance at three different EDs, without needing data input beyond bed occupancy in the preceding hours. [source]
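A rough sketch of the kind of comparison described: fit a seasonal ARIMA with a 24-hour cycle to hourly occupancy, benchmark a 4-hour-ahead forecast against an hourly historical average, and score both with RMS error. The model orders, the synthetic occupancy series, and the single train/test split are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
hours = np.arange(24 * 60)                       # 60 days of hourly data
occupancy = 30 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

train, test = occupancy[:-4], occupancy[-4:]     # hold out a 4-hour horizon

# Seasonal ARIMA with a 24-hour cycle (orders are illustrative choices)
model = SARIMAX(train, order=(1, 0, 0), seasonal_order=(1, 0, 0, 24), trend='c')
res = model.fit(disp=False)
arima_fc = res.forecast(steps=4)

# Historical hourly average benchmark
hourly_mean = np.array([train[h::24].mean() for h in range(24)])
bench_fc = hourly_mean[hours[-4:] % 24]

rms = lambda e: np.sqrt(np.mean(e ** 2))
print("AIC:", res.aic)
print("RMS error, seasonal ARIMA:", rms(test - arima_fc))
print("RMS error, hourly average:", rms(test - bench_fc))
```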


Applications of Extensions of Bivariate Rank Sum Statistics to the Crossover Design to Compare Two Treatments Through Four Sequence Groups

BIOMETRICS, Issue 3 2009
Atsushi Kawaguchi
Summary This article describes applications of extensions of bivariate rank sum statistics to the crossover design with four sequence groups for two treatments. A randomized clinical trial in ophthalmology provides motivating background for the discussion. The bilateral design for this study has four sequence groups T:T, T:P, P:T, and P:P, respectively, for T as test treatment or P as placebo in the corresponding order for the left and right eyes. This article describes how to use the average of the separate Wilcoxon rank sum statistics for the left and right eyes for the overall comparison between T and P, with the correlation between the two eyes taken into account. An extension of this criterion with better sensitivity to potential differences between T and P, through reduction of the applicable variance, is discussed in terms of a conceptual model with constraints for within-side homogeneity of groups with the same treatment and between-side homogeneity of the differences between T and P. Goodness of fit for this model can be assessed with test statistics for its corresponding constraints. Simulation studies for the conceptual model confirm better power for the extended test statistic with its full invocation than for other criteria without this property. The methods summarized here are illustrated for the motivating clinical trial in ophthalmology, but they are applicable to other situations with the crossover design with four sequence groups, for either two locations for two treatments at the same time for a patient or two successive periods for the assigned treatments for a recurrent disorder. This article also notes that the methods based on its conceptual model can have unsatisfactory power for departures from that model where the difference between T and P via the T:T and P:P groups is not similar to that via the T:P and P:T groups, as might occur when T has a systemic effect in a bilateral trial. For this situation, more robust test statistics are identified, but there is recognition that the parallel groups design with only the T:T and P:P groups may be more useful than the bilateral design with four sequence groups. [source]
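A minimal sketch of the core construction under simplifying assumptions: compute a standardized Wilcoxon rank sum statistic for each eye, average the two, and inflate the variance of the average for the between-eye correlation. The data, the assumed correlation value, and the normal approximation are illustrative; the article's actual statistics handle the four sequence groups and their constraints more carefully.

```python
import numpy as np
from scipy.stats import norm, rankdata

def wilcoxon_z(x, y):
    """Standardized Wilcoxon rank sum statistic comparing samples x and y
    (normal approximation, no tie correction, for brevity)."""
    n, m = len(x), len(y)
    ranks = rankdata(np.concatenate([x, y]))
    w = ranks[:n].sum()
    mean = n * (n + m + 1) / 2.0
    var = n * m * (n + m + 1) / 12.0
    return (w - mean) / np.sqrt(var)

# Illustrative data: T vs P outcome scores for left and right eyes
rng = np.random.default_rng(2)
t_left, p_left = rng.normal(0.5, 1, 40), rng.normal(0, 1, 40)
t_right, p_right = rng.normal(0.5, 1, 40), rng.normal(0, 1, 40)

z_l, z_r = wilcoxon_z(t_left, p_left), wilcoxon_z(t_right, p_right)
rho = 0.4                              # assumed between-eye correlation
z_avg = (z_l + z_r) / 2.0
var_avg = (1 + rho) / 2.0              # variance of the average of two
                                       # unit-variance, correlated statistics
z_overall = z_avg / np.sqrt(var_avg)
print("two-sided p:", 2 * norm.sf(abs(z_overall)))
```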


Using Image and Curve Registration for Measuring the Goodness of Fit of Spatial and Temporal Predictions

BIOMETRICS, Issue 4 2004
Cavan Reilly
Summary Conventional measures of model fit for indexed data (e.g., time series or spatial data) summarize errors in y, for instance by integrating (or summing) the squared difference between predicted and measured values over a range of x. We propose an approach which recognizes that errors can occur in the x-direction as well. Instead of just measuring the difference between the predictions and observations at each site (or time), we first "deform" the predictions, stretching or compressing along the x-direction or directions, so as to improve the agreement between the observations and the deformed predictions. Error is then summarized by (a) the amount of deformation in x, and (b) the remaining difference in y between the data and the deformed predictions (i.e., the residual error in y after the deformation). A parameter, λ, controls the tradeoff between (a) and (b), so that as λ → ∞ no deformation is allowed, whereas for λ = 0 the deformation minimizes the errors in y. In some applications, the deformation itself is of interest because it characterizes the (temporal or spatial) structure of the errors. The optimal deformation can be computed by solving a system of nonlinear partial differential equations, or, for a unidimensional index, by using a dynamic programming algorithm. We illustrate the procedure with examples from nonlinear time series and fluid dynamics. [source]
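A plausible way to write this tradeoff as a penalized functional, filling in notation the abstract leaves implicit (the paper's exact form may differ): with observations y(x), predictions ŷ(x), and a warping function w(x),

```latex
\[
E_\lambda(w) \;=\;
\underbrace{\int \bigl( y(x) - \hat{y}(w(x)) \bigr)^2 \, dx}_{\text{(b) residual error in } y \text{ after deformation}}
\;+\;
\lambda \underbrace{\int \bigl( w'(x) - 1 \bigr)^2 \, dx}_{\text{(a) amount of deformation in } x}
\]
```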


Work Alienation and Organizational Leadership

BRITISH JOURNAL OF MANAGEMENT, Issue 4 2002
J. C. Sarros
This study examines the extent to which a leader's behaviour (i.e. transactional and transformational styles) and aspects of an organization's structure (i.e. centralization and formalization dimensions) directly and/or indirectly relate to elements of work alienation (i.e. powerlessness, meaninglessness, self-estrangement). The study utilized structural equation modeling techniques to estimate the goodness of fit of a leadership-organizational structure-work alienation model based on the responses of personnel in a major US eastern seaboard fire department (a bureaucratic, quasi-military type of organization) (n = 326). Goodness of fit statistics indicate good fit to the observed data. Results show that transformational leadership was associated with lower work alienation, whereas transactional leadership was associated with higher work alienation. Organizational structure was not significantly predictive of work alienation, but was negatively associated with transformational leadership and positively associated with transactional leadership. The significant indirect effects between organizational structure and work alienation, and between organizational structure and transformational leadership, provide further evidence that the leadership style of the organization has a more significant impact on feelings of work alienation than antecedent conditions such as organizational rigidity. The study argues that managers as well as leaders need to question bureaucratic orientations to work and manager-employee relations by rethinking their value orientations and adapting new models that encourage individual fulfilment, learning and personal development. [source]
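For readers unfamiliar with the fit statistics such studies report, the helper below computes two common indices, RMSEA and CFI, from model and baseline (independence-model) chi-square values. The inputs are invented numbers, not values from this study, and conventions for these formulas vary slightly across software.

```python
import math

def rmsea(chi2_m, df_m, n):
    """Root mean square error of approximation (sample formula)."""
    return math.sqrt(max((chi2_m - df_m) / (df_m * (n - 1)), 0.0))

def cfi(chi2_m, df_m, chi2_0, df_0):
    """Comparative fit index relative to the independence (null) model."""
    d_m = max(chi2_m - df_m, 0.0)
    d_0 = max(chi2_0 - df_0, d_m)
    return 1.0 - d_m / d_0 if d_0 > 0 else 1.0

# Invented chi-square values for a fitted model and its null baseline, n = 326
print("RMSEA:", round(rmsea(chi2_m=55.0, df_m=48, n=326), 3))
print("CFI:  ", round(cfi(chi2_m=55.0, df_m=48, chi2_0=900.0, df_0=66), 3))
```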


On the analysis of non-linear allometries

ECOLOGICAL ENTOMOLOGY, Issue 1 2009
ROBERT J. KNELL
Abstract 1. Non-linear allometries are those where a log-log scatterplot of trait size against body size deviates from simple linearity. These are found in many insects, including the horns of beetles, the forceps of earwigs, and the heads of certain castes of ant. 2. Non-linear allometries are often associated with polyphenism that is itself related to behaviour: for example, the alternative mating tactics displayed by many species of beetle are widely associated with dimorphisms in horn size. 3. This paper critically reviews the current techniques used to analyse these datasets. 4. Recommendations include the use of scatterplots and assessment of the goodness of fit of simple linear models as an initial screen for non-linear allometry; the use of recently developed algorithms for 'segmented' regression to analyse continuous allometric relationships; and a pragmatic approach to the analysis of discontinuous relationships that recognises that in some cases there is no simple way to distinguish between morphs, and that all of the proposed methods for doing so have some drawbacks. 5. Worked examples are given of the analysis of two sets of data from animals that have been the subject of controversy regarding the nature of their allometric relationships; further worked examples are provided as online Supporting Information. [source]
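A generic sketch of the recommended screen-then-segment workflow: fit a simple line on the log-log scale, then grid-search the breakpoint of a continuous two-segment ("broken-stick") regression and compare residual sums of squares. This is not the specific algorithm the review evaluates, and the allometry data are simulated.

```python
import numpy as np

def linfit_rss(x, y):
    """Least-squares line fit; returns the residual sum of squares."""
    coef = np.polyfit(x, y, 1)
    return np.sum((y - np.polyval(coef, x)) ** 2)

def segmented_fit(x, y, n_grid=100):
    """Continuous two-segment regression: grid-search the breakpoint c,
    fitting y ~ x + max(x - c, 0) at each candidate, keep the best."""
    best = (np.inf, None)
    for c in np.linspace(x.min(), x.max(), n_grid)[1:-1]:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        if rss < best[0]:
            best = (rss, c)
    return best  # (RSS, estimated breakpoint)

# Simulated horn-allometry data with a true break at log body size 1.0
rng = np.random.default_rng(3)
logx = rng.uniform(0, 2, 200)
logy = 0.5 * logx + 1.5 * np.maximum(logx - 1.0, 0) + rng.normal(0, 0.1, 200)

print("simple linear RSS:", round(linfit_rss(logx, logy), 3))
print("segmented fit:    ", segmented_fit(logx, logy))
```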


Patterns and causes of species richness: a general simulation model for macroecology

ECOLOGY LETTERS, Issue 9 2009
Nicholas J. Gotelli
Abstract Understanding the causes of spatial variation in species richness is a major research focus of biogeography and macroecology. Gridded environmental data and species richness maps have been used in increasingly sophisticated curve-fitting analyses, but these methods have not brought us much closer to a mechanistic understanding of the patterns. During the past two decades, macroecologists have successfully addressed technical problems posed by spatial autocorrelation, intercorrelation of predictor variables and non-linearity. However, curve-fitting approaches are problematic because most theoretical models in macroecology do not make quantitative predictions, and they do not incorporate interactions among multiple forces. As an alternative, we propose a mechanistic modelling approach. We describe computer simulation models of the stochastic origin, spread, and extinction of species' geographical ranges in an environmentally heterogeneous, gridded domain and describe progress to date regarding their implementation. The output from such a general simulation model (GSM) would, at a minimum, consist of the simulated distribution of species ranges on a map, yielding the predicted number of species in each grid cell of the domain. In contrast to curve-fitting analysis, simulation modelling explicitly incorporates the processes believed to be affecting the geographical ranges of species and generates a number of quantitative predictions that can be compared to empirical patterns. We describe three of the 'control knobs' for a GSM that specify simple rules for dispersal, evolutionary origins and environmental gradients. Binary combinations of different knob settings correspond to eight distinct simulation models, five of which are already represented in the literature of macroecology. The output from such a GSM will include the predicted species richness per grid cell, the range size frequency distribution, the simulated phylogeny and simulated geographical ranges of the component species, all of which can be compared to empirical patterns. Challenges to the development of the GSM include the measurement of goodness of fit (GOF) between observed data and model predictions, as well as the estimation, optimization and interpretation of the model parameters. The simulation approach offers new insights into the origin and maintenance of species richness patterns, and may provide a common framework for investigating the effects of contemporary climate, evolutionary history and geometric constraints on global biodiversity gradients. With further development, the GSM has the potential to provide a conceptual bridge between macroecology and historical biogeography. [source]
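A toy rendering of one combination of such "control knobs", assuming random origins, spread to adjacent grid cells with probability set by an environmental suitability gradient, and richness counted as the number of simulated ranges overlapping each cell. It is a deliberately simplified illustration of the simulation logic, not the authors' GSM.

```python
import numpy as np

rng = np.random.default_rng(4)
ROWS, COLS, N_SPECIES, STEPS = 30, 30, 50, 40

# Environmental gradient: suitability declines with row index
env = np.linspace(1.0, 0.2, ROWS)[:, None] * np.ones((ROWS, COLS))

richness = np.zeros((ROWS, COLS), dtype=int)
for _ in range(N_SPECIES):
    occ = np.zeros((ROWS, COLS), dtype=bool)
    occ[rng.integers(ROWS), rng.integers(COLS)] = True   # random origin
    for _ in range(STEPS):
        # Candidate cells: 4-neighbours of the current range
        nbr = np.zeros_like(occ)
        nbr[1:, :] |= occ[:-1, :]; nbr[:-1, :] |= occ[1:, :]
        nbr[:, 1:] |= occ[:, :-1]; nbr[:, :-1] |= occ[:, 1:]
        frontier = nbr & ~occ
        # Spread into a frontier cell with probability = local suitability
        occ |= frontier & (rng.uniform(size=occ.shape) < env)
    richness += occ

print("mean richness, high-suitability rows:", richness[:5].mean())
print("mean richness, low-suitability rows: ", richness[-5:].mean())
```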


Gatty's Tale; or virtue restored

ENGLISH IN EDUCATION, Issue 1 2008
Vivienne Smith
Abstract Most children's books assume a moral framework in which their characters live and grow, but in most cases, morality remains extrinsic to the characters themselves: it is what happens to them and what they do, rather than what they believe and who they become. Kevin Crossley-Holland's novel Gatty's Tale, is unusual in that it presents a protagonist for whom being good matters for its own sake. This article explores Gatty's developing goodness, and shows how Crossley-Holland helps young readers understand what virtue is. [source]


Bacterial energetics, stoichiometry, and kinetic modeling of 2,4-Dinitrotoluene biodegradation in a batch respirometer

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 12 2004
Chunlong Zhang
Abstract A stoichiometric equation and kinetic model were developed and validated using experimental data from batch respirometer studies on the biodegradation of 2,4-dinitrotoluene (DNT). The stoichiometric equation integrates bacterial energetics and is revised from that in a previous study by including the mass balance of phosphorus (P) in the biomass. Stoichiometric results on O2 consumption, CO2 evolution, and nitrite evolution are in good agreement with respirometer data. However, the optimal P requirement is significantly higher than the stoichiometrically derived P, implying potentially limited bioavailability of P and the need for buffering capacity in the media to mitigate the adverse pH effect for optimal growth of DNT-degrading bacteria. An array of models was evaluated to fit the O2/CO2 data acquired experimentally and the DNT depletion data calculated from derived stoichiometric coefficients and cell yield. The deterministic, integrated Monod model provided a good fit to the test data on DNT depletion, and the Monod model parameters (Ks, X0, μmax, and Y) were estimated by nonlinear regression. Further analyses with an equilibrium model (MINTEQ) indicate the interrelated nature of medium chemical compositions in controlling the rate and extent of DNT biodegradation. Results from the present batch respirometer study help to unravel some key factors controlling DNT biodegradation in complex remediation systems, in particular the interactions between acidogenic DNT bacteria and various parameters, including pH and P, the latter of which could serve as a nutrient, a buffer, and a controlling factor on the bioavailable fractions of minerals (Ca, Fe, Zn, and Mo) in the medium. [source]
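For reference, a minimal numerical integration of the Monod model whose parameters (Ks, X0, μmax, and Y) the study estimated; the parameter values and units below are invented placeholders, not the fitted estimates.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Invented placeholder parameters (units: 1/h, mg/L, mg biomass/mg substrate)
mu_max, Ks, Y = 0.25, 5.0, 0.4
S0, X0 = 50.0, 1.0   # initial substrate (DNT) and biomass concentrations

def monod(t, z):
    """Coupled Monod growth and substrate (DNT) depletion."""
    S, X = z
    mu = mu_max * S / (Ks + S)        # specific growth rate
    return [-mu * X / Y, mu * X]      # dS/dt, dX/dt

sol = solve_ivp(monod, (0, 48), [S0, X0], dense_output=True)
t = np.linspace(0, 48, 7)
for ti, (S, X) in zip(t, sol.sol(t).T):
    print(f"t={ti:5.1f} h   DNT={S:7.3f}   biomass={X:7.3f}")
```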


Aspects of the Armitage-Doll gamma frailty model for cancer incidence data

ENVIRONMETRICS, Issue 3 2004
Shizue Izumi
Abstract Using solid cancer incidence data from atomic bomb survivors in Japan, we examine some aspects of the Armitage-Doll gamma frailty (ADF) model. We consider the following two interpretations for the lack of fit of the Armitage-Doll multistage (AD) model found with cancer data: the AD-type individual hazards are heterogeneous, or the individual hazards increase more slowly with age than the AD-type hazards. In order to examine these interpretations, we applied the ADF model and the modified AD model to radiation-related cancer incidence rates. We assessed the magnitude of frailty by a frailty parameter in the ADF model, and departures from the AD-type baseline hazard by a shape increment parameter in the modified AD model. Akaike's information criterion (AIC) was used to examine the goodness of fit of the models. The modified AD model provided as good a fit as the ADF model. Our results support both interpretations and imply that these interpretations may be practically unidentifiable in univariate failure time data. Thus, results from the frailty model for univariate failure time data should be interpreted carefully. Copyright © 2004 John Wiley & Sons, Ltd. [source]
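The model comparison rests on Akaike's information criterion, AIC = 2k − 2 ln L for k parameters and maximized likelihood L; a tiny helper with invented log-likelihoods shows the computation:

```python
def aic(log_lik, n_params):
    """Akaike's information criterion: smaller is better."""
    return 2 * n_params - 2 * log_lik

# Invented log-likelihoods for two candidate models of equal complexity
print("ADF model:        ", aic(log_lik=-1523.4, n_params=5))
print("modified AD model:", aic(log_lik=-1523.9, n_params=5))
```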


Testing Conditional Asset Pricing Models Using a Markov Chain Monte Carlo Approach

EUROPEAN FINANCIAL MANAGEMENT, Issue 3 2008
Manuel Ammann
Abstract We use Markov Chain Monte Carlo (MCMC) methods for the parameter estimation and the testing of conditional asset pricing models. In contrast to traditional approaches, it is truly conditional because the assumption that time variation in betas is driven by a set of conditioning variables is not necessary. Moreover, the approach has exact finite sample properties and accounts for errors-in-variables. Using S&P 500 panel data, we analyse the empirical performance of the CAPM and the Fama and French (1993) three-factor model. We find that time variation of betas in the CAPM and the time variation of the coefficients for the size factor (SMB) and the distress factor (HML) in the three-factor model improve the empirical performance. Therefore, our findings are consistent with time variation of firm-specific exposure to market risk, systematic credit risk and systematic size effects. However, a Bayesian model comparison trading off goodness of fit and model complexity indicates that the conditional CAPM performs best, followed by the conditional three-factor model, the unconditional CAPM, and the unconditional three-factor model. [source]
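As a bare-bones illustration of the MCMC machinery involved, the sketch below runs a random-walk Metropolis sampler for the posterior of a static CAPM alpha and beta under a flat prior and a known noise variance, on invented data. The paper's samplers additionally handle time-varying betas and errors-in-variables; only the sampling pattern is shown here.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 250
mkt = rng.normal(0.005, 0.04, n)                  # market excess returns
ret = 0.001 + 1.2 * mkt + rng.normal(0, 0.02, n)  # asset excess returns
sigma2 = 0.02 ** 2                                # noise variance (known here)

def log_post(theta):
    """Log posterior of (alpha, beta) under a flat prior."""
    alpha, beta = theta
    resid = ret - alpha - beta * mkt
    return -0.5 * np.sum(resid ** 2) / sigma2

theta = np.array([0.0, 1.0])
samples, lp = [], log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, [0.0005, 0.02])  # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept step
        theta, lp = prop, lp_prop
    samples.append(theta)

samples = np.array(samples[5000:])                # drop burn-in
print("posterior mean (alpha, beta):", samples.mean(axis=0))
```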


CBUF model II applied to exemplary NZ furniture (NZ-CBUF)

FIRE AND MATERIALS, Issue 3 2001
Patrick A. Enright
In the comprehensive EC-sponsored initiative CBUF (combustion behaviour of upholstered furniture), three models were developed for furniture fire prediction. The second of these models, CBUF model II, is based on an area convolution technique with expressions of burning area over time determined for furniture types. In this paper, CBUF model II was applied to a set of exemplary New Zealand (NZ) upholstered furniture items. CBUF model II was not found to predict the combustion behaviour of the NZ upholstered furniture well. Copyright © 2001 John Wiley & Sons, Ltd. [source]


Combined effect of factors associated with burdens on primary caregiver

GERIATRICS & GERONTOLOGY INTERNATIONAL, Issue 2 2009
Hyuma Makizako
Background: It is argued that a multidimensional approach is necessary for burden assessment. Reducing caregiver burden is a social problem in Japan's ageing society. We examined the combined effect of factors affecting the care burden among community-dwelling handicapped people and their caregivers. Methods: The participants were 49 handicapped people (aged 53–104 years) who received home-visit rehabilitation, and their 49 caregivers (aged 42–85 years). Caregivers were given questionnaires covering social support, subjective well-being, self-efficacy with regard to care continuation, the Motor Fitness Scale and caregiver burden. Care recipients were assessed using the Bedside Mobility Scale and the Barthel Index. Results: We constructed a hypothesis model among the outcome measures using structural equation modeling with the bootstrap method. The hypothesis model did not fit the data well. In the revised model, the impact of the Motor Fitness Scale was shifted from caregiver burden to care self-efficacy and well-being, and having a cooperator for care and being a spouse caregiver (or other) were associated with caregiver well-being. The fit of the revised model was acceptable (goodness of fit index, 0.903; comparative fit index, 0.998; root mean square error of approximation, 0.017). In the revised model, the care recipients' disabled state was associated with caregiver burden. In addition, higher burden and poor motor fitness of caregivers might lead to lower care self-efficacy in providing continuous care and lower caregiver well-being. Conclusion: These findings suggest that programs to reduce caregiver burden should focus on the care recipients' disabled state and the caregivers' well-being, fitness, and care self-efficacy. [source]


IHMS: Integrated Hydrological Modelling System. Part 2.

HYDROLOGICAL PROCESSES, Issue 19 2010
Abstract The integrated hydrological modelling system, IHMS, has been described in detail in Part 1 of this paper. The system comprises three models: the Distributed Catchment Scale Model (DiCaSM), MODFLOW (v96 and v2000) and SWI. DiCaSM simulates different components of the unsaturated zone water balance, including groundwater recharge. The recharge output from DiCaSM is used as input to the saturated zone model MODFLOW, which subsequently calculates groundwater flows and head distributions. The main objectives of this paper are: (1) to show how more accurate predictions of groundwater levels in two Cyprus catchments can be obtained using improved estimates of groundwater recharge from the catchment water balance, and (2) to demonstrate the interface utility that handles communication between the unsaturated and saturated zone models and allows the transmission of data between the two models at the required spatial and temporal scales. The linked models can be used to predict the impact of future climate change on surface and groundwater resources and to estimate the future water supply shortfall on the island up to 2050. The DiCaSM unsaturated zone model was successfully calibrated and validated against stream flows, with reasonable values for goodness of fit as shown by the Nash-Sutcliffe criterion. Groundwater recharge obtained from the successful tests was applied at various spatial and temporal scales to the Kouris and Akrotiri catchments in Cyprus. These recharge values produced good estimates of groundwater levels in both catchments. Once calibrated, the model was run using a number of possible future climate change scenarios. The results showed that by 2050, groundwater and surface water supplies would decrease by 35% and 24% for Kouris and 20% and 17% for Akrotiri, respectively. The gap between water supply and demand showed a linear increase with time. The results suggest that IHMS can be used as an effective tool for water authorities and decision makers to help balance demand and supply on the island. Copyright © 2010 John Wiley & Sons, Ltd. [source]
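The Nash-Sutcliffe criterion used here to judge the goodness of fit of the simulated stream flows has a simple closed form; a small helper with made-up flows for reference:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1 is a perfect fit; 0 means no better than the mean of the observations."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = np.array([12.0, 15.0, 30.0, 22.0, 18.0])   # made-up stream flows
sim = np.array([11.0, 16.5, 27.0, 24.0, 17.0])
print("NSE:", round(nash_sutcliffe(obs, sim), 3))
```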


Impact of time-scale of the calibration objective function on the performance of watershed models

HYDROLOGICAL PROCESSES, Issue 25 2007
K. P. Sudheer
Abstract Many continuous watershed models perform all their computations on a daily time step, yet they are often calibrated at an annual or monthly time-scale, which may not guarantee good simulation performance on a daily time step. The major objective of this paper is to evaluate the impact of the calibration time-scale on model predictive ability. This study used the Soil and Water Assessment Tool for the analyses, calibrating it at two time-scales, viz. monthly and daily, for the War Eagle Creek watershed in the USA. The results demonstrate that the model's performance at the smaller time-scale (such as daily) cannot be ensured by calibrating it at a larger time-scale (such as monthly). It is observed that, even though the calibrated model possesses satisfactory 'goodness of fit' statistics, the simulation residuals failed to confirm the assumption of their homoscedasticity and independence. The results imply that evaluation of models should be conducted considering their behavior in various aspects of simulation, such as predictive uncertainty, hydrograph characteristics, ability to preserve statistical properties of the historic flow series, etc. The study highlights the scope for improving or developing effective autocalibration procedures at the daily time step for watershed models. Copyright © 2007 John Wiley & Sons, Ltd. [source]


A comparison of nearest neighbours, discriminant and logit models for auditing decisions

INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 1-2 2007
Chrysovalantis Gaganis
This study investigates the efficiency of k-nearest neighbours (k-NN) in developing models for estimating auditors' opinions, as opposed to models developed with discriminant and logit analyses. The sample consists of 5276 financial statements, out of which 980 received a qualified audit opinion, obtained from 1455 private and public UK companies operating in the manufacturing and trade sectors. We develop two industry-specific models and a general one using data from the period 1998–2001, which are then tested over the period 2002–2003. In each case, two versions of the models are developed. The first includes only financial variables. The second includes both financial and non-financial variables. The results indicate that the inclusion of credit rating in the models results in a considerable increase in terms of both goodness of fit and classification accuracy. The comparison of the methods reveals that the k-NN models can be more efficient, in terms of average classification accuracy, than the discriminant and logit models. Finally, the results are mixed concerning the development of industry-specific models, as opposed to general models. Copyright © 2007 John Wiley & Sons, Ltd. [source]
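A compact sketch of this kind of comparison, using scikit-learn in place of whatever software the study used, with synthetic features standing in for the financial and non-financial variables; the class-averaged accuracy mirrors the "average classification accuracy" the abstract reports on.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for financial ratios and qualified-opinion labels
rng = np.random.default_rng(6)
X = rng.normal(size=(5276, 6))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 1.5, 5276) > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logit": make_pipeline(StandardScaler(), LogisticRegression()),
    "k-NN":  make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=15)),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    # Average classification accuracy over the two classes
    acc = [np.mean(m.predict(X_te[y_te == c]) == c) for c in (0, 1)]
    print(name, "average accuracy:", round(float(np.mean(acc)), 3))
```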


Stable high-order finite-difference methods based on non-uniform grid point distributions

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 3 2008
Miguel Hermanns
Abstract It is well known that high-order finite-difference methods may become unstable due to the presence of boundaries and the imposition of boundary conditions. For uniform grids, the Gustafsson, Kreiss, and Sundström theory and the summation-by-parts method provide sufficient conditions for stability. For non-uniform grids, clustering of nodes close to the boundaries improves the stability of the resulting finite-difference operator. Several heuristic explanations exist for the goodness of the clustering, and attempts have been made to link it to the Runge phenomenon present in polynomial interpolations of high degree. By following the philosophy behind the Chebyshev polynomials, a non-uniform grid for piecewise polynomial interpolations of degree q ≤ N is introduced in this paper, where N + 1 is the total number of grid nodes. It is shown that when q = N, this polynomial interpolation coincides with the Chebyshev interpolation, and the resulting finite-difference schemes are equivalent to Chebyshev collocation methods. Finally, test cases are run showing how stability and correct transient behaviours are achieved for any degree q. [source]
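The boundary clustering that the paper generalizes is easiest to see in the full-degree case q = N, where the grid reduces to the standard Chebyshev-Gauss-Lobatto points; a two-line sketch:

```python
import numpy as np

def chebyshev_lobatto(N):
    """Chebyshev-Gauss-Lobatto nodes on [-1, 1]: x_j = cos(j*pi/N),
    which cluster near the boundaries and tame the Runge phenomenon."""
    return np.cos(np.pi * np.arange(N + 1) / N)

print(chebyshev_lobatto(8))   # 9 nodes, densest near x = -1 and x = 1
```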


Three sides of the same coin: measuring global cognitive impairment with the MMSE, ADAS-cog and CAMCOG

INTERNATIONAL JOURNAL OF GERIATRIC PSYCHIATRY, Issue 8 2010
Hans Wouters
Abstract Objective The total scores of the ADAS-cog, MMSE and CAMCOG, comprising various cognitive tasks, are widely used to measure a dimension of global cognitive impairment. It is unknown, however, whether this dimension is common to these instruments. This hampers comparisons when either of these instruments is used. The extent to which these instruments share a common dimension of global cognitive impairment, and how their scores relate, was examined. Methods Rasch analysis of CAMCOG and MMSE data of participants from a population-based study and two memory clinics, pooled with ADAS-cog and MMSE data of participants from three RCTs (overall N = 1566), to estimate a common dimension of global cognitive impairment and to examine the goodness of fit of the individual items to this dimension. Results Using the estimated common dimension of global cognitive impairment, the total scores of the instruments could be related; e.g. a mean level of global cognitive impairment corresponded to a predicted score of 11.4 (ADAS-cog), 72.6 (CAMCOG) and 22.2 (MMSE). When revised according to the Rasch validity analyses, every individual item could be fitted to the dimension. Conclusions The MMSE, ADAS-cog and CAMCOG reflect a valid common dimension of global cognitive impairment, which enables comparisons of RCTs that use the ADAS-cog and observational studies that use the CAMCOG and MMSE. Copyright © 2009 John Wiley & Sons, Ltd. [source]
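Under the Rasch model assumed in the analysis, each item i has a difficulty b_i, a person at impairment level θ answers it correctly with probability exp(θ − b_i)/(1 + exp(θ − b_i)), and a test's expected total score at θ is the sum of these probabilities, which is what allows scores on different instruments to be mapped onto one another. A minimal sketch with invented item difficulties, not the study's estimates:

```python
import numpy as np

def expected_score(theta, difficulties):
    """Expected total test score at level theta under the Rasch model:
    sum over items of exp(theta - b) / (1 + exp(theta - b))."""
    b = np.asarray(difficulties)
    return float(np.sum(1.0 / (1.0 + np.exp(-(theta - b)))))

# Invented difficulty parameters for two instruments of different lengths
mmse_items = np.linspace(-2.0, 2.0, 30)      # 30 one-point MMSE-like items
camcog_items = np.linspace(-2.5, 2.5, 105)   # 105 CAMCOG-like items

theta = 0.0   # a person at the mean of the common dimension
print("predicted MMSE-like score:  ", round(expected_score(theta, mmse_items), 1))
print("predicted CAMCOG-like score:", round(expected_score(theta, camcog_items), 1))
```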