Trivial


Selected Abstracts


Pacemaker Lead Prolapse through the Pulmonary Valve in Children

PACING AND CLINICAL ELECTROPHYSIOLOGY, Issue 10 2007
CHARLES I. BERUL M.D.
Background: Transvenous pacemaker leads in children are often placed with redundant lead length to allow for anticipated patient growth. This excess lead may rarely prolapse into the pulmonary artery and potentially interfere with valve function. We sought to determine the effect of lead repositioning on pulmonary valve insufficiency. Methods: Retrospective review of demographics, lead type, implant duration, radiography, and echocardiography. Results: A total of 11 pediatric patients were identified with lead prolapse through the pulmonary valve, of whom nine underwent procedures to retract and reposition the lead (age at implant 9 ± 4 years, age at revision 13 ± 4 years). The implant duration prior to revision was 4 ± 3 years. Two leads required radiofrequency extraction sheaths for removal, two were pulled back using a snare, and five were simply retracted and repositioned. Tricuspid regurgitation was none/trivial (three), mild (four), or moderate (two), and only two improved with repositioning or replacement. Pulmonary regurgitation was mild (three), mild-moderate (two), or moderate (four) preoperatively, compared with trivial (three), mild (four), and moderate (two) after revision. Patients with longer-term implanted leads had less improvement in pulmonary insufficiency. Two patients had mild pulmonary stenosis from lead-related obstruction. Conclusions: Prolapse of transvenous pacing leads into the pulmonary artery can occur when excess slack is left for growth. Leads can often be repositioned but may require extraction and replacement, particularly if chronically implanted and adherent to the valve apparatus. Lead revision does not always resolve pulmonary insufficiency, potentially leaving permanent valve damage. [source]


Role of Echocardiography in Assessing the Mechanism and Effect of Ramipril on Functional Mitral Regurgitation in Dilated Cardiomyopathy

ECHOCARDIOGRAPHY, Issue 4 2005
I.B. Vijayalakshmi M.D., D.M. (Card), F.I.A.E., F.I.A.M.S., F.I.C.C., F.I.C.P.
The objectives of this article are to determine the possible mechanism of functional mitral regurgitation in patients with dilated cardiomyopathy (DCM) and to assess the effect of ramipril on the left ventricle (LV) and on mitral regurgitation by echocardiography. Several postulates have been put forth for functional mitral regurgitation in DCM; mitral annular dilatation was long held to be the primary mechanism, but the exact mechanism is not clear. Though angiotensin converting enzyme (ACE) inhibitors are known to remodel the LV, their beneficial effect in patients with DCM with functional mitral regurgitation is not known. Various cardiac dimensions and the degree of mitral regurgitation were measured by echocardiography in 30 normal controls and in 30 patients with DCM of various etiologies other than ischemic, before and after ramipril therapy. There was a significant difference in all parameters, especially sphericity of the left ventricle and position of the papillary muscles (P < 0.0003), in DCM patients, but the mitral valve annulus did not show a significant change (P < 0.3) compared to the control group. In 50% of the patients, the functional mitral regurgitation totally disappeared. In 30% of patients, it came down from grade II to I or became trivial. In 20% of patients, it remained unchanged. There was remarkable improvement in sphericity, LV dimensions, volumes, and EF%, which increased from 31 ± 9.81 to 39.3 ± 8.3% (P < 0.0003). It is concluded that echocardiography clearly demonstrates the increased sphericity of the LV in DCM. Lateral migration of the papillary muscles possibly plays a major role in functional mitral regurgitation. Ramipril significantly reduces not only sphericity but also functional mitral regurgitation. [source]


Currency boards: More than a quick fix?

ECONOMIC POLICY, Issue 31 2000
Atish R. Ghosh
Once a popular colonial monetary arrangement, currency boards fell into disuse as countries gained political independence. But recently, currency boards have made a remarkable comeback. This essay takes a critical look at their performance. Are currency boards really a panacea for achieving low inflation and high growth? Or do they merely provide a 'quick fix', allowing authorities to neglect fundamental reforms and thus failing to yield lasting benefits? We have three major findings. First, the historical track record of currency boards is sterling, with few instances of speculative attacks and virtually no 'involuntary' exits. Countries that did exit from currency boards did so mainly for political, rather than economic, reasons, and such exits were usually uneventful. Second, modern currency boards have often been instituted to gain credibility following a period of high or hyperinflation, and in this regard have been remarkably successful. Countries with currency boards experienced lower inflation and higher (if more volatile) GDP growth compared to both floating regimes and simple pegs. The inflation difference reflects both a lower growth rate of the money supply (a 'discipline effect') and faster growth of money demand (a 'credibility effect'). The GDP growth effect is significant, but may simply reflect a rebound from depressed levels. Third, case studies reveal the successful introduction of a currency board to be far from trivial, requiring lengthy legal and institutional changes, as well as a broad economic and social consensus for the implied commitment. Moreover, there are thorny issues, as yet untested, regarding possible exits from a currency board. Thus currency boards do not provide easy solutions. But if introduced in the right circumstances, with some built-in flexibility, they can be an important tool for gaining credibility and achieving macroeconomic stabilization. [source]


Boredom, "Trouble," and the Realities of Postcolonial Reservation Life

ETHOS, Issue 1 2003
Assistant professor Lori L. Jervis
Perhaps because of its reputation as an inconsequential emotion, the significance of boredom in human social life has often been minimized if not ignored. Boredom has been theoretically linked to modernity, affluence, and the growing problem of filling "leisure time." It has also been attributed to the expansion of individualism with its heightened expectations of personal gratification. Whether a reaction to the sensation of understimulation or of "overload," boredom appears to be, ultimately, a problem of meaning. In this article, we consider the applicability of these notions to the contemporary American Indian reservation context, examining discourse about boredom as expressed in interviews with members of a northern plains tribe. Of special interest is how boredom figures into the phenomenon of "trouble" (e.g., alcohol and drug abuse, violence, and illegal activities). Although boredom is certainly familiar to various strata of contemporary U.S. society, and arguably part of what it means to be human, we propose that the realities of postcolonial reservation life provide an especially fertile and undertheorized breeding ground for this condition, and our examination of the relationship between boredom and trouble suggests that boredom's implications for both individual subjectivity and group sociality are far from trivial. [source]


The Intermediate Band Solar Cell: Progress Toward the Realization of an Attractive Concept

ADVANCED MATERIALS, Issue 2 2010
Antonio Luque
Abstract The intermediate band (IB) solar cell has been proposed to increase the current of solar cells while preserving the output voltage, in order to achieve an efficiency that ideally lies above the limit established by Shockley and Queisser in 1961. The concept is described, and the present realizations and acquired understanding are explained. Quantum dots have been used to make such cells, but the efficiencies achieved so far are not yet satisfactory. Possible ways to overcome the issues involved are outlined. Alternatively, and against early predictions, IB alloys have been prepared, and cells that undoubtedly display IB behavior have been fabricated, although their efficiency is still low. Full development of this concept is not trivial, but it is expected that, once their development is fully mastered, IB solar cells will be able to operate in tandem in concentrators with very high efficiencies, or as thin cells at low cost with efficiencies above the present ones. [source]


An assumed-gradient finite element method for the level set equation

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 8 2005
Hashem M. Mourad
Abstract The level set equation is a non-linear advection equation, and standard finite-element and finite-difference strategies typically employ spatial stabilization techniques to suppress spurious oscillations in the numerical solution. We recast the level set equation in a simpler form by assuming that the level set function remains a signed distance to the front/interface being captured. As with the original level set equation, the use of an extensional velocity helps maintain this signed-distance function. For some interface-evolution problems, this approach reduces the original level set equation to an ordinary differential equation that is almost trivial to solve. Further, we find that sufficient accuracy is available through a standard Galerkin formulation without any stabilization or discontinuity-capturing terms. Several numerical experiments are conducted to assess the ability of the proposed assumed-gradient level set method to capture the correct solution, particularly in the presence of discontinuities in the extensional velocity or level-set gradient. We examine the convergence properties of the method and its performance in problems where the simplified level set equation takes the form of a Hamilton–Jacobi equation with convex/non-convex Hamiltonian. Importantly, discretizations based on structured and unstructured finite-element meshes of bilinear quadrilateral and linear triangular elements are shown to perform equally well. Copyright © 2005 John Wiley & Sons, Ltd. [source]
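
A minimal sketch may help convey the central simplification: if the level set function phi remains a signed distance, then |grad phi| = 1 and the advection equation phi_t + V_ext |grad phi| = 0 collapses to the nodal ODE phi_t = -V_ext. The grid, speed, and time step below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Sketch of the assumed-gradient simplification: with |grad phi| = 1, the
# level set equation reduces to phi_t = -V_ext at every node, integrated
# here with forward Euler.

x = np.linspace(-1.0, 1.0, 201)
phi = np.abs(x) - 0.5          # signed distance to the front at x = +/-0.5
V_ext = 0.1                    # constant extensional (front) speed

dt, n_steps = 0.01, 100
for _ in range(n_steps):
    phi -= dt * V_ext          # the "almost trivial" ODE update per node

# After t = 1 the front has moved outward by V_ext * t = 0.1
print("zero crossings near:", x[np.where(np.diff(np.sign(phi)))[0]])
```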


A high-order finite difference method for incompressible fluid turbulence simulations

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2003
Eric Vedy
Abstract A Hermitian–Fourier numerical method for solving the Navier–Stokes equations with one non-homogeneous direction had been presented by Schiestel and Viazzo (Internat. J. Comput. Fluids 1995; 24(6):739). In the present paper, an extension of the method is devised for solving problems with two non-homogeneous directions. This extension is indeed not trivial, since new algorithms are necessary, in particular for the pressure calculation. The method uses Hermitian finite differences in the non-periodic directions, whereas Fourier pseudo-spectral developments are used in the remaining periodic direction. Pressure–velocity coupling is solved by a simplified Poisson equation for the pressure correction, using a direct method of solution that preserves Hermitian accuracy for the pressure. The turbulent flow after a backward-facing step has been used as a test case to show the capabilities of the method. The applications in view mainly concern the numerical simulation of turbulent and transitional flows. Copyright © 2003 John Wiley & Sons, Ltd. [source]
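
For readers unfamiliar with Hermitian (compact) finite differences, the sketch below shows the classical fourth-order Padé scheme for a first derivative on a periodic grid. It is a generic textbook illustration with our own grid and test function, not the paper's solver, which additionally couples such differences with Fourier pseudo-spectral expansions and a pressure-correction step.

```python
import numpy as np

# Fourth-order Hermitian (compact/Pade) scheme for f': on each node,
#   f'_{i-1} + 4 f'_i + f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / h,
# so all derivatives come from one (here dense, periodic) linear solve.

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
f = np.sin(x)

A = np.zeros((n, n))
rhs = np.zeros(n)
for i in range(n):
    im, ip = (i - 1) % n, (i + 1) % n   # periodic wrap-around
    A[i, im], A[i, i], A[i, ip] = 1.0, 4.0, 1.0
    rhs[i] = 3.0 * (f[ip] - f[im]) / h

df = np.linalg.solve(A, rhs)
# error ~1e-6 at n = 64; halving h cuts it ~16x (fourth order)
print("max error vs cos(x):", np.max(np.abs(df - np.cos(x))))
```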


Compositionality issues in discrete, continuous, and hybrid systems

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 5 2001
A. J. van der Schaft
Abstract Models of complex dynamical systems are often built by connecting submodels of smaller parts. The key to this method is the operation of 'interconnection' or 'composition', which serves to define the whole in terms of its parts. In the setting of smooth differential equations, the composition operation has often been regarded as trivial, but a quite different attitude is found in the discrete domain, where several definitions of composition have been proposed and different semantics have been developed. The non-triviality of composition carries over from discrete systems to hybrid systems. The paper discusses the compositionality issue in the context of discrete, continuous, and hybrid systems, mainly on the basis of a number of examples. Copyright © 2001 John Wiley & Sons, Ltd. [source]


Faking on Personality Measures: Implications for selection involving multiple predictors

INTERNATIONAL JOURNAL OF SELECTION AND ASSESSMENT, Issue 1 2009
Patrick D. Converse
The potential for faking on noncognitive measures in high-stakes testing situations remains a concern for many selection researchers and practitioners. However, the majority of previous research examining the practical effects of faking on noncognitive assessments has focused on these measures in isolation, rather than the more common situation in which they are used in combination with other predictors. The present simulation examined the effects of faking on a conscientiousness measure on criterion-related validity, mean performance of those selected, and selection decision consistency when hiring decisions were based on this measure alone vs. in combination with two other predictors, across a range of likely selection scenarios. Overall, results indicated that including additional predictors substantially reduced, but did not eliminate, the negative effects of faking. Faking effects varied across outcomes and selection scenarios, with effects ranging from trivial to noteworthy even for multiple-predictor selection. Implications for future research and practice are discussed. [source]
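
The flavor of such a simulation can be conveyed in a few lines. The sketch below is our own toy setup, with assumed predictor weights, faking rate, shift size, and selection ratio rather than the article's design: a fraction of applicants inflate their conscientiousness score, and the mean performance of those selected is compared for a single predictor vs. a three-predictor composite.

```python
import numpy as np

# Toy faking simulation (illustrative assumptions, not the paper's design).
rng = np.random.default_rng(0)
n, fake_rate, fake_shift = 10_000, 0.3, 1.0
true_weights = np.array([0.3, 0.3, 0.3])       # all three predict performance

honest = rng.standard_normal((n, 3))           # [conscientiousness, p2, p3]
performance = honest @ true_weights + rng.standard_normal(n) * 0.8

observed = honest.copy()
fakers = rng.random(n) < fake_rate
observed[fakers, 0] += fake_shift              # faking inflates predictor 0 only

def mean_performance_of_selected(scores, top=0.10):
    cut = np.quantile(scores, 1 - top)         # top-down selection
    return performance[scores >= cut].mean()

print("conscientiousness alone:",
      round(mean_performance_of_selected(observed[:, 0]), 3))
print("composite of 3 predictors:",
      round(mean_performance_of_selected(observed.mean(axis=1)), 3))
```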


Backbone Diversity Analysis in Catalyst Design

ADVANCED SYNTHESIS & CATALYSIS (PREVIOUSLY: JOURNAL FUER PRAKTISCHE CHEMIE), Issue 3 2009

Abstract We present a computer-based heuristic framework for designing libraries of homogeneous catalysts. In this approach, a set of given bidentate ligand-metal complexes is disassembled into key substructures ("building blocks"). These include metal atoms, ligating groups, backbone groups, and residue groups. The computer then rearranges these building blocks into a new library of virtual catalysts. We then tackle the practical problem of choosing a diverse subset of catalysts from this library for actual synthesis and testing. This is not trivial, since 'catalyst diversity' itself is a vague concept. Thus, we first define and quantify this diversity as the difference between key structural parameters (descriptors) of the catalysts, for the specific reaction at hand. Subsequently, we propose a method for choosing diverse sets of catalysts based on catalyst backbone selection, using weighted D-optimal design. The computer selects catalysts with different backbones, where the difference is measured as a distance in the descriptors space. We show that choosing such a D-optimal subset of backbones gives more diversity than a simple random sampling. The results are demonstrated experimentally in the nickel-catalysed hydrocyanation of 3-pentenenitrile to adiponitrile. Finally, the connection between backbone diversity and catalyst diversity, and the implications towards in silico catalysis design are discussed. [source]
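
A hedged sketch of the selection step the abstract describes: greedy D-optimal choice of k backbones from a descriptor matrix, compared against random sampling. The descriptor values and subset size are our placeholders, and the weighting used in the paper is omitted here for brevity.

```python
import numpy as np

# Greedy (unweighted) D-optimal subset selection: pick k rows of the
# descriptor matrix X maximizing det(X_S^T X_S + eps*I), then compare
# the achieved log-determinant to a random subset's.

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 4))        # 50 virtual backbones, 4 descriptors
k, eps = 8, 1e-6

def logdet(rows):
    S = X[rows]
    return np.linalg.slogdet(S.T @ S + eps * np.eye(X.shape[1]))[1]

chosen = []
for _ in range(k):                      # greedy build-up, one backbone at a time
    rest = [i for i in range(len(X)) if i not in chosen]
    chosen.append(max(rest, key=lambda i: logdet(chosen + [i])))

random_pick = list(rng.choice(len(X), size=k, replace=False))
print("log det, D-optimal:", round(logdet(chosen), 2))
print("log det, random   :", round(logdet(random_pick), 2))
```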


Adaptations for Nothing in Particular

JOURNAL FOR THE THEORY OF SOCIAL BEHAVIOUR, Issue 1 2004
Simon J. Hampton
An element of the contemporary dispute among evolution-minded psychologists and social scientists hinges on the conception of mind as being adapted as opposed to adaptive. This dispute is not trivial. The possibility that human minds are both adapted and adaptive, courtesy of selection pressures that were social in nature, is of particular interest to a putative evolutionary social psychology. I suggest that the notion of an evolved psychological adaptation in social psychology can be retained only if it is accepted that this adaptation is for social interaction, has no rigidly fixed function, and cannot be described in terms of algorithmic decision rules or fixed inferential procedures. What is held to be the reason for encephalisation in the Homo lineage, together with some of the best-attested ideas in social psychology, offers license for such an approach. [source]


Advances in powder diffraction pattern indexing: N-TREOR09

JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 5 2009
Angela Altomare
Powder pattern indexing can still be a challenge, despite the great recent advances in theoretical approaches, computer speed and experimental devices. Multiple plausible unit cells, belonging to different crystal systems, are frequently found by the indexing programs, and recognition of the correct one may not be trivial. The task is, however, of extreme importance: in case of failure, a lot of effort and computing time may be wasted. The classical figures of merit for estimating unit-cell reliability {i.e. M20 [de Wolff (1968). J. Appl. Cryst. 1, 108–113] and FN [Smith & Snyder (1979). J. Appl. Cryst. 12, 60–65]} sometimes fail. For this reason, a new figure of merit has been introduced in N-TREOR09, the updated version of the indexing package N-TREOR [Altomare, Giacovazzo, Guagliardi, Moliterni, Rizzi & Werner (2000). J. Appl. Cryst. 33, 1180–1186], combining the information supplied by M20 with additional parameters such as the number of unindexed lines, the degree of overlap in the pattern (the so-called number of statistically independent observations), the symmetry derived from the automatic evaluation of the extinction group, and the agreement between the calculated and observed profiles. The use of the new parameters requires a dramatic modification of the procedures used worldwide: in the approach presented here, the extinction symbol and the unit cell are estimated simultaneously. N-TREOR09 also benefits from an improved indexing procedure in the triclinic system and has been integrated into EXPO2009, the updated version of EXPO2004 [Altomare, Caliandro, Camalli, Cuocci, Giacovazzo, Moliterni & Rizzi (2004). J. Appl. Cryst. 37, 1025–1028]. The application of the new procedure to a large set of test structures is described. [source]
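
De Wolff's classical M20 is simple to compute once observed and calculated Q values (Q proportional to 1/d^2) are in hand. The sketch below uses synthetic placeholder data, not a real pattern, and implements only the classical figure of merit, not N-TREOR09's extended one.

```python
import numpy as np

# de Wolff (1968) figure of merit: M20 = Q20 / (2 * <|dQ|> * N20), where
# Q20 is the Q of the 20th observed line, <|dQ|> the mean gap between each
# observed Q and the nearest calculated Q, and N20 the number of calculated
# lines up to Q20. Higher M20 = more reliable indexing.

def m20(q_obs, q_calc):
    q_obs = np.sort(np.asarray(q_obs))[:20]          # first 20 indexed lines
    q_calc = np.asarray(q_calc)
    q20 = q_obs[-1]
    dq = np.array([np.min(np.abs(q_calc - q)) for q in q_obs])
    n20 = np.sum(q_calc <= q20)
    return q20 / (2.0 * dq.mean() * n20)

rng = np.random.default_rng(2)
q_calc = np.sort(rng.uniform(0.01, 0.5, 60))         # hypothetical cell's lines
q_obs = q_calc[:20] + rng.normal(0.0, 1e-4, 20)      # observed = calc + noise
print("M20 =", round(m20(q_obs, q_calc), 1))
```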


The panbiogeography of hagfishes: a reply to Briggs's anachronistic criticism

JOURNAL OF BIOGEOGRAPHY, Issue 3 2009
Mauro J. Cavalcanti
Abstract Briggs's (2009) criticisms of Cavalcanti & Gallo's (2008) panbiogeographical study of hagfishes are shown to be either supportive of the criticized paper's main findings or trivial, arising from an incomplete understanding of the panbiogeographical method and synthesis and from methodological prejudice against this conceptual framework. [source]


From critical care to comfort care: the sustaining value of humour

JOURNAL OF CLINICAL NURSING, Issue 8 2008
Ruth Anne Kinsman Dean PhD
Aims and objectives: To identify commonalities in the findings of two research studies on humour in diverse settings, to illustrate the value of humour in teamwork and patient care despite differing contexts. Background: Humour research in health care commonly identifies the value of humour for enabling communication, fostering relationships, easing tension and managing emotions. Other studies identify situations involving serious discussion, life-threatening circumstances and high anxiety as places where humour may not be appropriate. Our research demonstrates that humour is significant even where such circumstances are commonplace. Method: Clinical ethnography was the method for both studies. Each researcher conducted observational fieldwork in the cultural context of a healthcare setting, writing extensive fieldnotes after each period of observation. Additional data sources were informal conversations with patients and families and semi-structured interviews with members of the healthcare team. Data analysis involved line-by-line analysis of transcripts and fieldnotes, with identification of codes and eventual collapse into categories and overarching themes. Results: Common themes from both studies included the value of humour for teamwork, emotion management and maintaining human connections. Humour served to enable co-operation, relieve tensions, develop emotional flexibility and 'humanise' the healthcare experience for both caregivers and recipients of care. Conclusions: Humour is often considered trivial or unprofessional; this research verifies that it is neither. The value of humour resides not in its capacity to alter physical reality, but in its capacity for affective or psychological change, which enhances the humanity of an experience for both care providers and recipients of care. Relevance to clinical practice: In the present era, which emphasises technology, efficiency and outcomes, humour is crucial for promoting team relationships and for maintaining the human dimension of health care. Nurses should not be reluctant to use humour as a part of compassionate and personalised care, even in critical situations. [source]


Justice and local community change: Towards a substantive theory of justice

JOURNAL OF COMMUNITY PSYCHOLOGY, Issue 6 2002
Neil M. Drew
Justice is a core principle in community psychology, yet it has been the subject of relatively little systematic research. In the social psychological literature, on the other hand, there is a long tradition of research on justice in social life. In this article, the potential benefits of integrating the social justice aspirations of community psychology with the conceptualizations of procedural and distributive justice from social psychology are discussed in the context of planned community change. The benefits of exploring justice in this way are illustrated with reference to a research project examining public perceptions of the fairness of roadside tree lopping. Although the issue may appear trivial, it was seen by the local residents as important. The results support the development, application, and utility of a social community psychology of justice to issues of community change. © 2002 Wiley Periodicals, Inc. [source]


Electronic structure, chemical bonding, and finite-temperature magnetic properties of full Heusler alloys

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 1 2006
Yasemin Kurtulus
Abstract The electronic structure, chemical bonding, and magnetic properties of 15 full Heusler alloys X2MnZ have been studied on the basis of density-functional theory using the TB-LMTO-ASA approach and the local-density approximation (LDA), as well as the generalized-gradient approximation (GGA). Correlations between the chemical bonding derived from crystal orbital Hamilton population (COHP) analysis and magnetic phenomena are evident, and different mechanisms leading to spin polarization and ferromagnetism are derived. As long as a magnetically active metal atom X is present, antibonding X–X and X–Mn interactions at the Fermi level drive the systems into the ferromagnetic ground state; only if X is nonmagnetic (such as in Cu2MnZ) do antibonding Mn–Mn interactions arise, which again lead to ferromagnetism. Finite-temperature effects (Curie temperatures) are analyzed using a mean-field description, and a surprisingly simple (or trivial) relationship between structural properties (Mn–Mn interatomic distances) and TC is found, which is of semiquantitative use for predicting the latter. © 2005 Wiley Periodicals, Inc. J Comput Chem 27: 90–102, 2006 [source]


What Do Corporate Default Rules and Menus Do?

JOURNAL OF EMPIRICAL LEGAL STUDIES, Issue 2 2009
An Empirical Examination
Much of corporate law consists of nonmandatory statutes. Although scholars have examined the effect of nonbinding corporate law from a theoretical perspective, only inconclusive event studies explore the real-world impact of these laws. This article empirically examines the impact of nonmandatory state anti-takeover statutes. Several conclusions emerge. Despite its nonbinding nature, corporate law makes an enormous difference in outcomes, contradicting those who claim that corporate law is trivial. Two types of nonmandatory corporate laws have particularly important effects. Corporate default laws that favor management are considerably less likely to be changed by companies than default laws favoring investors, supporting those who believe that corporate default laws can ameliorate asymmetries in incentives or bargaining power between managers and investors. Corporate "menu" laws (opt-in laws that are drafted by the state but do not apply as default rules) also facilitate the use of some provisions, supporting those who believe that nonmandatory corporate law reduces transaction costs, such as the cost of updating corporate charters to reflect developments in the economy. [source]


Figulla ASD Occluder versus Amplatzer Septal Occluder: A Comparative Study on Validation of a Novel Device for Percutaneous Closure of Atrial Septal Defects

JOURNAL OF INTERVENTIONAL CARDIOLOGY, Issue 6 2009
AYSENUR PAC M.D.
Objectives: The Occlutech Figulla ASD Occluder (FSO) is an alternative device to the Amplatzer Septal Occluder (ASO), with structural innovations including increased flexibility, a minimized amount of implanted material, and the absence of the left atrial clamp. We aimed to report our experience with the FSO and compare the outcomes of this novel device with the ASO. Interventions: Between December 2005 and February 2009, 75 patients diagnosed with secundum atrial septal defects underwent transcatheter closure. The FSO device was used in 33 patients and the ASO in 42. Results: Patient characteristics, stretched size of the defect, procedure time, and fluoroscopy time were similar between the groups. However, the difference between device waist size and the stretched diameter of the defect was significantly higher, the device delivery sheath was significantly larger, and the device left disc size was significantly lower in the FSO group. In all subjects, the residual shunt was small to trivial during follow-up, and the reduction in the prevalence of residual shunt over time was similar in both groups (P = 0.68). We found no differences in complication rate between the two devices; however, device embolization to the pulmonary bifurcation in one patient in the FSO group was recorded as a major complication. Conclusions: Both devices are clinically safe and effective in ASD closure. The FSO device has outcomes similar to the ASO device. Difficulties in selecting the correct device size in larger defects and the larger venous sheath requirement need to be evaluated in further studies. [source]


Control of the morphology and the size of complex coacervate microcapsules during scale-up

AICHE JOURNAL, Issue 6 2009
C. Y. G. Lemetter
Abstract Scale-up of complex coacervation, a fat encapsulation technology, is not trivial, since microcapsule morphology and size are highly affected by the processing conditions. So far it has been achieved empirically (a trial-and-error approach). The goal of this study was to produce, at various scales, capsules with a single oil droplet as the core material, small enough to be below the sensory threshold. The turbulence level was identified as the main scale-up criterion, and a master curve could be drafted showing the capsule mean diameter as a function of the Reynolds number, independent of the production scale. For a parent emulsion with a specific oil droplet size (12–15 μm), the Reynolds number had to be maintained above a critical value (15,000) to avoid capsule agglomeration with multiple oil cores and large particle sizes. To avoid aggregation, this turbulence level had to be kept until the temperature dropped below a critical value (14°C for a cooling rate of 35°C/2 h). Applying these findings led to a successful scale-up from bench (2 L) to a pilot plant scale of 50 L. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]
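
The scale-up rule reported here reduces to a one-line check. The sketch below assumes a standard impeller Reynolds number, Re = rho * N * D^2 / mu, and uses illustrative fluid properties and vessel figures; the study's actual geometry and agitation speeds are not given in the abstract. Only the critical value of 15,000 comes from the text.

```python
# Check that the impeller Reynolds number stays above the reported
# critical value (15,000) at every production scale.

RHO, MU, RE_CRIT = 1000.0, 1.0e-3, 15_000   # water-like slurry (assumed)

def impeller_reynolds(n_rps: float, d_impeller_m: float) -> float:
    return RHO * n_rps * d_impeller_m**2 / MU

for scale, (n_rps, d_m) in {"bench 2 L": (10.0, 0.05),
                            "pilot 50 L": (3.0, 0.15)}.items():
    re = impeller_reynolds(n_rps, d_m)
    verdict = "OK" if re > RE_CRIT else "agglomeration risk"
    print(f"{scale}: Re = {re:,.0f} -> {verdict}")
```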


Positioning of salt gradients in ion-exchange SMB

AICHE JOURNAL, Issue 3 2003
Joukje Houwing
Salt gradients can be used to improve the efficiency of ion-exchange separations in simulated moving-bed systems. The gradient, formed by the use of feed and desorbent solutions of different salt concentrations, introduces regions of increased and decreased affinity of, for example, proteins for the matrix. Several gradient shapes can be formed, depending on the flow-rate ratios and salt concentrations used. Only some of these effectively increase throughput or decrease desorbent consumption. Correct gradient positioning is essential, but not trivial, because salt is adsorbed in the resin. A procedure is developed that selects the flow-rate ratios allowing correct positioning of gradients; it is based on wave theory and incorporates the nonlinear Donnan isotherm of salt on ion-exchange resins. Predictions are verified by experiments combined with a mathematical equilibrium-stage (true moving-bed) model. Upward and downward gradients are compared with respect to the use of desorbent and salt. [source]


Xanthogenate nucleic acid isolation from cultured and environmental cyanobacteria

JOURNAL OF PHYCOLOGY, Issue 1 2000
Daniel Tillett
The isolation of high-quality nucleic acids from cyanobacterial strains, in particular environmental isolates, has proven far from trivial. We present novel techniques for the extraction of high molecular weight DNA and RNA from a range of cultured and environmental cyanobacteria, including strains belonging to the genera Microcystis, Lyngbya, Pseudanabaena, Aphanizomenon, Nodularia, Anabaena, and Nostoc, based on the use of the nontoxic polysaccharide-solubilizing compound xanthogenate. These methods are rapid, require no enzymatic or mechanical cell disruption, and have been used to isolate both DNA and RNA free of enzyme inhibitors or nucleases. In addition, these procedures have proven critical in the molecular analysis of bloom-forming and other environmental cyanobacterial isolates. Finally, these techniques are of general microbiological utility for a diverse range of noncyanobacterial microorganisms, including Gram-positive and Gram-negative bacteria and the Archaea. [source]


Strategic postures of political marketing: an exploratory operationalization

JOURNAL OF PUBLIC AFFAIRS, Issue 1 2006
Stephan C. Henneberg
In contrast to most political marketing theories, which imply that concepts such as 'voter orientation' or 'voter-centric political management' are trivial and uni-dimensional, this article takes its starting point from an alternative perspective. It draws on the concept of political marketing 'postures', i.e. a multi-faceted conceptual entity based on varied dimensions of political marketing orientations. The main duality consists of the constructs of 'leading' and 'following', with an auxiliary (and complementary) dimension of 'relationship building'. This article provides an exploratory methodology to operationalize this concept, which is also initially tested empirically, using expert judgements as well as the electorate's perceptions. Changing postures are exemplified within a longitudinal application of the concept to perceptions of Tony Blair as Prime Minister. Copyright © 2006 John Wiley & Sons, Ltd. [source]


OF EAGLES AND CROWS, LIONS AND OXEN: Blake and the Disruption of Ethics

JOURNAL OF RELIGIOUS ETHICS, Issue 1 2009
D. M. Yeager
ABSTRACT Why focus on the work of William Blake in a journal dedicated to religious ethics? The question is neither trivial nor rhetorical. Blake's work is certainly not in anyone's canon of significant texts for the study of Christian or, more broadly, religious ethics. Yet Blake, however subversive his views, sought to lay out a Christian vision of the good, alternated between prophetic denunciations of the world's folly and harrowing laments over the wreck of the world's promise, and wrote poetry as if poetry might mend the world. Setting imagination against the calculations of reason and the comfort of custom, Blake's poems inspire questions about the relationship of ethics to prophecy, and open the possibility that ethics itself would be markedly enriched could it find a place for what Thomas J. J. Altizer has called Christian epic poetry. [source]


Understanding software maintenance and evolution by analyzing individual changes: a literature review

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 6 2009
Hans Christian Benestad
Abstract Understanding, managing and reducing the costs and risks inherent in change are key challenges of software maintenance and evolution, addressed in empirical studies with many different research approaches. Change-based studies analyze data that describe the individual changes made to software systems. This approach can be effective in discovering cost and risk factors that are hidden at more aggregated levels. However, it is not trivial to derive appropriate measures of individual changes for specific measurement goals. The purpose of this review is to improve change-based studies by (1) summarizing how attributes of changes have been measured to reach specific study goals and (2) describing current achievements and challenges, leading to a guide for future change-based studies. Thirty-four papers conformed to the inclusion criteria. Forty-three attributes of changes were identified and classified according to a conceptual model developed for the purpose of this classification. The goal of each study was either to characterize the evolution process, to assess causal factors of cost and risk, or to predict costs and risks. Effective accumulation of knowledge across change-based studies requires precise definitions of attributes and measures of change. We recommend that new change-based studies base such definitions on the proposed conceptual model. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Automatic construction of accurate application call graph with library call abstraction for Java

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 4 2007
Weilei Zhang
Abstract Call graphs are widely used to represent calling relationships among methods. However, in many software engineering applications, such as program understanding and testing, there is not much interest in the calling relationships among library methods, especially when the library is very big and the calling relationships are not trivial. This paper explores approaches for generating more accurate application call graphs for Java. A new data reachability algorithm is proposed and fine-tuned to resolve library callbacks accurately. Compared with an algorithm that resolves library callbacks by traversing the whole-program call graph, the fine-tuned data reachability algorithm produces fewer spurious callback edges. In empirical studies, the new algorithm shows a significant reduction in the number of spurious callback edges. On the basis of the new algorithm, a library abstraction can be calculated automatically and applied in amortized slicing and dataflow testing. Copyright © 2007 John Wiley & Sons, Ltd. [source]
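
To fix ideas, the toy sketch below shows the baseline approach the paper improves upon: collapsing library internals out of a call graph while retaining callback edges found by traversing the library subgraph. All method names and the "lib." prefix convention are invented for illustration; the paper's data reachability algorithm resolves callbacks more precisely than this whole-graph traversal.

```python
from collections import defaultdict

edges = {  # method -> list of callees
    "app.main": ["app.sort_users", "lib.log"],
    "app.sort_users": ["lib.sort"],
    "lib.sort": ["lib.partition"],
    "lib.partition": ["app.compare"],   # callback into the application
    "app.compare": [],
    "lib.log": [],
}

def is_lib(m):
    return m.startswith("lib.")

def app_call_graph(edges):
    summary = defaultdict(set)
    for caller, callees in edges.items():
        if is_lib(caller):
            continue                      # drop library-internal callers
        for callee in callees:
            if not is_lib(callee):
                summary[caller].add(callee)
                continue
            summary[caller].add(callee)   # keep the library entry point
            stack, seen = [callee], set()
            while stack:                  # walk the library subgraph
                m = stack.pop()
                if m in seen:
                    continue
                seen.add(m)
                for n in edges.get(m, []):
                    if is_lib(n):
                        stack.append(n)
                    else:
                        summary[caller].add(n)   # summarized callback edge
    return dict(summary)

# app.sort_users gains a callback edge to app.compare via lib.sort
print(app_call_graph(edges))
```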


Pair designing as practice for enforcing and diffusing design knowledge

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 6 2005
Emilio Bellini
Abstract Evolving a software system's design requires that the members of the team acquire a deep and complete knowledge of the domain, the architectural components, and their integration. Such information is scarcely addressed within the design documentation, and it is not trivial to derive. A strategy for enforcing awareness of such hidden aspects of the software's design is needed. One of the expected benefits of pair programming is fostering (tacit) knowledge building between the members of the pair and hastening its diffusion within the project team. We have applied the paradigm of pair programming to the design phase and have named it 'pair designing'. We carried out an experiment and a replication in order to understand whether pair designing can be used as an effective means for diffusing and enforcing design knowledge while evolving the system's design. The results suggest that pair designing could be a suitable means to disseminate and enforce design knowledge. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Bayesian measures of model complexity and fit

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 4 2002
David J. Spiegelhalter
Summary. We consider the problem of comparing complex hierarchical models in which the number of parameters is not clearly defined. Using an information theoretic argument we derive a measure pD for the effective number of parameters in a model as the difference between the posterior mean of the deviance and the deviance at the posterior means of the parameters of interest. In general, pD approximately corresponds to the trace of the product of Fisher's information and the posterior covariance, which in normal models is the trace of the 'hat' matrix projecting observations onto fitted values. Its properties in exponential families are explored. The posterior mean deviance is suggested as a Bayesian measure of fit or adequacy, and the contributions of individual observations to the fit and complexity can give rise to a diagnostic plot of deviance residuals against leverages. Adding pD to the posterior mean deviance gives a deviance information criterion for comparing models, which is related to other information criteria and has an approximate decision-theoretic justification. The procedure is illustrated in some examples, and comparisons are drawn with alternative Bayesian and classical proposals. Throughout it is emphasized that the quantities required are trivial to compute in a Markov chain Monte Carlo analysis. [source]
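
As the summary notes, the required quantities drop out of MCMC output directly. A minimal sketch for a normal-mean model with known variance (our illustrative choice, not one of the paper's examples): pD is the posterior mean deviance minus the deviance at the posterior mean, and DIC adds pD back to the mean deviance.

```python
import numpy as np

# pD and DIC from (simulated) posterior draws for a normal-mean model:
#   D(theta) = -2 log L(theta); pD = mean(D) - D(posterior mean);
#   DIC = mean(D) + pD.

rng = np.random.default_rng(3)
y = rng.normal(1.0, 1.0, size=100)          # data, sigma known = 1

# stand-in "posterior draws" of the mean (in practice: sampler output)
draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=5000)

def deviance(mu):
    return -2.0 * np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (y - mu) ** 2)

d_draws = np.array([deviance(m) for m in draws])
d_bar = d_draws.mean()                  # posterior mean deviance (fit)
p_d = d_bar - deviance(draws.mean())    # effective number of parameters
print(f"pD  = {p_d:.2f}")               # close to 1: one free parameter
print(f"DIC = {d_bar + p_d:.2f}")
```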


A mathematical model of immune competition related to cancer dynamics

MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 6 2010
Ilaria Brazzoli
Abstract This paper deals with the qualitative analysis of a model describing the competition among cell populations, each of them expressing a peculiar cooperating and organizing behavior. The mathematical framework in which the model has been developed is the kinetic theory for active particles. The main result of this paper concerns the analysis of the asymptotic behavior of the solutions. We prove that, in the case where the only equilibrium solution is the trivial one, the system evolves in such a way that the immune system, after being activated, goes back toward a physiological situation, while the tumor cells evolve as a sort of progressing travelling wave characterizing a typical equilibrium/latent situation. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Infima of universal energy functionals on homotopy classes

MATHEMATISCHE NACHRICHTEN, Issue 15 2006
Stefan Bechtluft-Sachs
Abstract We consider the infima, over homotopy classes, of energy functionals E defined on smooth maps f: M^n → V^k between compact connected Riemannian manifolds. If M contains a submanifold L of codimension greater than the degree of E, then the infimum is determined by the homotopy class of the restriction of f to M \ L. Conversely, if the infimum on a homotopy class of a functional of at least conformal degree vanishes, then the map is trivial in homology of high degrees. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
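
A restatement of the two results in standard notation may help; the symbol E_inf(f) for the infimum over the homotopy class of f is ours, not the author's.

```latex
% Hedged paraphrase of the abstract; the notation $E_{\inf}$ is ours.
Let $E$ be an energy functional of degree $d$ on smooth maps
$f\colon M^n \to V^k$ between compact connected Riemannian manifolds,
and write $E_{\inf}(f)$ for the infimum of $E$ over the homotopy class of $f$.

(i) If $M$ contains a submanifold $L$ with $\operatorname{codim} L > d$,
then $E_{\inf}(f)$ is determined by the homotopy class of the restriction
$f|_{M \setminus L}$.

(ii) If $E$ has at least conformal degree and $E_{\inf}(f) = 0$,
then $f$ induces the trivial map on homology in high degrees.
```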


Homoplasy and mutation model at microsatellite loci and their consequences for population genetics analysis

MOLECULAR ECOLOGY, Issue 9 2002
Arnaud Estoup
Abstract Homoplasy has recently attracted the attention of population geneticists, as a consequence of the popularity of highly variable stepwise mutating markers such as microsatellites. Microsatellite alleles generally refer to DNA fragments of different size (electromorphs). Electromorphs are identical in state (i.e. have identical size) but are not necessarily identical by descent, owing to convergent mutation(s). Homoplasy occurring at microsatellites is thus referred to as size homoplasy. Using new analytical developments and computer simulations, we first evaluate the effect of the mutation rate, the mutation model, the effective population size and the time of divergence between populations on size homoplasy at the within- and between-population levels. We then review the few experimental studies that used various molecular techniques to detect size homoplasious events at some microsatellite loci. The relationship between this molecularly accessible size homoplasy and the actual amount of size homoplasy is not trivial, the former being considerably influenced by the molecular structure of microsatellite core sequences. In a third section, we show that homoplasy at microsatellite electromorphs does not represent a significant problem for many types of population genetics analyses realized by molecular ecologists, the large amount of variability at microsatellite loci often compensating for their homoplasious evolution. The situations where size homoplasy may be more problematic involve high mutation rates and large population sizes together with strong allele size constraints. [source]
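
A toy simulation makes the notion of size homoplasy concrete. The sketch below assumes a strict stepwise mutation model with invented rates, not the paper's simulations: lineages that mutate away from the ancestral repeat count and return are identical in state with the ancestor but not identical by descent.

```python
import numpy as np

# Size homoplasy under a strict stepwise mutation model: each mutation
# changes allele size by +/- 1 repeat, so same-size alleles (electromorphs)
# are identical in state but not necessarily identical by descent.

rng = np.random.default_rng(4)
n_lineages, n_generations, mu = 200, 2000, 5e-4
ancestral = 20

sizes = np.full(n_lineages, ancestral)
ever_mutated = np.zeros(n_lineages, dtype=bool)

for _ in range(n_generations):
    hit = rng.random(n_lineages) < mu
    sizes[hit] += rng.choice([-1, 1], size=hit.sum())
    ever_mutated |= hit

# lineages back at the ancestral size despite having mutated: identical in
# state with the ancestor, yet not identical by descent (size homoplasy)
homoplasious = (sizes == ancestral) & ever_mutated
print(f"size-homoplasious lineages: {homoplasious.sum()} of {n_lineages}")
```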