Different Data Sets



Selected Abstracts


Older persons' experiences of whole systems: the impact of health and social care organizational structures

JOURNAL OF NURSING MANAGEMENT, Issue 2 2008
BRENDAN McCORMACK BSc (Hons) Nursing, DPhil (Oxon)
Aim(s): An in-depth case study of whole systems working. Background: This paper reports on the second part of a two-part study exploring whole systems working. Part 1 of the study focused on an in-depth review of the literature pertaining to continuity of care and service integration. The second part, reported here, focused on an in-depth case study of one whole system. Evaluation: Informed by the findings of part 1 of the study, data collection methods included in-depth interviews, real-time tracking of 18 older people, focus groups and consensus conferencing. Different data sets were analysed individually and synthesized using matrices derived from the literature review findings. Key issue(s): Key themes from data synthesis include: (1) access to the most appropriate services; (2) service fragmentation; (3) continuity of care; and (4) routinized care. Conclusion(s): The four themes of the case study reflect the need to address issues of demarcation of professional responsibilities, complicated channels of communication, information flows, assessment and reassessment in whole systems working. Implications for nursing management: The findings highlight the impact of disempowering relationships on actual continuity of care and on perceptions of quality among service users and providers. Lessons need to be learnt from specialist services and applied to service delivery in general. [source]


Development of a hydrometeorological forcing data set for global soil moisture estimation

INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 13 2005
A. A. Berg
Abstract Off-line land surface modeling simulations require accurate meteorological forcing with consistent spatial and temporal resolutions. Although reanalysis products present an attractive data source for these types of applications, bias in many of the reanalysis fields limits their use for hydrological modeling. In this study, we develop a global 0.5° forcing data set for the period 1979–1993 on a 6-hourly time step through application of a bias correction scheme to reanalysis products. We then use this forcing data to drive a land surface model for global estimation of soil moisture and other hydrological states and fluxes. The simulated soil moisture estimates are compared to in situ measurements, satellite observations and to a modeled data set of root zone soil moisture produced by a separate land surface model using a different data set of hydrometeorological forcing. In general, there is good agreement between anomalies in modeled and observed (in situ) root zone soil moisture. Similarly, for the surface soil wetness state, modeled estimates and satellite observations are in general statistical agreement; however, correlations decline with increasing vegetation amount. Comparisons to a modeled data set of soil moisture also demonstrate that the two simulations produce soil moisture anomaly time series that are well correlated, despite being derived from different land surface models, using different data sources for meteorological forcing, and with different specifications of the land surface properties. Copyright © 2005 Royal Meteorological Society [source]
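The bias correction scheme itself is not described in the abstract. As a rough illustration only (the actual scheme in the paper may differ substantially), the sketch below rescales a reanalysis field so that its monthly climatology matches an observation-based climatology; all variable names and the synthetic data are hypothetical.

```python
import numpy as np

def monthly_scaling_correction(reanalysis, observed_clim, reanalysis_clim, months):
    """Rescale 6-hourly reanalysis values so their monthly climatology matches
    an observation-based climatology (multiplicative correction, suitable for
    positive-definite fields such as precipitation).

    reanalysis      : 1-D array of 6-hourly values
    observed_clim   : length-12 array, observed monthly climatology
    reanalysis_clim : length-12 array, reanalysis monthly climatology
    months          : 1-D array of month indices (1..12) for each time step
    """
    ratio = np.where(reanalysis_clim > 0, observed_clim / reanalysis_clim, 1.0)
    return reanalysis * ratio[months - 1]

# Example: correct a wet-biased synthetic precipitation series
rng = np.random.default_rng(0)
months = np.repeat(np.arange(1, 13), 4 * 30)        # 4 steps/day, 30 days/month
precip = rng.gamma(shape=0.3, scale=2.0, size=months.size)
rean_clim = np.array([precip[months == m].mean() for m in range(1, 13)])
obs_clim = 0.8 * rean_clim                          # pretend observations are 20% drier
corrected = monthly_scaling_correction(precip, obs_clim, rean_clim, months)
```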


A genetic algorithm approach to solving the anti-covering location problem

EXPERT SYSTEMS, Issue 5 2006
Sohail S. Chaudhry
Abstract: In this paper we address the problem of locating a maximum weighted number of facilities such that no two are within a specified distance from each other. A natural process of evolution approach, more specifically a genetic algorithm, is proposed to solve this problem. It is shown that through the use of a commercially available spreadsheet-based genetic algorithm software package, the decision-maker with a fundamental knowledge of spreadsheets can easily set up and solve this optimization problem. Also, we report on our extensive computational experience using three different data sets. [source]
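The abstract gives no algorithmic detail beyond the use of a genetic algorithm in a spreadsheet package. The sketch below is a minimal, hypothetical GA for the anti-covering location problem (select a maximum-weight subset of sites such that no two selected sites lie within distance r of each other); it is not the authors' spreadsheet implementation, and the penalty weight, operators and instance data are all invented for illustration.

```python
import random
import itertools

def fitness(mask, weights, dist, r):
    """Total weight of selected sites, heavily penalizing separation violations."""
    chosen = [i for i, bit in enumerate(mask) if bit]
    value = sum(weights[i] for i in chosen)
    violations = sum(1 for i, j in itertools.combinations(chosen, 2) if dist[i][j] < r)
    return value - 1000 * violations

def ga_anti_covering(weights, dist, r, pop_size=60, generations=200, pmut=0.02):
    n = len(weights)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda m: fitness(m, weights, dist, r), reverse=True)
        parents = scored[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - bit if random.random() < pmut else bit for bit in child]
            children.append(child)
        pop = parents + children
    best = max(pop, key=lambda m: fitness(m, weights, dist, r))
    return [i for i, bit in enumerate(best) if bit]

# Tiny hypothetical instance: 6 candidate sites on a line, minimum separation r = 3
sites = [0, 2, 4, 7, 9, 12]
weights = [5, 3, 4, 6, 2, 5]
dist = [[abs(a - b) for b in sites] for a in sites]
selected = ga_anti_covering(weights, dist, r=3)
```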


The validation of some methods of notch fatigue analysis

FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 5 2000
Taylor
This paper is concerned with the testing and validation of certain methods of notch analysis which the authors have developed theoretically in earlier publications. These methods were developed for use with finite element (FE) analysis in order to predict the fatigue limits of components containing stress concentrations. In the present work we tested and compared these methods using data from standard notches taken from the literature, covering a range of notch geometries, loading types, R-ratios and materials: a total of 47 different data sets were analysed. The greatest predictive success was achieved with critical-distance methods known as the point, line and area methods: 94% of these predictions fell within 20% of the experimental fatigue limits. This was a significant improvement on previous methods of this kind, e.g. that of Klesnil and Lucas [(1980) Fatigue of Metallic Materials, Elsevier Science]. Methods based on the Smith and Miller [(1978) Int. J. Mech. Sci. 20, 201–206] concept of crack-like notches were successful in 42% of cases; they experienced difficulties dealing with very small notches, and could be improved by using an El Haddad-type correction factor, giving 87% success. An approach known as 'crack modelling' allowed the Smith and Miller method to be used with non-standard stress concentrations, where notch geometry is ill defined; this modification, with the same short-crack correction, had 68% success. It was concluded that the critical-distance approach is more accurate and can be more easily used to analyse components of complex shape; however, the crack modelling approach is sometimes preferable because it can be used with less mesh refinement. [source]
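For orientation, the point and line methods referred to above are usually stated in terms of a material critical distance L derived from the threshold stress intensity range and the plain-specimen fatigue limit; the summary below follows the standard critical-distance literature and is not reproduced from the paper itself.

```latex
L = \frac{1}{\pi}\left(\frac{\Delta K_{\mathrm{th}}}{\Delta\sigma_{0}}\right)^{2}
\qquad \text{(critical distance)}

\text{Point method:}\quad \Delta\sigma\!\left(x = L/2\right) \ge \Delta\sigma_{0}
\;\Rightarrow\; \text{failure at the fatigue limit is predicted}

\text{Line method:}\quad \frac{1}{2L}\int_{0}^{2L} \Delta\sigma(x)\,\mathrm{d}x \ge \Delta\sigma_{0}
```

Here Δσ(x) is the elastic stress range at distance x ahead of the notch root, ΔK_th is the threshold stress intensity range and Δσ0 the plain fatigue limit; the area method applies the same idea by averaging the stress over a small semicircular region around the notch root.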


Are stock assessment methods too complicated?

FISH AND FISHERIES, Issue 3 2004
A J R Cotter
Abstract This critical review argues that several methods for the estimation and prediction of numbers-at-age, fishing mortality coefficients F, and recruitment for a stock of fish are too hard to explain to customers (the fishing industry, managers, etc.) and do not pay enough attention to weaknesses in the supporting data, assumptions and theory. The review is linked to North Sea demersal stocks. First, weaknesses in the various types of data used in North Sea assessments are summarized, i.e. total landings, discards, commercial and research vessel abundance indices, age-length keys and natural mortality (M). A list of features that an ideal assessment should have is put forward as a basis for comparing different methods. The importance of independence and weighting when combining different types of data in an assessment is stressed. Assessment methods considered are Virtual Population Analysis, ad hoc tuning, extended survivors analysis (XSA), year-class curves, catch-at-age modelling, and state-space models fitted by Kalman filter or Bayesian methods. Year-class curves (not to be confused with 'catch-curves') are the favoured method because of their applicability to data sets separately, their visual appeal, simple statistical basis, minimal assumptions, the availability of confidence limits, and the ease with which estimates can be combined from different data sets after separate analyses. They do not estimate absolute stock numbers or F but neither do other methods unless M is accurately known, as is seldom true. [source]
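As a rough sketch of the year-class curve idea favoured here (hedged: the exact formulation and weighting used in the review may differ), the log of an abundance index for a single cohort is regressed on age; the negative slope estimates total mortality Z, and fitting each data set separately keeps the analyses independent before results are combined. The cohort data below are invented.

```python
import numpy as np

def year_class_curve(ages, log_index):
    """Fit ln(index-at-age) = a - Z * age for one cohort by least squares.
    Returns the intercept a, the total mortality estimate Z and its standard error."""
    ages = np.asarray(ages, dtype=float)
    y = np.asarray(log_index, dtype=float)
    X = np.column_stack([np.ones_like(ages), ages])
    coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    a, slope = coef
    resid = y - X @ coef
    dof = len(y) - 2
    se_slope = np.sqrt((resid @ resid / dof) / np.sum((ages - ages.mean()) ** 2))
    return a, -slope, se_slope

# Hypothetical survey indices for one year class observed at ages 1-6
ages = [1, 2, 3, 4, 5, 6]
log_idx = np.log([120.0, 70.0, 40.0, 25.0, 14.0, 8.0])
a, Z, se = year_class_curve(ages, log_idx)
```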


Modelling the hydraulic preferences of benthic macroinvertebrates in small European streams

FRESHWATER BIOLOGY, Issue 1 2007
SYLVAIN DOLÉDEC
Summary 1. Relating processes occurring at a local scale to the natural variability of ecosystems at a larger scale requires the design of predictive models both to orientate stream management and to predict the effects of larger scale disturbances such as climate changes. Our study contributes to this effort by providing detailed models of the hydraulic preferences of 151 invertebrate taxa, mostly identified at the species level. We used an extensive data set comprising 580 invertebrate samples collected using a Surber net from nine sites of second and third order streams during one, two or three surveys at each site. We used nested non-linear mixed models to relate taxon local densities to bed shear stresses estimated from FliesswasserStammTisch hemisphere numbers. 2. An average model by taxon, i.e. independent from surveys, globally explained 25% of the density variations of taxa within surveys. A quadratic relationship existed between the average preferences and the niche breadth of taxa, indicating that taxa preferring extreme hemisphere numbers had a reduced hydraulic niche breadth. A more complete model, where taxa preferences vary across surveys, globally explained 38% of the variation of taxa densities within surveys. Variations in preferences across surveys were weak for taxa preferring extreme hemisphere numbers. 3. There was a significant taxonomic effect on preferences computed from the complete model. By contrast, season, site, average hemisphere number within a survey and average density of taxa within a survey used as covariates did not consistently explain shifts in taxon hydraulic preferences across surveys. 4. The average hydraulic preferences of taxa obtained from the extensive data set were well correlated to those obtained from two additional independent data sets collected in other regions. The consistency of taxon preferences across regions supports the use of regional preference curves for estimating the impact of river management on invertebrate communities. By contrast, the hydraulic niche breadths of taxa computed from the different data sets were not related. [source]
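The nested non-linear mixed models cannot be reconstructed from the abstract alone. As a minimal, hypothetical stand-in for the core idea, the sketch below fits a unimodal (Gaussian-shaped) response of a taxon's density to the FST hemisphere number, whose optimum and width play the roles of hydraulic preference and niche breadth; the density values and starting parameters are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_preference(fst, peak, optimum, breadth):
    """Expected density as a unimodal function of the FST hemisphere number;
    optimum ~ preferred bed shear stress, breadth ~ hydraulic niche breadth."""
    return peak * np.exp(-0.5 * ((fst - optimum) / breadth) ** 2)

# Hypothetical Surber-sample densities of one taxon along a shear-stress gradient
fst = np.arange(1, 13, dtype=float)              # hemisphere numbers 1..12
density = np.array([2, 5, 14, 30, 44, 38, 22, 12, 6, 3, 1, 0], dtype=float)

params, _ = curve_fit(gaussian_preference, fst, density, p0=[40.0, 5.0, 2.0])
peak, optimum, breadth = params
```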


The effects of different input data and their spatial resolution on the results obtained from a conceptual nutrient emissions model: the River Stör case study

HYDROLOGICAL PROCESSES, Issue 18 2005
Markus Venohr
Abstract This paper focuses on the influences of different data sources, and the variation in spatial resolution of input data and analysis, on the calculated nutrient emissions using the conceptual model MONERIS. MONERIS calculates both nitrogen and phosphorus emissions from point and diffuse sources and the riverine nutrient retention. By subtracting the retention from the emissions, a riverine nutrient load was estimated and compared with the observed nutrient river load. All calculations were conducted for the period from 1991 to 1993. The River Stör, with a catchment area of 1135 km², located in a postglacial lowland landscape in northern Germany, was chosen as a case study area. Two different data sets (e.g. land use, soil type or wastewater treatment plant inventory) were used: a commonly available standard data set (German or European maps) and a more detailed set with a higher spatial resolution derived from several studies at the Ecosystem Research Centre in Kiel. Initially, both data sets were used to apply MONERIS to the total catchment. The results were compared to adapt some of the free model parameters to the conditions in the relatively small lowland river catchment. Using the standard data set, total nutrient emissions of 2320 tons year⁻¹ of nitrogen and 96 tons year⁻¹ of phosphorus were calculated. The detailed data set yielded slightly higher emissions for nitrogen (2420 tons year⁻¹) and for phosphorus (102 tons year⁻¹). Depending on the spatial resolution, the proportion of the area of tile drainages and sandy soils derived from the different data sets varies considerably. This causes great differences in the total nutrient emissions estimated by the two approaches. Comparing the observed and the calculated nutrient loads, reliable results for catchments larger than 50 km², or third-order streams, could be shown. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Southern hemisphere cyclones and anticyclones: recent trends and links with decadal variability in the Pacific Ocean

INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 11 2007
Dr Alexandre Bernardes Pezza
Abstract The aim of this paper is to study the association between the extratropical Southern Hemisphere and the decadal variability in the Pacific Ocean (PO). We discuss a pattern of coherent large-scale anomalies and trends in cyclone and anticyclone behaviour in light of the climate variability in the PO over the ERA40 reanalysis period (1957–2002). The two representative PO indices are the Pacific Decadal and Interdecadal Oscillations (PDO and IPO), and here the PDO is chosen because it is less associated with the southern oscillation index (SOI). Composites of the indicators of the density and intensity of cyclones/anticyclones given by an automatic tracking scheme were calculated for the years when the PDO index was more than one standard deviation above or below its mean. Although the ERA40 is not free from noise and assimilation changes, the results show a large-scale feature, which seems to be robust and agrees with earlier studies using different data sets. The sea-level pressure shows a strong annular structure related to the PDO, which is not seen for the SOI, with lower pressure around Antarctica during the positive phase and vice versa. More intense (and fewer) cyclones and anticyclones were observed during the positive PDO. This is less consistent for the SOI, particularly during the summer when a different PDO/SOI pattern arises at high latitudes. The trends project a pattern coincident with the positive PDO phase and seem to be linked with the main climate shift in the late seventies. Trends observed over the Tasman Sea are consistent with declining winter rainfall over southeastern Australia. Most patterns are statistically significant and seem robust, but random changes in ENSO may play a part, to a certain degree, in modulating the results, and a physical mechanism of causality has not been demonstrated. Although global warming and related changes in the Southern Annular Mode (SAM) may also help explain the observed behaviour, the large-scale response presented here provides a new insight and would be of considerable interest for further modelling studies. Copyright © 2007 Royal Meteorological Society [source]


Capital account liberalization and growth: was Mr. Mahathir right?

INTERNATIONAL JOURNAL OF FINANCE & ECONOMICS, Issue 3 2003
Barry Eichengreen
Abstract Much ink has been spilled over the connections between capital account liberalization and growth. One reason that previous studies have been inconclusive, we show, is their failure to account for the impact of crises on growth and for the capacity of controls to limit those disruptive output effects. Accounting for these influences, it appears that controls influence macroeconomic performance through two channels, directly (what we think of as their positive impact on resource allocation and efficiency) and indirectly (by limiting the disruptive effects of crises at home and abroad). Because these influences work in opposite directions, it is not surprising that previous studies, in failing to distinguish between them, have been unable to agree whether the effect of controls tilts one way or the other. And because vulnerability to crises varies across countries and with the structure and performance of the international financial system, it is not surprising that the effects of capital account liberalization on growth are contingent and context specific. We document these patterns using two entirely different data sets: a panel of historical data for 21 countries covering the period 1880–1997, and a wider panel for the post-1971 period like that employed in other recent studies. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Historical biogeography of Southeast Asia and the West Pacific, or the generality of unrooted area networks as historical biogeographic hypotheses

JOURNAL OF BIOGEOGRAPHY, Issue 2 2003
Peter C. van Welzen
Abstract Aim: Unrooted area networks are perhaps a general way in which different historical biogeographical patterns may be combined. Location: Southeast Asia up to the West Pacific, Australia, South America. Methods: Unrooted area networks based on Primary Brooks Parsimony Analysis of different data sets of Southeast Asian–West Pacific, Australian and South American clades. Results: A large Brooks Parsimony historical (cladistic) biogeographic analysis of Southeast Asia and the West Pacific gave a meaningful result when all clades (representing different historical biogeographic patterns) were united into one matrix and an unrooted area network was produced. This network showed geographically adjacent areas as neighbours, which is interpreted as clades dispersing and speciating as soon as areas rafted towards each other. This pseudo-vicariance mechanism, together with the very limited, mainly linear dispersal possibilities, a few large, widespread clades with many endemic species, and the large overlap in distributions displayed by different patterns, may explain the peculiar result. When applied to examples from other areas (bird data from Australia and South America), unrooted area networks for all data perform very poorly. Main conclusions: Unrooted historical general area networks are not universally applicable. In general, it is better to split historical patterns a priori and analyse them separately. [source]


A model for the species–area–habitat relationship

JOURNAL OF BIOGEOGRAPHY, Issue 1 2003
K. A. Triantis
Abstract Aim: To propose a model (the choros model) for species diversity, which embodies number of species, area and habitat diversity and mathematically unifies the area per se and habitat hypotheses. Location: Species richness patterns from a broad range of insular biotas, from both island and mainland ecosystems, are analysed. Methods: Twenty-two different data sets from seventeen studies were examined in this work. The r² values and Akaike's Information Criterion (AIC) were used in order to compare the quality of fit of the choros model with the Arrhenius species–area model. The classic method of log-log transformation was applied. Results: In twenty of the twenty-two cases studied, the proposed model gave a better fit than the classic species–area model. The values of the z parameter derived from the choros model are generally lower than those derived from the classic species–area equation. Main conclusions: The choros model can express the effects of area and habitat diversity on species richness, unifying the area per se and habitat hypotheses, which as many authors have noted are not mutually exclusive but mutually supplementary. The use of habitat diversity depends on the specific determination of the 'habitat' term, which has to be defined based on the natural history of the taxon studied. Although the values of the z parameter are reduced, they maintain their biological significance as described by many authors in recent decades. The proposed model can also be considered as a stepping-stone in our understanding of the small island effect. [source]
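The functional form of the choros model is not given in the abstract. As background (recalled from the original paper, so treat the exact notation as an assumption), the model replaces area in the Arrhenius power law with the product of area and habitat number:

```latex
S = c\,K^{z}, \qquad K = A \times H
```

where S is species number, A is area, H is the number of habitat types, and c and z are fitted constants; the classic species–area model S = cA^z is recovered when H is constant, which is why the two hypotheses are unified rather than opposed.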


Fast principal component analysis of large data sets based on information extraction

JOURNAL OF CHEMOMETRICS, Issue 11 2002
F. Vogt
Abstract Principal component analysis (PCA) and principal component regression (PCR) are routinely used for calibration of measurement devices and for data evaluation. However, their use is hindered in some applications, e.g. hyperspectral imaging, by excessively large data sets that imply unacceptable calculation times. This paper discusses a fast PCA achieved by a combination of data compression based on a wavelet transformation and a spectrum selection method prior to the PCA itself. The spectrum selection step can also be applied without previous data compression. The calculation speed increase is investigated based on original and compressed data sets, both simulated and measured. Two different data sets are used for assessment of the new approach. One set contains 65,536 synthetically generated spectra at four different noise levels with 256 measurement points each. Compared with the conventional PCA approach, these examples can be accelerated 20 times. Evaluation errors of the fast method were calculated and found to be comparable with those of the conventional approach. Four experimental spectra sets of similar size are also investigated. The novel method outperforms PCA in speed by factors of up to 12, depending on the data set. The principal components obtained by the novel algorithm show the same ability to model the measured spectra as the conventional time-consuming method. The acceleration factors also depend on the possible compression; in particular, if only a small compression is feasible, the acceleration lies purely with the novel spectrum selection step proposed in this paper. Copyright © 2002 John Wiley & Sons, Ltd. [source]
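Neither the wavelet compression nor the spectrum selection step is specified in enough detail here to reproduce. The sketch below (hypothetical parameter choices; PyWavelets and scikit-learn assumed available) only illustrates the general pipeline: compress each spectrum to its coarse wavelet coefficients, select a representative subset of spectra, fit PCA on the reduced matrix, then project all spectra onto the resulting components.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def compress_spectra(spectra, wavelet="db4", level=3):
    """Replace each spectrum by its coarse wavelet approximation coefficients."""
    return np.array([pywt.wavedec(s, wavelet, level=level)[0] for s in spectra])

def select_extreme_spectra(spectra, n_keep):
    """Crude selection step: keep the spectra farthest from the mean spectrum,
    on the assumption that they span most of the variance."""
    d = np.linalg.norm(spectra - spectra.mean(axis=0), axis=1)
    return spectra[np.argsort(d)[-n_keep:]]

rng = np.random.default_rng(1)
spectra = rng.normal(size=(65536, 256))           # e.g. one hyperspectral image cube
compressed = compress_spectra(spectra)            # 256 points -> ~38 coefficients each
subset = select_extreme_spectra(compressed, n_keep=2000)
pca = PCA(n_components=4).fit(subset)             # fast PCA on the reduced data
scores = pca.transform(compressed)                # project all spectra afterwards
```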


Development and psychometric properties of the Individualized Care Scale

JOURNAL OF EVALUATION IN CLINICAL PRACTICE, Issue 1 2005
Riitta Suhonen PhD RN
Abstract Rationale, aims and objectives: In this study we describe the development of the Individualized Care Scale (ICS) and evaluate its validity, psychometric properties and feasibility. The ICS was designed to measure patients' views on how individuality is supported through specific nursing interventions (ICA) and how they perceive individuality in their own care (ICB) during hospitalization. Method: Three different data sets were collected among patients being discharged from hospital (n1 = 203, n2 = 279, n3 = 454). This bipartite 38-item ICS promises to be a brief, timely, easy to administer and useful self-completion measure for evaluating clinical nursing practice from the patient's point of view. Results: The findings supported the internal consistency reliability of the ICS (alpha 0.94 for ICA and 0.93 for ICB) and the three subscales (alphas 0.85–0.90). Item analysis supported the item construction of each scale. Content validity was furthered by a critical literature review and four expert analyses. Principal component analysis (Promax with Kaiser normalization) among earlier factor analyses supported construct validity by generating a three-factor solution which accounted for 65% of the variance in the ICA and 61% in the ICB. Pearson's correlation coefficients were at least 0.88 between the subscales and the total domain ICA or ICB. Conclusions: The ICS has demonstrated promise as a tool for measuring patients' evaluations of their hospital experience and individuality in care. [source]
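The internal-consistency figures quoted are Cronbach's alpha values. For readers who want the computation, a minimal sketch follows; the item scores and the four-item scale are invented and do not correspond to the actual 38-item ICS.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses from 6 patients on 4 items
scores = [[4, 5, 4, 5],
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [2, 3, 2, 3],
          [4, 4, 5, 4],
          [3, 4, 3, 3]]
alpha = cronbach_alpha(scores)
```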


THE RELIABILITY OF NAÏVE ASSESSORS IN SENSORY EVALUATION VISUALIZED BY PRAGMATICAL MULTIVARIATE ANALYSIS

JOURNAL OF FOOD QUALITY, Issue 5 2002
M.G. O'SULLIVAN
The first part of this paper demonstrates a simple graphical way to visualize estimated variances, in terms of a plot of the total initial variance ("SIGNAL") versus residual variance ("NOISE"), as a pragmatic alternative to tables of F-tests. The recently developed Procrustes rotation in the bilinear "jack-knifing" form is then presented as a method for simplifying the comparison of PLS Regression models from different data sets. These methods are applied to sensory data in order to study if naïve (untrained) sensory panelists can produce reliable descriptions of systematic differences between various test meals. The results confirm that three panels of 15 naïve assessors each could give repeatable intersubjective description of the most dominant sensory variation dimensions. [source]
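A minimal sketch of the SIGNAL-versus-NOISE plot described in the first part of the paper: each sensory attribute is plotted by its total initial variance against its residual variance after modelling, so attributes falling well below the 1:1 line carry reproducible signal. The attribute names and variance values below are invented for illustration.

```python
import matplotlib.pyplot as plt

# Hypothetical per-attribute variances for one sensory panel
attributes = ["sweet", "salty", "juicy", "tough", "off-flavour"]
signal = [2.1, 1.4, 3.0, 2.6, 0.9]   # total initial variance ("SIGNAL")
noise = [0.6, 0.9, 0.8, 1.1, 0.8]    # residual variance ("NOISE")

fig, ax = plt.subplots()
ax.scatter(signal, noise)
for name, x, y in zip(attributes, signal, noise):
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(4, 4))
lim = max(max(signal), max(noise)) * 1.1
ax.plot([0, lim], [0, lim], linestyle="--")   # attributes below this line carry real signal
ax.set_xlabel("Total initial variance (SIGNAL)")
ax.set_ylabel("Residual variance (NOISE)")
plt.show()
```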


Productivity in Malagasy rice systems: wealth-differentiated constraints and priorities

AGRICULTURAL ECONOMICS, Issue 2007
Bart Minten
Keywords: rice productivity; poverty; technology adoption; Madagascar
Abstract This study explores the constraints on agricultural productivity and the priorities for boosting productivity in rice, the main staple in Madagascar, using a range of different data sets and analytical methods, integrating qualitative assessments by farmers with quantitative evidence from panel data production function analysis and willingness-to-pay estimates for chemical fertilizer. Nationwide, farmers seek primarily labor productivity enhancing interventions, e.g., improved access to agricultural equipment, cattle, and irrigation. Shock mitigation measures, land productivity increasing technologies, and improved land tenure are reported to be much less important. Research and interventions aimed at reducing costs and price volatility within the fertilizer supply chain might help at least the more accessible regions to more readily adopt chemical fertilizer. [source]


EFFECT OF TAXON SAMPLING, CHARACTER WEIGHTING, AND COMBINED DATA ON THE INTERPRETATION OF RELATIONSHIPS AMONG THE HETEROKONT ALGAE

JOURNAL OF PHYCOLOGY, Issue 2 2003
Leslie R. Goertzen
Nuclear ribosomal small subunit and chloroplast rbcL sequence data for heterokont algae and potential outgroup taxa were analyzed separately and together using maximum parsimony. A series of taxon sampling and character weighting experiments was performed. Traditional classes (e.g. diatoms, Phaeophyceae, etc.) were monophyletic in most analyses of either data set and in analyses of combined data. Relationships among classes and of heterokont algae to outgroup taxa were sensitive to taxon sampling. Bootstrap (BS) values were not always predictive of stability of nodes in taxon sampling experiments or between analyses of different data sets. Reweighting sites by the rescaled consistency index artificially inflates BS values in the analysis of rbcL data. Inclusion of the third codon position from rbcL enhanced signal despite the superficial appearance of mutational saturation. Incongruence between data sets was largely due to placement of a few problematic taxa, and so data were combined. BS values for the combined analysis were much higher than for analyses of each data set alone, although combining data did not improve support for heterokont monophyly. [source]


Anomalous variations in low-degree helioseismic mode frequencies

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 2 2006
R. Howe
ABSTRACT We compare changes in the frequencies of solar acoustic modes with degree between 0 and 2, as derived from Global Oscillation Network Group (GONG), Birmingham Solar Oscillations Network (BiSON) and Michelson Doppler Imager (MDI) spectra obtained between 1995 and 2003. We find that, after the solar-activity dependence has been removed from the frequencies, there remain variations that appear to be significant, and are often well correlated between the different data sets. We consider possible explanations for these fluctuations, and conclude that they are likely to be related to the stochastic excitation of the modes. The existence of such fluctuations has possible relevance to the analysis of other low-degree acoustic mode spectra such as those from solar-type stars. [source]


Bayesian estimation of cognitive decline in patients with Alzheimer's disease

THE CANADIAN JOURNAL OF STATISTICS, Issue 1 2002
Patrick Béalisle
Abstract Recently, there has been great interest in estimating the decline in cognitive ability in patients with Alzheimer's disease. Measuring decline is not straightforward, since one must consider the choice of scale to measure cognitive ability, possible floor and ceiling effects, between-patient variability, and the unobserved age of onset. The authors demonstrate how to account for the above features by modeling decline in scores on the Mini-Mental State Exam in two different data sets. To this end, they use hierarchical Bayesian models with change points, for which posterior distributions are calculated using the Gibbs sampler. They make comparisons between several such models using both prior and posterior Bayes factors, and compare the results from the models suggested by these two model selection criteria. [source]
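The hierarchical change-point models and their Gibbs sampler are too involved to reproduce from the abstract. The toy sketch below (one patient, known noise standard deviation, a grid posterior rather than a Gibbs sampler, all numbers invented) only illustrates the basic ingredient: an MMSE trajectory that is roughly flat before an unknown change point and declines linearly afterwards.

```python
import numpy as np

def piecewise_mean(t, baseline, slope, tau):
    """Flat at `baseline` before the change point tau, linear decline afterwards."""
    return baseline - slope * np.clip(t - tau, 0.0, None)

def changepoint_posterior(t, mmse, sigma, taus, baseline, slope):
    """Unnormalized posterior over candidate change points (flat prior on tau)."""
    logp = np.array([
        -0.5 * np.sum((mmse - piecewise_mean(t, baseline, slope, tau)) ** 2) / sigma**2
        for tau in taus
    ])
    p = np.exp(logp - logp.max())
    return p / p.sum()

# Hypothetical yearly MMSE scores for one patient (decline starting near year 3)
t = np.arange(8, dtype=float)
mmse = np.array([29, 28, 29, 28, 25, 22, 19, 16], dtype=float)
taus = np.linspace(0, 7, 71)
post = changepoint_posterior(t, mmse, sigma=1.5, taus=taus, baseline=28.5, slope=3.0)
tau_hat = taus[np.argmax(post)]
```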


Integrating geo-referenced multiscale and multidisciplinary data for the management of biodiversity in livestock genetic resources

ANIMAL GENETICS, Issue 2010
S. Joost
Summary In livestock genetic resource conservation, decision making about conservation priorities is based on the simultaneous analysis of several different criteria that may contribute to long-term sustainable breeding conditions, such as genetic and demographic characteristics, environmental conditions, and the role of the breed in the local or regional economy. Here we address methods to integrate different data sets and highlight problems related to interdisciplinary comparisons. Data integration is based on the use of geographic coordinates and Geographic Information Systems (GIS). In addition to technical problems related to projection systems, GIS have to face the challenging issue of the non-homogeneous scales of their data sets. We give examples of the successful use of GIS for data integration and examine the risk of obtaining biased results when integrating data sets that have been captured at different scales. [source]


Hovenkamp's ostracized vicariance analysis: testing new methods of historical biogeography

CLADISTICS, Issue 4 2008
Simone Fattorini
All methods currently employed in cladistic biogeography usually give contrasting results and are theoretically disputed. In two overlooked papers, Hovenkamp (1997, 2001) strongly criticized methods currently used by biogeographers and proposed two other methods. However, his criticisms have remained unanswered and his methods rarely applied. I used three different data sets to show the superiority of Hovenkamp's methods. Neither of the methods proposed by Hovenkamp suffers from the unrealistic assumptions that underlie other methods commonly used in cladistic biogeography. The method proposed in 2001 is more powerful than the previous method published in 1997, because it does not use a priori assumptions about the areas involved. However, the method proposed in 1997 may be a valid alternative for large data sets. © The Willi Hennig Society 2007. [source]


Morphology versus molecules: resolution of the positions of Nymphalis, Polygonia, and related genera (Lepidoptera: Nymphalidae)

CLADISTICS, Issue 3 2003
Niklas Wahlberg
The debate on whether to combine different data sets for simultaneous analysis has continued to the present day unabated. We have studied the effects of combining one morphological data set with four molecular data sets (two mitochondrial gene sequences and two nuclear gene sequences) for a group of butterflies belonging to the tribe Nymphalini using partitioned Bremer support. We particularly focus our attention on a group of species belonging to the genera Aglais, Inachis, Roddia, Nymphalis, Kaniska, and Polygonia. We find that, despite significant incongruence between most data partitions, all data partitions contribute positively to the support of most nodes in the most parsimonious trees found for the combined data set. We also find that the morphological data set resolves one particular node (Kaniska basal to Polygonia) with good support, while the molecular data sets are ambiguous about the existence of this node. We suggest that partitioned Bremer support allows one to critically appraise the robustness of each node in a given tree and thereby identify nodes that may change with the addition of new data and nodes that are likely to remain unchanged with new data. We also suggest that morphological data are still crucial to our being able to understand the relationships of extant organisms, despite published views to the contrary. Based on our results we suggest that Inachis should be synonymized with Aglais, Roddia with Nymphalis, and Kaniska with Polygonia. [source]
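Partitioned Bremer support is used here without definition. For orientation (recalled from the partitioned Bremer support literature rather than from this paper), the support contributed by data partition i to a node is the difference between that partition's tree length on the shortest trees lacking the node and its length on the most parsimonious trees, and the partition values sum to the ordinary Bremer support:

```latex
\mathrm{PBS}_{i,\,\mathrm{node}} = L_i(\text{best trees without the node}) - L_i(\text{MPTs}),
\qquad
\sum_i \mathrm{PBS}_{i,\,\mathrm{node}} = \text{Bremer support of the node}
```

A negative value indicates that a partition conflicts with a node even though the combined data support it.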


Resolving the paradox of the active user: stable suboptimal performance in interactive tasks

COGNITIVE SCIENCE - A MULTIDISCIPLINARY JOURNAL, Issue 6 2004
Wai-Tat Fu
Abstract This paper brings the intellectual tools of cognitive science to bear on resolving the "paradox of the active user" [Interfacing Thought: Cognitive Aspects of Human–Computer Interaction, MIT Press, Cambridge, MA, USA]: the persistent use of inefficient procedures in interactive tasks by experienced or even expert users when demonstrably more efficient procedures exist. The goal of this paper is to understand the roots of this paradox by finding regularities in these inefficient procedures. We examine three very different data sets. For each data set, we first satisfy ourselves that the preferred procedures used by some subjects are indeed less efficient than the recommended procedures. We then amass evidence, for each set, and conclude that when a preferred procedure is used instead of a more efficient, recommended procedure, the preferred procedure tends to have two major characteristics: (1) the preferred procedure is a well-practiced, generic procedure that is applicable either within the same task environment in different contexts or across different task environments, and (2) the preferred procedure is composed of interactive components that bring fast, incremental feedback on the external problem states. The support amassed for these characteristics leads to a new understanding of the paradox. In interactive tasks, people are biased towards the use of general procedures that start with interactive actions. These actions require much less cognitive effort as each action results in an immediate change to the external display that, in turn, cues the next action. Unfortunately for the users, the bias to use interactive unit tasks leads to a path that requires more effort in the long run. Our data suggest that interactive behavior is composed of a series of distributed choices; that is, people seldom make a once-and-for-all decision on procedures. This series of biased selection of interactive unit tasks often leads to a stable suboptimal level of performance. [source]