Desirable
Terms modified by Desirable: Selected Abstracts

When is Commuting Desirable to the Individual?
GROWTH AND CHANGE, Issue 3 2004. David T. Ory.
Commuting is popularly viewed as a stressful, costly, time-wasting experience from the individual perspective, with the attendant congestion imposing major social costs as well. However, several authors have noted that commuting can also offer benefits to the individual, serving as a valued transition between the home and work realms of personal life. Using survey data collected from about 1,300 commuting workers in three San Francisco Bay Area neighborhoods, empirical models are developed for four key variables measured for commute travel, namely: Objective Mobility, Subjective Mobility, Travel Liking, and Relative Desired Mobility. Explanatory variables include measures of general travel-related attitudes, personality traits, lifestyle priorities, and sociodemographic characteristics. Both descriptive statistics and analytical models indicate that commuting is not the unmitigated burden that it is widely perceived to be. About half of the sample were relatively satisfied with the amount they commute, with a small segment actually wanting to increase that amount. Both the psychological impact of commuting and the amounts people want to commute relative to what they are doing now are strongly influenced by their liking for commuting. An implication for policy is that some people may be more resistant than expected toward approaches intended to induce reductions in commuting (including, for example, telecommuting). New creativity may be needed to devise policies that recognize the inherent positive utility of travel, while trying to find socially beneficial ways to fulfill desires to maintain or increase travel. [source]

Maintaining Case-Based Reasoners: Dimensions and Directions
COMPUTATIONAL INTELLIGENCE, Issue 2 2001. David C. Wilson.
Experience with the growing number of large-scale and long-term case-based reasoning (CBR) applications has led to increasing recognition of the importance of maintaining existing CBR systems. Recent research has focused on case-base maintenance (CBM), addressing such issues as maintaining consistency, preserving competence, and controlling case-base growth. A set of dimensions for case-base maintenance, proposed by Leake and Wilson, provides a framework for understanding and expanding CBM research. However, it has also been recognized that other knowledge containers can be equally important maintenance targets. Multiple researchers have addressed pieces of this more general maintenance problem, considering such issues as how to refine similarity criteria and adaptation knowledge. As with case-base maintenance, a framework of dimensions for characterizing more general maintenance activity, within and across knowledge containers, is desirable to unify and understand the state of the art, as well as to suggest new avenues of exploration by identifying points along the dimensions that have not yet been studied. This article presents such a framework by (1) refining and updating the earlier framework of dimensions for case-base maintenance, (2) applying the refined dimensions to the entire range of knowledge containers, and (3) extending the theory to include coordinated cross-container maintenance. The result is a framework for understanding the general problem of case-based reasoner maintenance (CBRM). Taking the new framework as a starting point, the article explores key issues for future CBRM research. [source]
Augmented reality agents for user interface adaptation
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2008. István Barakonyi.
Most augmented reality (AR) applications are primarily concerned with letting a user browse a 3D virtual world registered with the real world. More advanced AR interfaces let the user interact with the mixed environment, but the virtual part is typically rather finite and deterministic. In contrast, autonomous behavior is often desirable in ubiquitous computing (Ubicomp), which requires the computers embedded into the environment to adapt to context and situation without explicit user intervention. We present an AR framework that is enhanced by typical Ubicomp features by dynamically and proactively exploiting previously unknown applications and hardware devices, and adapting the appearance of the user interface to persistently stored and accumulated user preferences. Our framework explores proactive computing, multi-user interface adaptation, and user interface migration. We employ mobile and autonomous agents embodied by real and virtual objects as an interface and interaction metaphor, where agent bodies are able to opportunistically migrate between multiple AR applications and computing platforms to best match the needs of the current application context. We present two pilot applications to illustrate design concepts. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Facilitating process control teaching and learning in a virtual laboratory environment
COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 2 2002. T. Murphy.
The rapid pace of technological developments and the high cost of engineering equipment pose several challenges to traditional modes of engineering education. Innovations in education are desirable. In particular, education on practical aspects of engineering and personnel training can be enhanced through the use of virtual laboratories. Such educative experiences allow a student to better understand the theoretical aspects of the discipline in addition to its integration with practical knowledge. In this work, the development, set-up and application of a virtual twin heat exchanger plant is described. The philosophy and methodology of our approach are described, including the implementation details and our experience in using it. The effectiveness of the platform in educating students and in training industrial personnel is described. © 2002 Wiley Periodicals, Inc. Comput Appl Eng Educ 10: 79–87, 2002; published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.10011 [source]

An Optimizing Compiler for Automatic Shader Bounding
COMPUTER GRAPHICS FORUM, Issue 4 2010. Petrik Clarberg.
Programmable shading provides artistic control over materials and geometry, but the black-box nature of shaders makes some rendering optimizations difficult to apply. In many cases, it is desirable to compute bounds of shaders in order to speed up rendering. A bounding shader can be automatically derived from the original shader by a compiler using interval analysis, but creating optimized interval arithmetic code is non-trivial. A key insight in this paper is that shaders contain metadata that can be automatically extracted by the compiler using data flow analysis. We present a number of domain-specific optimizations that make the generated code faster, while computing the same bounds as before. This enables a wider use and opens up possibilities for more efficient rendering. Our results show that on average 42–44% of the shader instructions can be eliminated for a common use case: single-sided bounding shaders used in lightcuts and importance sampling. [source]
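The interval-analysis idea in the shader-bounding abstract above can be made concrete with a toy sketch. This is a minimal Python illustration under assumed semantics, not the authors' compiler: the `Interval` class and the example shader are hypothetical, and a real system would also handle subtraction, division, transcendentals, and the dependency problem that can make naive interval bounds conservative.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

def shade(u):
    # Toy "shader": f(u) = u*u + u. With floats this is a point sample;
    # with Intervals the same code returns guaranteed bounds over a whole
    # input range, which is what a derived bounding shader computes.
    return u * u + u

print(shade(0.5))                 # 0.75, a point evaluation
print(shade(Interval(0.0, 1.0)))  # Interval(lo=0.0, hi=2.0), bounds over [0, 1]
```

A renderer can use such conservative bounds to cull or prioritize shading work, which is how bounding shaders support applications like lightcuts and importance sampling.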
Calculation of Posterior Probabilities for Bayesian Model Class Assessment and Averaging from Posterior Samples Based on Dynamic System Data
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2010. Sai Hung Cheung.
Because of modeling uncertainty, a set of competing candidate model classes may be available to represent a system, and it is then desirable to assess the plausibility of each model class based on system data. Bayesian model class assessment may then be used, which is based on the posterior probability of the different candidates for representing the system. If more than one model class has significant posterior probability, then Bayesian model class averaging provides a coherent mechanism to incorporate all of these model classes in making probabilistic predictions for the system response. This Bayesian model assessment and averaging requires calculation of the evidence for each model class based on the system data, which requires the evaluation of a multi-dimensional integral involving the product of the likelihood and prior defined by the model class. In this article, a general method for calculating the evidence is proposed based on using posterior samples from any Markov chain Monte Carlo algorithm. The effectiveness of the proposed method is illustrated by Bayesian model updating and assessment using simulated earthquake data from a ten-story nonclassically damped building responding linearly and a four-story building responding inelastically. [source]
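In standard Bayesian notation (not the article's own symbols), the posterior probability of a model class M_j given data D, the evidence integral it requires, and the model-averaged prediction are:

```latex
P(M_j \mid D) = \frac{p(D \mid M_j)\, P(M_j)}{\sum_k p(D \mid M_k)\, P(M_k)},
\qquad
p(D \mid M_j) = \int p(D \mid \theta_j, M_j)\, p(\theta_j \mid M_j)\, d\theta_j ,
\qquad
p(x \mid D) = \sum_j p(x \mid D, M_j)\, P(M_j \mid D).
```

The article's contribution is a general way of estimating the evidence p(D | M_j), the hard multi-dimensional integral above, from the posterior samples that a Markov chain Monte Carlo run already produces.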
A 3-D Graphical Database System for Landfill Operations Using GPS
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2002. H. Ping Tserng.
Landfill space is an important commodity for landfill companies. It is desirable to develop an efficient tool to assist space management and monitor space consumption. When recyclable wastes or particular waste materials need to be retrieved from the landfill site, the excavation operations become more difficult without an efficient tool to provide waste information (i.e., location and type). In this paper, a methodology and several algorithms are proposed to develop a 3-D graphical database system (GDS) for landfill operations. A 3-D GDS not only monitors the space consumption of a landfill site, but can also provide exact locations and types of compacted waste that would later benefit the landfill excavation operations or recycling programs after the waste is covered. [source]

Effect of redundancy on the mean time to failure of wireless sensor networks
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 8 2007. Anh Phan Speer.
In data-driven wireless sensor networks (WSNs), the system must perform data sensing and retrieval and possibly aggregate data as a response at runtime. As a WSN is often deployed unattended in areas where replacements of failed sensors are difficult, energy conservation is of primary concern. While the use of redundancy is desirable in terms of satisfying user queries to cope with sensor and transmission faults, it may adversely shorten the lifetime of the WSN, as more sensor nodes will have to be used to answer queries, causing the energy of the system to drain quickly. In this paper, we analyze the effect of redundancy on the mean time to failure (MTTF) of a WSN in terms of the number of queries the system is able to answer correctly before it fails due to either sensor/transmission faults or energy depletion. In particular, we analyze the effect of redundancy on the MTTF of cluster-structured WSNs designed for energy conservation. We show that a tradeoff exists between redundancy and MTTF. Furthermore, an optimal redundancy level exists such that the MTTF of the system is maximized. Copyright © 2007 John Wiley & Sons, Ltd. [source]
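The tradeoff described in this abstract can be seen in a deliberately crude toy model; the formulas and numbers below are hypothetical illustrations, not the paper's analysis. Redundancy m raises the chance that a query is answered correctly, but multiplies the energy spent per query.

```python
import math

def mttf(m, p_fail=0.1, energy=1000.0, cost=1.0):
    """Toy MTTF in 'queries answered': the system dies either when its
    energy budget is exhausted (m sensors drained per query) or at the
    first incorrectly answered query (all m redundant readings faulty)."""
    p_ok = 1.0 - p_fail ** m                  # query ok if any reading is good
    by_energy = energy / (m * cost)           # queries until energy depletion
    by_fault = 1.0 / (1.0 - p_ok) if p_ok < 1.0 else math.inf
    return min(by_energy, by_fault)

for m in range(1, 7):
    print(m, round(mttf(m), 1))
```

With these toy parameters the MTTF is 10, 100, 333.3, 250, 200 and 166.7 queries for m = 1 through 6: small m fails early on faults, large m drains energy early, and an interior optimum (here m = 3) emerges, exactly the qualitative behavior the abstract describes.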
Adding tuples to Java: a study in lightweight data structures
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 5-6 2005. C. van Reeuwijk.
Java classes are very flexible, but this comes at a price. The main cost is that every class instance must be dynamically allocated. Also, their access by reference introduces pointer dereferences and complicates program analysis. These costs are particularly burdensome for small, ubiquitous data structures such as coordinates and state vectors. For such data structures a lightweight representation is desirable, allowing such data to be handled directly, similar to primitive types. A number of proposals introduce restricted or mutated variants of standard Java classes that could serve as a lightweight representation, but the impact of these proposals has never been studied. Since we have implemented a Java compiler with lightweight data structures, we are in a good position to do this evaluation. Our lightweight data structures are tuples. As we will show, using tuples can result in significant performance gains: for a number of existing benchmark programs we gain more than 50% in performance relative to our own compiler, and more than 20% relative to Sun's HotSpot 1.4 compiler. We expect similar performance gains for other implementations of lightweight data structures. With respect to the expressiveness of Java, lightweight variants of standard Java classes have little impact. In contrast, tuples add a different language construct that, as we will show, can lead to substantially more concise program code. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Efficient communication using message prediction for clusters of multiprocessors
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2002. Ahmad Afsahi.
With the increasing uniprocessor and symmetric multiprocessor computational power available today, interprocessor communication has become an important factor that limits the performance of clusters of workstations/multiprocessors. Many factors, including communication hardware overhead, communication software overhead, and the user environment overhead (multithreading, multiuser), affect the performance of the communication subsystems in such systems. A significant portion of the software communication overhead belongs to a number of message copying operations. Ideally, it is desirable to have a true zero-copy protocol where the message is moved directly from the send buffer in its user space to the receive buffer in the destination without any intermediate buffering. However, because message-passing applications at the send side do not know the final receive buffer addresses, early-arrival messages have to be buffered in a temporary area. In this paper, we show that there is a message reception communication locality in message-passing applications. We have utilized this communication locality and devised different message predictors at the receiver sides of communications. In essence, these message predictors can be efficiently used to drain the network and cache the incoming messages even if the corresponding receive calls have not yet been posted. The performance of these predictors, in terms of hit ratio, on some parallel applications is quite promising and suggests that prediction has the potential to eliminate most of the remaining message copies. We also show that the proposed predictors are not sensitive to the starting message reception call, and that they perform better than (or at least as well as) our previously proposed predictors. Copyright © 2002 John Wiley & Sons, Ltd. [source]
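The flavor of a receive-side predictor can be sketched with a small frequency table. This is a generic Markov-style guesser written for illustration, not one of the authors' specific heuristics: it predicts the next message's sender from the previous one, which is the kind of guess a runtime could use to pre-post a matching receive buffer and avoid an intermediate copy.

```python
from collections import defaultdict

def hit_ratio(trace):
    """Predict each message's sender from the previous sender using
    per-predecessor frequency counts; return the fraction of correct guesses."""
    counts = defaultdict(lambda: defaultdict(int))  # prev sender -> next -> count
    prev, hits, tries = None, 0, 0
    for src in trace:
        if prev is not None:
            seen = counts[prev]
            if seen:                                # predict most frequent follower
                hits += max(seen, key=seen.get) == src
                tries += 1
            seen[src] += 1
        prev = src
    return hits / tries if tries else 0.0

# Iterative parallel codes often repeat the same communication pattern each step.
trace = [1, 3, 2, 0] * 100
print(f"hit ratio: {hit_ratio(trace):.2f}")   # approaches 1.0 on a periodic trace
```

On traces with the periodic communication patterns typical of iterative parallel codes, the hit ratio of even this simple predictor approaches 1, consistent with the communication locality the abstract reports.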
The familiar "polarization drift density" in the gyrocenter Poisson equation is replaced by a more general expression. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source] The Disclosure of UK Boardroom Pay: the March 2001 DTI proposalsCORPORATE GOVERNANCE, Issue 4 2001Martin J. Conyon In March 2001 the government announced that new disclosure rules relating to UK boardroom pay would be introduced. This paper critically evaluates these proposals. The new proposals emerged from the government's Directors Remuneration consultative document issued in July 1999. The current paper makes the following contributions to the governance literature. First, the new disclosure proposals are reviewed. I suggest that they are incomplete both in their detail and scope. I also suggest that the government has conceded that more US style executive compensation disclosure is required. Second, I describe US executive compensation disclosure practices. If convergence in disclosure practice is potentially desirable then a more systematic comparison and analysis of current disclosure policies in the two economies is warranted. [source] Guidelines for improving the reproducibility of quantitative multiparameter immunofluorescence measurements by laser scanning cytometry on fixed cell suspensions from human solid tumorsCYTOMETRY, Issue 1 2006Stanley Shackney Abstract Background: Laser scanning Cytometry (LSC) is a versatile technology that makes it possible to perform multiple measurements on individual cells and correlate them cell by cell with other cellular features. It would be highly desirable to be able to perform reproducible, quantitative, correlated cell-based immunofluorescence studies on individual cells from human solid tumors. However, such studies can be challenging because of the presence of large numbers of cell aggregates and other confounding factors. Techniques have been developed to deal with cell aggregates in data sets collected by LSC. Experience has also been gained in addressing other key technical and methodological issues that can affect the reproducibility of such cell-based immunofluorescence measurements. Methods and results: We describe practical aspects of cell sample collection, cell fixation and staining, protocols for performing multiparameter immunofluorescence measurements by LSC, use of controls and reference samples, and approaches to data analysis that we have found useful in improving the accuracy and reproducibility of LSC data obtained in human tumor samples. We provide examples of the potential advantages of LSC in examining quantitative aspects of cell-based analysis. Improvements in the quality of cell-based multiparameter immunofluorescence measurements make it possible to extract useful information from relatively small numbers of cells. This, in turn, permits the performance of multiple multicolor panels on each tumor sample. With links among the different panels that are provided by overlapping measurements, it is possible to develop increasingly more extensive profiles of intracellular expression of multiple proteins in clinical samples of human solid tumors. Examples of such linked panels of measurements are provided. Conclusions: Advances in methodology can improve cell-based multiparameter immunofluorescence measurements on cell suspensions from human solid tumors by LSC for use in prognostic and predictive clinical applications. © 2005 Wiley-Liss, Inc. 
Neuropharmacological basis of combining antidepressants
ACTA PSYCHIATRICA SCANDINAVICA, Issue 2005. J. de la Gándara.
Objective: To review the neuropharmacological basis of antidepressant combination therapy. Method: Literature searches and other relevant material were obtained and reviewed. Results: The overall clinical aim of combining antidepressants is to increase efficacy whilst minimizing side effects. Although such prescriptions are frequently based on previous experience and knowledge, a sound neuropharmacological basis to support these combinations is desirable. When combining antidepressants, it is important to combine mechanisms of action, rather than simply one drug with another, and to aim for synergistic effects. The possibilities of combining mechanisms of action should also be exploited to the full if necessary, and the potential exists for combining two independent actions that have synergistic effects on the serotonergic, noradrenergic and even the dopaminergic systems. Conclusion: Unfortunately, there are still insufficient data to categorically justify choosing one or other combination based only on the neuropharmacological evidence. [source]

Characterizing the ADHD phenotype for genetic studies
DEVELOPMENTAL SCIENCE, Issue 2 2005. Jim Stevenson.
The genetic study of ADHD has made considerable progress. Further developments in the field will rely in part on identifying the most appropriate phenotypes for genetic analysis. The use of both categorical and dimensional measures of symptoms related to ADHD has been productive. The use of multiple reporters is a valuable feature of the characterization of psychopathology in children. It is argued that the use of aggregated measures to characterize the ADHD phenotype, particularly to establish its pervasiveness, is desirable. The recognition of the multiple comorbidities of ADHD can help to isolate more specific genetic influences. In relation to both reading disability and conduct disorder there is evidence that genes may be involved in the comorbid condition that are different from those in pure ADHD. To date, progress with the investigation of endophenotypes for ADHD has been disappointing. It is suggested that extending such studies beyond cognitive underpinnings to include physiological and metabolic markers might facilitate progress. [source]

Effects of sulfonylureas on mitochondrial ATP-sensitive K+ channels in cardiac myocytes: implications for sulfonylurea controversy
DIABETES/METABOLISM: RESEARCH AND REVIEWS, Issue 5 2006. Toshiaki Sato.
Background: The mitochondrial ATP-sensitive K+ (mitoKATP) channel plays a key role in cardioprotection. Hence, a sulfonylurea that does not block mitoKATP channels would be desirable to avoid damage to the heart. Accordingly, we examined the effects of sulfonylureas on the mitoKATP channel and mitochondrial Ca2+ overload. Methods: Flavoprotein fluorescence in rabbit ventricular myocytes was measured to assay mitoKATP channel activity. The mitochondrial Ca2+ concentration was measured by loading cells with rhod-2. Results: The mitoKATP channel opener diazoxide (100 µM) reversibly increased flavoprotein oxidation to 31.8 ± 4.3% (n = 5) of the maximum value induced by 2,4-dinitrophenol. Glimepiride (10 µM) alone did not oxidize the flavoprotein, and the oxidative effect of diazoxide was unaffected by glimepiride (35.4 ± 3.2%, n = 5). Similarly, the diazoxide-induced flavoprotein oxidation was unaffected both by gliclazide (10 µM) and by tolbutamide (100 µM).
Exposure to ouabain (1 mM) for 30 min produced mitochondrial Ca2+ overload, and the intensity of rhod-2 fluorescence increased to 197.4 ± 7.2% of baseline (n = 11). Treatment with diazoxide significantly reduced the ouabain-induced mitochondrial Ca2+ overload (149.6 ± 5.1%, n = 11, p < 0.05 versus ouabain alone), and the effect was antagonized by the mitoKATP channel blocker 5-hydroxydecanoate (189.8 ± 27.8%, n = 5) and by glibenclamide (193.1 ± 7.7%, n = 8). By contrast, the cardioprotective effect of diazoxide was not abolished by glimepiride (141.8 ± 7.8%, n = 6), gliclazide (139.0 ± 9.4%, n = 5), or tolbutamide (141.1 ± 4.5%, n = 7). Conclusions: Our results indicate that glimepiride, gliclazide, and tolbutamide have no effect on the mitoKATP channel and do not abolish the cardioprotective effects of diazoxide. Therefore, these sulfonylureas, unlike glibenclamide, do not interfere with the cellular pathways that confer cardioprotection. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Nutrition in patients with Type 2 diabetes: are low-carbohydrate diets effective, safe or desirable?
DIABETIC MEDICINE, Issue 7 2005. R. L. Kennedy.
Low-carbohydrate diets have been around for over 100 years. They have become very popular recently, but the scientific basis for their use remains to be fully established. This article reviews the recent trials that have been published and also what is known about the effects of low-carbohydrate, high-protein diets on energy expenditure and body composition. Although many controversies remain, there is now mounting evidence that these diets can lead to effective weight loss and may thus be a useful intervention for patients who have, or are at risk of, diabetes. The practical aspects of using these diets as a short- to medium-term intervention are discussed. [source]

Discerning the Spirits, Practicing the Faiths: Why Be Lutheran?
DIALOG, Issue 1 2002. Martha Stortz.
Some spiritual wanderers today are "unstuck" in their faith and therefore have many faiths, while others are securely "stuck" in their tradition. Getting stuck is desirable, and the path is through spiritual practice. One's inner life and even perception of reality become transformed through daily habits such as prayer, worship, and discipline. The Lutheran insight that a practicing Christian is simultaneously saint and sinner offers comfort and honest self-understanding. [source]

ENDOSCOPIC MUCOSAL RESECTION AND SUBMUCOSAL DISSECTION METHOD FOR LARGE COLORECTAL TUMORS
DIGESTIVE ENDOSCOPY, Issue 2004. Yasushi Sano.
The goal of endoscopic mucosal resection (EMR) is to allow the endoscopist to obtain tissue or resect lesions not previously amenable to standard biopsy or excisional techniques, and to remove malignant lesions without open surgery. In this article, we describe the results of conventional EMR and EMR using an insulation-tipped (IT) electrosurgical knife (the submucosal dissection method) for large colorectal mucosal neoplasms, and discuss the problems and future prospects of these procedures. At present, conventional EMR is much more feasible than EMR using the IT-knife from the perspectives of time, money, complications, and organ preservation. However, larger lesions tend to be resected in a piecemeal fashion, and it is difficult to confirm whether EMR has been complete. For accurate histopathological assessment of the resected specimen, en bloc EMR is desirable, although further experience is needed to establish its safety and efficacy.
Further improvements in EMR with special knife techniques are required to simply and safely remove large colorectal neoplasms. [source]

East Timor Emerging from Conflict: The Role of Local NGOs and International Assistance
DISASTERS, Issue 1 2001. Ian Patrick.
International assistance efforts have represented a conundrum for East Timorese seeking to assert their new independence and autonomy. While urgent needs have been met, local participation, involvement and capacity building have not been given adequate attention. This outcome is aptly demonstrated in the case of local non-government organisations (LNGOs). This paper specifically examines the role of LNGOs in the recovery of East Timor within the international assistance programme. It examines the challenges of rehabilitation efforts in East Timor with a particular focus on capacity building of East Timorese NGOs as part of a broader effort to strengthen civil society. The initial crisis response in East Timor highlighted tension between meeting immediate needs while simultaneously incorporating civil society actors such as NGOs and communities. It has been argued that local NGOs and the community at large were not sufficiently incorporated into the process. While it is acknowledged that many local NGOs had limited capacity to respond, a greater emphasis on collaboration, inclusion and capacity building was desirable, with a view to supporting medium- and longer-term objectives that promote a vibrant civil society, sustainability and self-management. [source]

Integrating DNA data and traditional taxonomy to streamline biodiversity assessment: an example from edaphic beetles in the Klamath ecoregion, California, USA
DIVERSITY AND DISTRIBUTIONS, Issue 5 2006. Ryan M. Caesar.
Conservation and land management decisions may be misguided by inaccurate or misinterpreted knowledge of biodiversity. Non-systematists often lack the taxonomic expertise necessary for an accurate assessment of biodiversity. Additionally, there are far too few taxonomists to contribute significantly to the task of identifying species for specimens collected in biodiversity studies. While species-level identification is desirable for making informed management decisions concerning biodiversity, little progress has been made to reduce this taxonomic deficiency. Involvement of non-systematists in the identification process could hasten species identification. Incorporation of DNA sequence data has been recognized as one way to enhance biodiversity assessment and species identification. DNA data are now technologically and economically feasible for most scientists to apply in biodiversity studies. However, their use is not widespread and the means of their application have not been extensively addressed. This paper illustrates how such data can be used to hasten biodiversity assessment of species using a little-known group of edaphic beetles. Partial mitochondrial cytochrome oxidase I was sequenced for 171 individuals of feather-wing beetles (Coleoptera: Ptiliidae) from the Klamath ecoregion, which is part of a biodiversity hotspot, the California Floristic Province. A phylogram of these data was reconstructed via parsimony, and the strict consensus of 28,000 equally parsimonious trees was well resolved except for peripheral nodes. Forty-two voucher specimens were selected for further identification from clades that were associated with many synonymous and non-synonymous nucleotide changes.
A ptiliid taxonomic expert identified nine species that corresponded to monophyletic groups. These results allowed for a more accurate assessment of ptiliid species diversity in the Klamath ecoregion. In addition, we found that the number of amino acid changes or percentage nucleotide difference did not associate with species limits. This study demonstrates that the complementary use of taxonomic expertise and molecular data can improve both the speed and the accuracy of species-level biodiversity assessment. We believe this represents a means for non-systematists to collaborate directly with taxonomists in species identification and represents an improvement over methods that rely solely on parataxonomy or sequence data. [source]

Ecological boundary detection using Carlin–Chib Bayesian model selection
DIVERSITY AND DISTRIBUTIONS, Issue 6 2005. Ralph Mac Nally.
Sharp ecological transitions in space (ecotones, edges, boundaries) often are where ecologically important events occur, such as elevated or reduced biodiversity or altered ecological functions (e.g. changes in productivity, pollination rates or parasitism loads, nesting success). While human observers often identify these transitions by using intuitive or gestalt assignments (e.g. the boundary between a remnant woodland patch and the surrounding farm paddock seems obvious), it is clearly desirable to make statistical assessments based on measurements. These assessments often are straightforward to make if the data are univariate, but identifying boundaries or transitions using compositional or multivariate data sets is more difficult. There is a need for an intermediate step in which pairwise similarities between points or temporal samples are computed. Here, I describe an approach that treats points along a transect as alternative hypotheses (models) about the location of the boundary. Carlin and Chib (1995) introduced a Bayesian technique for comparing non-hierarchical models, which I adapted to compute the probabilities of each boundary location (i.e. a model) relative to the ensemble of models constituting the set of possible points of the boundary along the transect. Several artificial data sets and two field data sets (on vegetation and soils and on cave-dwelling invertebrates and microclimates) are used to illustrate the approach. The method can be extended to cases with several boundaries along a gradient, such as where there is an ecotone of non-zero thickness. [source]
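The "one model per candidate boundary" idea can be illustrated in a conjugate toy setting where each model's marginal likelihood is available in closed form, so the posterior model probabilities that Carlin and Chib's MCMC scheme targets can be computed directly. The Gaussian model, priors, and data below are all hypothetical, chosen only to keep the sketch self-contained.

```python
import numpy as np

def log_marginal(x, sigma2=1.0, tau2=10.0):
    """Log marginal likelihood of a segment x under x_i ~ N(mu, sigma2)
    with mu ~ N(0, tau2), via the Sherman-Morrison closed form."""
    n, s, ss = len(x), x.sum(), (x ** 2).sum()
    logdet = n * np.log(sigma2) + np.log1p(n * tau2 / sigma2)
    quad = ss / sigma2 - (tau2 / (sigma2 * (sigma2 + n * tau2))) * s ** 2
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

def boundary_posterior(x):
    """Model M_k: one mean before breakpoint k, another after it.
    Returns P(M_k | x) for k = 1..n-1 under equal prior model weights."""
    logs = np.array([log_marginal(x[:k]) + log_marginal(x[k:])
                     for k in range(1, len(x))])
    w = np.exp(logs - logs.max())            # stabilized softmax over models
    return w / w.sum()

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 30), rng.normal(3, 1, 20)])
post = boundary_posterior(x)
print("most probable boundary after point", post.argmax() + 1)
```

Carlin and Chib's pseudo-prior sampler becomes necessary when, unlike in this toy case, the per-model marginal likelihoods cannot be evaluated analytically.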
Buccal delivery of insulin: the time is now
DRUG DEVELOPMENT RESEARCH, Issue 7 2006. Gerald Bernstein.
The burgeoning numbers of individuals with diabetes mellitus and prediabetes, in particular Type 2, including large numbers of children, open up not only the classic risks for microvascular disease but the earlier and incapacitating risk for macrovascular disease. Oral hypoglycemic agents and insulin sensitizers have not been adequate to control postprandial glucose. Prandial insulin is most desirable, but resistance to injections limits its use. This has led to a battery of needle-free insulin delivery systems. Buccal delivery stands out as being safe, simple, fast, flexible, and familiar to patient and physician alike. Drug Dev. Res. 67:597–599, 2006. © 2006 Wiley-Liss, Inc. [source]

Wavelet-based simulation of spectrum-compatible aftershock accelerograms
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 11 2008. S. Das.
In damage-based seismic design it is desirable to account for the ability of aftershocks to cause further damage to a structure already damaged by the main shock. Availability of recorded or simulated aftershock accelerograms is a critical component of the non-linear time-history analyses required for this purpose, and simulation of realistic accelerograms is therefore going to be the need of the profession for a long time to come. This paper attempts wavelet-based simulation of aftershock accelerograms for two scenarios. In the first scenario, recorded main shock and aftershock accelerograms are available along with the pseudo-spectral acceleration (PSA) spectrum of the anticipated main shock motion, and an accelerogram has been simulated for the anticipated aftershock motion such that it incorporates temporal features of the recorded aftershock accelerogram. In the second scenario, a recorded main shock accelerogram is available along with the PSA spectrum of the anticipated main shock motion and the PSA spectrum and strong-motion duration of the anticipated aftershock motion. Here, the accelerogram for the anticipated aftershock motion has been simulated assuming that temporal features of the main shock accelerogram are replicated in the aftershock accelerograms at the same site. The proposed algorithms have been illustrated with the help of the main shock and aftershock accelerograms recorded for the 1999 Chi-Chi earthquake. It has been shown that the proposed algorithm for the second scenario leads to useful results even when the main shock and aftershock accelerograms do not share the same temporal features, as long as the strong-motion duration of the anticipated aftershock motion is properly estimated. Copyright © 2008 John Wiley & Sons, Ltd. [source]
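Both simulation scenarios in the abstract above match accelerograms against a target pseudo-spectral acceleration (PSA) spectrum, so the basic building block is computing the PSA of a candidate record. The sketch below uses standard Newmark average-acceleration integration of a damped single-degree-of-freedom oscillator per period (a textbook method; the paper's wavelet machinery is not reproduced here), with PSA(T) = wn^2 * max|u(t)|.

```python
import numpy as np

def psa_spectrum(ag, dt, periods, zeta=0.05):
    """5%-damped pseudo-spectral acceleration of a ground-motion record `ag`
    (m/s^2, time step dt): integrate each SDOF oscillator with the Newmark
    average-acceleration method (gamma=1/2, beta=1/4)."""
    spec = []
    for T in periods:
        wn = 2.0 * np.pi / T
        m, c, k = 1.0, 2.0 * zeta * wn, wn ** 2
        a1 = 4.0 * m / dt ** 2 + 2.0 * c / dt   # effective-stiffness terms
        a2 = 4.0 * m / dt + c
        kh = k + a1
        u, v, a = 0.0, 0.0, -ag[0]              # at-rest start: m*a = -m*ag
        umax = 0.0
        for agi in ag[1:]:
            ph = -m * agi + a1 * u + a2 * v + m * a
            un = ph / kh
            vn = 2.0 * (un - u) / dt - v
            a = 4.0 * (un - u) / dt ** 2 - 4.0 * v / dt - a
            u, v = un, vn
            umax = max(umax, abs(u))
        spec.append(wn ** 2 * umax)
    return np.array(spec)

# Usage on a toy decaying sinusoidal pulse (hypothetical record):
dt = 0.01
t = np.arange(0.0, 10.0, dt)
ag = np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.3 * t)
print(psa_spectrum(ag, dt, periods=[0.2, 0.5, 1.0, 2.0]))
```

A spectrum-compatible simulation iteratively adjusts a seed record (in the paper, its wavelet coefficients) until this computed spectrum matches the target PSA spectrum.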
Right Ventricular Function Assessment: Comparison of Geometric and Visual Method to Short-Axis Slice Summation Method
ECHOCARDIOGRAPHY, Issue 10 2007. Daniel Drake, M.D.
Background: The short-axis summation (SAS) method applied for right ventricular (RV) volumes and right ventricular ejection fraction (RVEF) measurement with cardiac MRI is time consuming and cumbersome to use. A simplified RVEF measurement is desirable. We compare two such methods, a simplified ellipsoid geometric method (GM) and visual estimate, to the SAS method to determine their accuracy and reproducibility. Methods: Forty patients undergoing cine cardiac MRI scan were enrolled. The images acquired were analyzed by the SAS method, the GM (area and length measurement from two orthogonal planes) and visual estimate. RVEF was calculated using all three methods, and RV volumes using the SAS method and the GM. Bland–Altman analysis was applied to test the agreement between the various measurements. Results: Mean RVEF was 49 ± 12% measured by the SAS method, 54 ± 12% by the GM, and 49 ± 11% by visual estimate. There were similar bias and limits of agreement between the visual estimate and the GM compared to SAS. The interobserver variability showed a bias close to zero with limits of agreement within ±10% absolute increments of RVEF in 35 of the patients. The RV end-diastolic volume by GM showed wider limits of agreement. The RV end-systolic volume by GM was underestimated by around 10 ml compared to SAS. Conclusion: Both the visual estimate and the GM had similar bias and limits of agreement when compared to SAS. Though the end-systolic measurement is somewhat underestimated, the geometric method may be useful for serial volume measurements. [source]
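For reference, the ejection fraction that all three methods estimate is defined from the end-diastolic and end-systolic volumes. The abstract does not give the GM's exact formula; a commonly used biplane area–length ellipsoid model, shown here only as an assumed form, estimates volume from the areas A_1, A_2 in two orthogonal planes and the long-axis length L:

```latex
\mathrm{RVEF} = \frac{V_{ED} - V_{ES}}{V_{ED}} \times 100\%,
\qquad
V \approx \frac{8\, A_1 A_2}{3\pi L}.
```

Under this reading, the roughly 10 ml underestimation of end-systolic volume by the GM would bias RVEF upward, consistent with the 54% versus 49% mean RVEF reported above.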
Plant species response to land use change – Campanula rotundifolia, Primula veris and Rhinanthus minor
ECOGRAPHY, Issue 1 2005. Regina Lindborg.
Land use change is a crucial driver behind species loss at the landscape scale. Hence, from a conservation perspective, species response to habitat degradation or improvement of habitat quality is important to examine. By using indicator species it may be possible to monitor long-term survival of local populations associated with land use change. In this study we examined three potential indicator (response) species for species richness and composition in Scandinavian semi-natural grassland communities: Campanula rotundifolia, Primula veris and Rhinanthus minor. With field inventories and experiments we examined their response to present land use, habitat degradation and improvement of local habitat quality. At the time scale examined, C. rotundifolia was the only species responding to both habitat degradation and improvement of habitat quality. Neither R. minor nor P. veris responded positively to habitat improvements, although both responded rapidly to direct negative changes in habitat quality. Even though C. rotundifolia responded quickly to habitat degradation, it did not disappear completely from the sites. Instead, the population structure changed in terms of decreased population size and flowering frequency. It also showed an ability to form remnant populations, which may increase the resilience of local habitats. Although P. veris and especially R. minor responded rapidly to negative environmental changes and may be useful as early indicators of land use change, it is desirable that indicators respond to both degradation and improvement of habitat quality. Thus, C. rotundifolia is a better response species for monitoring effects of land use change and conservation measures, provided that both local and regional population dynamics are monitored over a long time period. [source]

Variability in responses to thermal stress in parasitoids
ECOLOGICAL ENTOMOLOGY, Issue 6 2008. Gaëlle Amice.
1. To study phenotypic effects of stress, a stress is applied to cohorts of organisms with an increasing intensity. In the absence of mortality, the response of traits will be a decreasing function of stress intensity because of increasing physiological costs. We call such decreasing functions type A responses.
2. However, when stress caused mortality, some studies have found that for high stress intensities, survivors performed as well as control individuals (type B responses). We proposed that type A responses are caused by the physiological cost of stress, whereas type B responses are caused by a mixture of physiological costs and selection.
3. The present study exposed Aphidius picipes wasps to an increasing duration of cold storage (cold stress), and obtained variable responses, as predicted when both physiological costs and selection of resistant individuals determine the outcome.
4. When cold storage of parasitoids for biological control is desirable, research should be carried out to find (i) the temperature regime and duration of storage and (ii) the least sensitive stage for storage, to minimise losses from mortality and reduction of fitness of survivors.
5. Selection by cold stress as observed in the present study could result in rapid adaptation of populations exposed to such stress. [source]

Resilience thinking: Interview with Brian Walker
ECOLOGICAL MANAGEMENT & RESTORATION, Issue 2 2007. Tein McDonald.
Summary: This interview with Brian Walker, chair of the research-based Resilience Alliance, outlines the main concepts and propositions behind 'resilience thinking' and touches on the importance of this paradigm for individuals and organizations involved in managing complex social-ecological systems. It refers to the origins, work and publications of the Resilience Alliance, listing and elaborating the key case studies used to illustrate the Alliance's main proposition that complex social-ecological systems do not behave in a predictable linear fashion. Rather, research indicates it is normal for complex systems to go through cycles of increasing and decreasing resilience and to have the potential to shift, in a self-organising way, to potentially undesirable states or entirely new systems if certain component variables are severely impacted by management. Such shifts can be novel and 'surprising', and are often not beneficial or desirable for societies. This is particularly the case where small-scale solutions push the problem upwards in a system, causing loss of resilience at a global scale. Predicting thresholds is therefore important to managers and is a key research focus for members of the Resilience Alliance, who are currently building an accessible database to support decision-making in global natural resource management. [source]

Disentangling biodiversity effects on ecosystem functioning: deriving solutions to a seemingly insurmountable problem
ECOLOGY LETTERS, Issue 6 2003. Shahid Naeem.
Experimental investigations of the relationship between biodiversity and ecosystem functioning (BEF) directly manipulate diversity, then monitor ecosystem response to the manipulation. While these studies have generally confirmed the importance of biodiversity to the functioning of ecosystems, their broader significance has been difficult to interpret. The main reasons for this difficulty concern the small scales of the experiments, a bias towards plants and grasslands, and, most importantly, a general lack of clarity in terms of what attributes of functional diversity (FD) were actually manipulated. We review how functional traits, functional groups, and the relationship between functional and taxonomic diversity have been used in current BEF research. Several points emerged from our review. First, it is critical to distinguish between response and effect functional traits when quantifying or manipulating FD. Second, although it is widely done, using trophic position as a functional group designator does not fit the effect-response trait division needed in BEF research. Third, determining a general relationship between taxonomic and functional diversity is neither necessary nor desirable in BEF research. Fourth, fundamental principles in community and biogeographical ecology that have been largely ignored in BEF research could serve to dramatically improve the scope and predictive capabilities of BEF research. We suggest that distinguishing between functional response traits and functional effect traits, both in combinatorial manipulations of biodiversity and in descriptive studies of BEF, could markedly improve the power of such studies. We construct a possible framework for predictive, broad-scale BEF research that requires integrating functional, community, biogeographical, and ecosystem ecology with taxonomy. [source]
Consistent Regulation of Infrastructure Businesses: Some Economic Issues
ECONOMIC PAPERS: A JOURNAL OF APPLIED ECONOMICS AND POLICY, Issue 1 2009. Flavio M. Menezes. JEL classification: L51.
This article examines some important economic issues associated with the notion that consistency in the regulation of infrastructure businesses is a desirable feature. It makes two important points. First, it is not easy to measure consistency. In particular, one cannot simply point to different regulatory parameters as evidence of inconsistent regulatory policy. Second, even if one does observe consistency emerging from decisions made by different regulators, it does not necessarily mean that this consistency is desirable. It might be the result, at least partially, of career concerns of regulators. [source]