Worst Case (bad + case)


Terms modified by Worst Case

  • worst case scenario

Selected Abstracts


    Worst Cases: Terror and Catastrophe in the Popular Imagination by Lee Clarke

    THE JOURNAL OF AMERICAN CULTURE, Issue 3 2006
    Arthur G. Neal
    No abstract is available for this article. [source]


    Utility Functions for Ceteris Paribus Preferences

    COMPUTATIONAL INTELLIGENCE, Issue 2 2004
    Michael McGeachie
    Ceteris paribus (all-else equal) preference statements concisely represent preferences over outcomes or goals in a way natural to human thinking. Although deduction in a logic of such statements can compare the desirability of specific conditions or goals, many decision-making methods require numerical measures of degrees of desirability. To permit ceteris paribus specifications of preferences while providing quantitative comparisons, we present an algorithm that compiles a set of qualitative ceteris paribus preferences into an ordinal utility function. Our algorithm is complete for a finite universe of binary features. Constructing the utility function can, in the worst case, take time exponential in the number of features, but common independence conditions reduce the computational burden. We present heuristics using utility independence and constraint-based search to obtain efficient utility functions. [source]
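
    The compilation step can be illustrated with a small, deliberately brute-force sketch (not the authors' algorithm): each ceteris paribus statement induces an "improvement" edge between any two outcomes that differ only in the stated feature, and an ordinal utility is read off as the length of the longest improvement chain below each outcome. The encoding of the preference statements and the function name are illustrative assumptions, and enumerating all outcomes reflects the exponential worst case mentioned above.

        from itertools import product

        def compile_ordinal_utility(n_features, cp_prefs):
            # cp_prefs: list of (feature, preferred_value) ceteris paribus statements,
            # e.g. (2, 1) means "all else being equal, feature 2 = 1 is preferred".
            # Assumes the statements are consistent (one preferred value per feature).
            outcomes = list(product((0, 1), repeat=n_features))
            worse = {o: [] for o in outcomes}   # outcomes directly worse than o
            for f, pref in cp_prefs:
                for o in outcomes:
                    if o[f] == pref:
                        worse[o].append(o[:f] + (1 - pref,) + o[f + 1:])
            memo = {}
            def utility(o):                     # longest improvement chain below o
                if o not in memo:
                    memo[o] = 1 + max((utility(w) for w in worse[o]), default=-1)
                return memo[o]
            return {o: utility(o) for o in outcomes}

        u = compile_ordinal_utility(3, [(0, 1), (2, 0)])
        print(u[(1, 0, 0)] > u[(0, 0, 0)])   # True: feature 0 improved, all else equal
        print(u[(1, 1, 0)] > u[(1, 1, 1)])   # True: feature 2 improved, all else equal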


    Resource reservations with fuzzy requests

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2006
    T. Röblitz
    Abstract We present a scheme for reserving job resources with imprecise requests. Typical parameters such as the estimated runtime, the start time or the type or number of required CPUs need not be fixed at submission time but can be kept fuzzy in some aspects. Users may specify a list of preferences which guide the system in determining the best matching resources for the given job. Originally, the impetus for our work came from the need for efficient co-reservation mechanisms in the Grid where rigid constraints on multiple job components often make it difficult to find a feasible solution. Our method for handling fuzzy reservation requests gives the users more freedom to specify the requirements and it gives the Grid Reservation Service more flexibility to find optimal solutions. In the future, we will extend our methods to process co-reservations. We evaluated our algorithms with real workload traces from a large supercomputer site. The results indicate that our scheme greatly improves the flexibility of the solution process without having much effect on the overall workload of a site. From a user's perspective, only about 10% of the non-reservation jobs have a longer response time, and from a site administrator's view, the makespan of the original workload is extended by only 8% in the worst case. Copyright © 2006 John Wiley & Sons, Ltd. [source]
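
    As a rough illustration of matching imprecise requests (a toy sketch, not the system described in the paper): each fuzzy aspect of a request can be given a simple triangular membership function, and candidate reservation slots scored by the product of their membership degrees. The windows, slot fields, and scoring rule below are assumptions made purely for the example.

        def triangular(x, lo, peak, hi):
            # simple fuzzy membership: 1.0 at the preferred value, falling to 0 at the bounds
            if x <= lo or x >= hi:
                return 0.0
            return (x - lo) / (peak - lo) if x <= peak else (hi - x) / (hi - peak)

        def score_slot(slot, request):
            # combined degree to which a candidate slot satisfies the fuzzy request
            s_time = triangular(slot["start"], *request["start_window"])
            s_cpus = triangular(slot["cpus"], *request["cpu_window"])
            return s_time * s_cpus

        request = {"start_window": (0, 30, 120),    # would like to start in about 30 minutes
                   "cpu_window":   (32, 64, 128)}   # prefers 64 CPUs, accepts 32..128
        candidates = [{"start": 20, "cpus": 64}, {"start": 90, "cpus": 48}]
        print(max(candidates, key=lambda slot: score_slot(slot, request)))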


    Predicting the impact of climate change on Australia's most endangered snake, Hoplocephalus bungaroides

    DIVERSITY AND DISTRIBUTIONS, Issue 1 2010
    Trent D. Penman
    Abstract Aim: To predict how the bioclimatic envelope of the broad-headed snake (BHS) (Hoplocephalus bungaroides) may be redistributed under future climate warming scenarios. Location: South-eastern New South Wales, Australia. Methods: We used 159 independent locations for the species and 35 climatic variables to model the bioclimatic envelope for the BHS using two modelling approaches, Bioclim and Maxent. Predictions were made under current climatic conditions and we also predicted the species distribution under low and high climate change scenarios for 2030 and 2070. Results: Broad-headed snakes currently encompass their entire bioclimatic envelope. Both modelling approaches predict that suitable climate space for BHS will be lost to varying degrees under both climate warming scenarios, and under the worst case, only 14% of known snake populations may persist. Main conclusions: Areas of higher elevation within the current range will be most important for persistence of this species because they will remain relatively moist and cool even under climate change and will match the current climate envelope. Conservation efforts should focus on areas where suitable climate space may persist under climate warming scenarios. Long-term monitoring programs should be established both in these areas and where populations are predicted to become extirpated, so that we can accurately determine changes in the distribution of this species throughout its range. [source]
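
    For orientation, the envelope idea behind Bioclim can be sketched in a few lines (a schematic illustration only, not the authors' implementation, and far simpler than Maxent): a site is treated as climatically suitable when every climatic variable falls inside a chosen percentile range of the values observed at known occurrence locations. The variables, percentiles, and numbers below are invented for the example.

        def bioclim_envelope(occurrence_climates, lo_pct=0.05, hi_pct=0.95):
            # occurrence_climates: climate-variable vectors at known occurrence locations
            n_vars = len(occurrence_climates[0])
            bounds = []
            for j in range(n_vars):
                col = sorted(row[j] for row in occurrence_climates)
                bounds.append((col[int(lo_pct * (len(col) - 1))],
                               col[int(hi_pct * (len(col) - 1))]))
            def suitable(site):
                # suitable only if every variable lies inside its envelope
                return all(lo <= v <= hi for v, (lo, hi) in zip(site, bounds))
            return suitable

        # two illustrative variables: mean annual temperature (deg C), annual rainfall (mm)
        occurrences = [(14.2, 820), (13.5, 900), (15.0, 760), (12.8, 980), (14.8, 850)]
        inside = bioclim_envelope(occurrences)
        print(inside((14.0, 880)), inside((18.5, 400)))   # True False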


    Influence of line routing and terminations on transient overvoltages in LV power installations

    EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 8 2009
    Ibrahim A. Metwally
    Abstract IEC 62305-4 gives the rules for the selection and the installation of surge protective devices (SPDs), where the maximum enhancement factor is considered to be equal to 2 in the worst case of open-circuit condition. The objective of the present paper is to check this relation for equipment connected to a low-voltage (LV) power system. The LV power system is considered as a TN-S system with different routings in three- and six-storey buildings. The terminals of apparatus are substituted by a variety of different loads, namely, resistances, inductances, and capacitances. All of Maxwell's equations are solved by the method of moments (MoM) and the voltage is calculated at the apparatus terminals. The SPD itself is simulated by a voltage source at the ground floor. The results reveal that the voltage at the apparatus terminals may overshoot the SPD protection level by a factor of 3 irrespective of the number of floors and loops. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Electromagnetic torque of a synchronous machine during a single out-of-phase reclosing

    EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 4 2000
    A. C. Ammari
    The adoption of single-pole switching for a radial transmission line which connects large synchronous machines to the power system would submit these machines to repetitive mechanical stress. To evaluate the total mechanical stress, it is first necessary to determine the electromagnetic torque and to evaluate its transient maximum values at reclosing. In this paper, maximum values of the electromagnetic transient torque at single-pole reclosing are first computed using a simplified analytical approach. The analytical results are then validated by numerical simulations and by an experimental test on a laboratory synchronous machine. It will be shown that single-pole reclosing would be, in the worst case, as restrictive as a three-phase out-of-phase synchronisation. [source]


    Integrated models of livestock systems for climate change studies.

    GLOBAL CHANGE BIOLOGY, Issue 2 2001

    Summary The potential impact of climate change by the year 2050 on intensive livestock systems in Britain is assessed through the use of simulation models of farming systems. The submodels comprise livestock feeding, livestock thermal balance, the thermal balance of controlled-environment buildings, and a stochastic weather generator. These are integrated to form system models for growing pigs and broiler chickens. They are applied to scenarios typical of SE England, which is the warmest region of the country and represents the worst case. For both species the frequency of severe heat stress is substantially increased, with a consequent risk of mortality. To offset this, it would be necessary to reduce stocking densities considerably, or to invest in improved ventilation or cooling equipment. Other effects on production are likely to be small. [source]


    A Decision-Making Framework for Sediment Contamination

    INTEGRATED ENVIRONMENTAL ASSESSMENT AND MANAGEMENT, Issue 3 2005
    Peter M. Chapman
    Abstract A decision-making framework for determining whether or not contaminated sediments are polluted is described. This framework is intended to be sufficiently prescriptive to standardize the decision-making process but without using "cook book" assessments. It emphasizes 4 guidance "rules": (1) sediment chemistry data are only to be used alone for remediation decisions when the costs of further investigation outweigh the costs of remediation and there is agreement among all stakeholders to act; (2) remediation decisions are based primarily on biology; (3) lines of evidence (LOE), such as laboratory toxicity tests and models that contradict the results of properly conducted field surveys, are assumed incorrect; and (4) if the impacts of a remedial alternative will cause more environmental harm than good, then it should not be implemented. Sediments with contaminant concentrations below sediment quality guidelines (SQGs) that predict toxicity to less than 5% of sediment-dwelling infauna and that contain no quantifiable concentrations of substances capable of biomagnifying are excluded from further consideration, as are sediments that do not meet these criteria but have contaminant concentrations equal to or below reference concentrations. Biomagnification potential is initially addressed by conservative (worst case) modeling based on benthos and sediments and, subsequently, by additional food chain data and more realistic assumptions. Toxicity (acute and chronic) and alterations to resident communities are addressed by, respectively, laboratory studies and field observations. The integrative decision point for sediments is a weight of evidence (WOE) matrix combining up to 4 main LOE: chemistry, toxicity, community alteration, and biomagnification potential. Of 16 possible WOE scenarios, 6 result in definite decisions, and 10 require additional assessment. Typically, this framework will be applied to surficial sediments. The possibility that deeper sediments may be uncovered as a result of natural or other processes must also be investigated and may require similar assessment. [source]


    Dynamic-window search for real-time simulation of dynamic systems

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 11 2003
    Sugjoon Yoon
    Abstract Various parameter values are provided in the form of tables, where data keys are ordered and unevenly spaced in general, for real-time simulation of system components or dynamics of vehicles such as airplanes, automobiles, ships, and so on. The main purpose of this study is to compare conventional searching algorithms and to find or develop the most efficient searching method under the constraint of real-time simulation, especially hardware-in-the-loop simulation. Since the real-time constraint enforces use of a fixed step size in the integration of system differential equations because of the inherent nature of input from and output to real hardware, the worst case of iterated probes in searching algorithms is the key measure of comparison. If a parameter value has certain dynamics because of its relation with the state variables of the simulated system, the integration algorithm and the step size, a searching region at a given time frame can be reduced dramatically from the entire data table taking advantage of the information. The size of the reduced searching region, named dynamic-searching window, varies and the window moves by its own dynamics as simulation time runs. Numerous numerical experiments were conducted with various data tables of different sizes and types, and yielded results compatible with relevant theories. In conclusion, whether bisection or interpolation or fast search is used in real-time hardware-in-the-loop simulation, the combination with dynamic-window search guarantees a more stable and faster search of parameter values than using conventional algorithms alone. Copyright © 2003 John Wiley & Sons, Ltd. [source]
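
    A minimal sketch of the dynamic-window idea (an illustration under stated assumptions, not the paper's implementation): because the looked-up key can move only a bounded amount between fixed-size time steps, the bisection is first restricted to a small window around the index found in the previous frame, and a full search is performed only when the key has left that window. Keys are assumed strictly increasing; the class and parameter names are invented.

        import bisect

        class DynamicWindowTable:
            # 1-D lookup table with ordered, possibly unevenly spaced keys.
            def __init__(self, keys, values, window=4):
                self.keys, self.values, self.window = keys, values, window
                self.last = 0                       # index found in the previous frame

            def lookup(self, x):
                lo = max(0, self.last - self.window)
                hi = min(len(self.keys), self.last + self.window + 1)
                i = bisect.bisect_right(self.keys, x, lo, hi) - 1
                if i < lo or (i == hi - 1 and hi < len(self.keys) and x >= self.keys[hi]):
                    # key left the dynamic window: fall back to a full binary search
                    i = bisect.bisect_right(self.keys, x) - 1
                i = max(0, min(i, len(self.keys) - 2))
                self.last = i
                # linear interpolation between the two bracketing table entries
                k0, k1 = self.keys[i], self.keys[i + 1]
                v0, v1 = self.values[i], self.values[i + 1]
                return v0 + (v1 - v0) * (x - k0) / (k1 - k0)

        import math
        keys = [0.0, 0.5, 1.5, 2.0, 4.0, 7.0, 10.0]
        table = DynamicWindowTable(keys, [math.sin(k) for k in keys])
        print(round(table.lookup(1.0), 3))   # first call, window around index 0
        print(round(table.lookup(1.2), 3))   # key found inside the window, no full search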


    On the sample-complexity of H∞ identification

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 7 2001
    S. R. Venkatesh
    Abstract In this paper we derive the sample complexity for discrete time linear time-invariant stable systems described in the H∞ topology. The problem set-up is as follows: the H∞ norm distance between the unknown real system and a known finitely parameterized family of systems is bounded by a known real number. We can associate, for every feasible real system, a model in the finitely parameterized family that minimizes the H∞ distance. The question now arises as to how long a data record is required to identify such a model from noisy input–output data. This question has been addressed in the context of l1, H2 and several other topologies, and it has been shown that the sample-complexity is polynomial. Nevertheless, it turns out that for the H∞ topology the sample-complexity in the worst case can be infinite. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    The acceptability to stakeholders of mandatory nutritional labelling in France and the UK – findings from the PorGrow project

    JOURNAL OF HUMAN NUTRITION & DIETETICS, Issue 1 2010
    M. Holdsworth
    Abstract Background: Implementing a European Union (EU)-wide mandatory nutrition labelling scheme has been advocated as part of a multi-pronged strategy to tackle obesity. The type of scheme needs to be acceptable to all key stakeholders. This study explored stakeholders' viewpoints of labelling in two contrasting food cultures (France and the UK) to see whether attitudes were influenced by sectoral interests and/or national context. Methods: Using Multi Criteria Mapping, a decision analysis tool that assesses stakeholder viewpoints, quantitative and qualitative data were gathered during tape-recorded interviews. In France and the UK, 21 comparable stakeholders appraised nutritional labelling with criteria of their own choosing (i.e. feasibility, societal benefits, social acceptability, efficacy in addressing obesity, additional health benefits) and three criteria relating to cost (to industry; public sector; individuals). When scoring, interviewees provided both optimistic (best case) and pessimistic (worst case) judgements. Results: Overall, mandatory nutritional labelling was appraised least favourably in France. Labelling performed worse under optimistic (best case) scenarios in France, for five out of eight sets of criteria. French stakeholders viewed labelling as expensive, having fewer benefits to society and as being marginally less effective than UK stakeholders did. However, French interviewees thought implementing labelling was feasible and would provide additional health benefits. British and French stakeholders made similar quantitative judgements on how socially acceptable mandatory labelling would be. Conclusions: There is agreement between some stakeholder groups in the two different countries, especially food chain operators. However, cultural differences emerged that could influence the impact of an EU-wide mandatory labelling scheme in both countries. [source]


    A backoff strategy for model-based experiment design under parametric uncertainty

    AICHE JOURNAL, Issue 8 2010
    Federico Galvanin
    Abstract Model-based experiment design techniques are an effective tool for the rapid development and assessment of dynamic deterministic models, yielding the most informative process data to be used for the estimation of the process model parameters. A particular advantage of the model-based approach is that it permits the definition of a set of constraints on the experiment design variables and on the predicted responses. However, uncertainty in the model parameters can lead the constrained design procedure to predict experiments that turn out to be, in practice, suboptimal, thus decreasing the effectiveness of the experiment design session. Additionally, in the presence of parametric mismatch, the feasibility constraints may well turn out to be violated when that optimally designed experiment is performed, leading in the best case to less informative data sets or, in the worst case, to an infeasible or unsafe experiment. In this article, a general methodology is proposed to formulate and solve the experiment design problem by explicitly taking into account the presence of parametric uncertainty, so as to ensure both feasibility and optimality of the planned experiment. A prediction of the system responses for the given parameter distribution is used to evaluate and update suitable backoffs from the nominal constraints, which are used in the design session to keep the system within a feasible region with specified probability. This approach is particularly useful when designing optimal experiments starting from limited preliminary knowledge of the parameter set, with great improvement in terms of design efficiency and flexibility of the overall iterative model development scheme. The effectiveness of the proposed methodology is demonstrated and discussed by simulation through two illustrative case studies concerning the parameter identification of physiological models related to diabetes and cancer care. © 2009 American Institute of Chemical Engineers AIChE J, 2010 [source]
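
    The back-off idea can be sketched generically (a Monte-Carlo illustration under assumed names and a toy response model, not the authors' formulation): predict the constrained response for a sample of parameter sets drawn from the current parameter distribution, and take the back-off as the gap between a chosen quantile of those predictions and the nominal prediction, so that a design kept inside the tightened nominal constraint remains feasible with the requested probability.

        import random

        def compute_backoff(response, design, theta_nominal, theta_samples, probability=0.95):
            # response(design, theta): model prediction of the constrained quantity.
            # Returns the margin to subtract from the constraint limit so that the
            # constraint still holds with the requested probability over the samples.
            nominal = response(design, theta_nominal)
            values = sorted(response(design, theta) for theta in theta_samples)
            quantile = values[min(len(values) - 1, int(probability * len(values)))]
            return max(0.0, quantile - nominal)

        # toy example: response y = theta * u with an uncertain parameter theta
        random.seed(1)
        respond = lambda u, theta: theta * u
        samples = [random.gauss(2.0, 0.3) for _ in range(1000)]
        backoff = compute_backoff(respond, design=4.0, theta_nominal=2.0, theta_samples=samples)
        print(round(backoff, 2))   # design against y_nominal <= limit - backoff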


    The value of assessing weights in multi-criteria portfolio decision analysis

    JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 5-6 2008
    Jeffrey M. Keisler
    Abstract Analytic efforts in support of portfolio decisions can be applied with varying levels of intensity. To gain insight about how to match the effort to the situation, we simulate a portfolio of potential projects and compare portfolio performance under a range of analytic strategies. Each project is scored with respect to several attributes in a linear additive value model. Projects are ranked in order of value per unit cost and funded until the budget is exhausted. Assuming these weights and scores are correct, and the funding decisions made this way are optimal, this process is a gold standard against which to compare other decision processes. In particular, a baseline process would fund projects essentially at random, and we may estimate the value added by various decision processes above this worst case as a percentage of the increase arising from the optimal process. We consider several stylized decision rules and combinations of them: using equal weights, picking one attribute at random, assessing weights from a single randomly selected stakeholder. Simulation results are then used to identify which conditions tend to make which types of analytic strategies valuable, and to identify useful hybrid strategies. Copyright © 2009 John Wiley & Sons, Ltd. [source]
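
    The simulation logic can be reconstructed roughly as follows (an illustrative sketch, not the authors' code; the project count, budget, attribute count, and number of random repetitions are arbitrary assumptions): projects are scored with a linear additive value model, funded in order of value per unit cost until the budget is exhausted, and a candidate decision rule is credited with the share of the random-to-optimal gap that it recovers.

        import random

        random.seed(0)
        N_PROJECTS, N_ATTRS, BUDGET = 50, 4, 100.0
        true_w = [random.random() for _ in range(N_ATTRS)]
        projects = [{"cost": random.uniform(1, 10),
                     "scores": [random.random() for _ in range(N_ATTRS)]}
                    for _ in range(N_PROJECTS)]

        def value(p, weights):
            # linear additive value model
            return sum(w * s for w, s in zip(weights, p["scores"]))

        def portfolio_value(order):
            # fund projects in the given order until the budget is exhausted
            spent = total = 0.0
            for p in order:
                if spent + p["cost"] <= BUDGET:
                    spent += p["cost"]
                    total += value(p, true_w)
            return total

        def ranked(weights):
            return sorted(projects, key=lambda p: value(p, weights) / p["cost"], reverse=True)

        optimal = portfolio_value(ranked(true_w))                        # gold standard
        baseline = sum(portfolio_value(random.sample(projects, N_PROJECTS))
                       for _ in range(200)) / 200                        # random funding
        equal_w = portfolio_value(ranked([1.0] * N_ATTRS))               # equal-weights rule
        print(round((equal_w - baseline) / (optimal - baseline), 2))     # share of gap recovered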


    A risk-based approach to establish stability testing conditions for tropical countries

    JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 5 2006
    Manuel Zahn
    Abstract The external stability risk factors heat and moisture are evaluated with respect to the development of pharmaceutical products intended to be marketed in tropical and subtropical countries. The mean daily temperatures and dewpoints measured four times per day at selected places in Southeast Asia, South America, China, Southern Africa and the Caribbean are used to calculate the daily and monthly fluctuations of temperature and partial water vapour pressure, the mean kinetic temperature and the relative humidity. Based on these data, the hottest and the most humid place in each country or region are identified to reflect the worst case for the specific region. A formula to calculate safety margins for temperature and partial vapour pressure is introduced taking into consideration the difference between measured meteorological parameters and the stability testing conditions. An appropriate long-term stability testing condition is proposed for each selected country, related to the worst case for each specific region and the safety margins, as well as its classification in either Climatic Zone IVA or IVB. © 2006 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 95:946–965, 2006 [source]
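
    One of the derived quantities mentioned above, the mean kinetic temperature, follows the standard Haynes formula; a minimal sketch is given below (the activation-energy default and the temperature readings are illustrative assumptions, not values from the paper).

        import math

        def mean_kinetic_temperature(temps_celsius, delta_h=83.144):
            # Standard mean kinetic temperature (Haynes) formula.
            # delta_h: activation energy in kJ/mol (83.144 kJ/mol is a commonly used
            # default, i.e. delta_h / R = 10,000 K); R = 8.3144e-3 kJ/(mol*K).
            R = 8.3144e-3
            temps_k = [t + 273.15 for t in temps_celsius]
            mean_exp = sum(math.exp(-delta_h / (R * t)) for t in temps_k) / len(temps_k)
            return delta_h / (R * -math.log(mean_exp)) - 273.15   # back to degrees Celsius

        # four daily readings over two days at a hot, humid site (illustrative values)
        readings = [26.0, 31.5, 34.0, 28.0, 25.5, 32.0, 35.5, 27.5]
        print(round(mean_kinetic_temperature(readings), 2))   # slightly above the arithmetic mean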


    Broadband planar DTV antenna in the portable media player held by the user's hands

    MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 8 2007
    Wei-Yu Li
    Abstract Effects of the user's hands on the broadband planar DTV antenna in the portable media player (PMP) are studied. The antenna has a wide operating bandwidth covering the 470–806 MHz band for DTV signal reception and a one-layer equivalent simulation hand model including the user's forearm with a relative permittivity of 33.5 and a conductivity of 0.47 S/m is used for the simulation study. Three different conditions of the user's hands (right hand only, left hand only, and both hands) holding the PMP are studied, and their effects on the return loss, radiation efficiency, and radiation patterns of the studied DTV antenna are analyzed. In addition, effects of the user's hands holding the PMP at different positions relative to the studied DTV antenna are analyzed. Results have shown that, for the worst case, the radiation efficiency of the studied DTV antenna is still larger than 60% over the operating band, making the antenna very promising for practical applications. © 2007 Wiley Periodicals, Inc. Microwave Opt Technol Lett 49: 1841–1844, 2007; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.22639 [source]


    On the first come–first served rule in multi-echelon inventory control

    NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 5 2007
    Sven Axsäter
    Abstract A two-echelon distribution inventory system with a central warehouse and a number of retailers is considered. The retailers face stochastic demand and replenish from the warehouse, which, in turn, replenishes from an outside supplier. The system is reviewed continuously and demands that cannot be met directly are backordered. Standard holding and backorder costs are considered. In the literature on multi-echelon inventory control it is standard to assume that backorders at the warehouse are served according to a first come–first served policy (FCFS). This allocation rule simplifies the analysis but is normally not optimal. It is shown that the FCFS rule can, in the worst case, lead to an asymptotically unbounded relative cost increase as the number of retailers approaches infinity. We also provide a new heuristic that will always give a reduction of the expected costs. A numerical study indicates that the average cost reduction when using the heuristic is about two percent. The suggested heuristic is also compared with two existing heuristics. © 2007 Wiley Periodicals, Inc. Naval Research Logistics, 2007 [source]


    Algorithms for the multi-item multi-vehicles dynamic lot sizing problem

    NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 2 2006
    Shoshana Anily
    Abstract We consider a two-stage supply chain, in which multiple items are shipped from a manufacturing facility or a central warehouse to a downstream retailer that faces deterministic external demand for each of the items over a finite planning horizon. The items are shipped through identical capacitated vehicles, each incurring a fixed cost per trip. In addition, there exist item-dependent variable shipping costs and inventory holding costs at the retailer for items stored at the end of the period; these costs are constant over time. The sum of all costs must be minimized while satisfying the external demand without backlogging. In this paper we develop a search algorithm to solve the problem optimally. Our search algorithm, although exponential in the worst case, is very efficient empirically due to new properties of the optimal solution that we found, which allow us to restrict the number of solutions examined. Second, we perform a computational study that compares the empirical running time of our search methods to other available exact solution methods to the problem. Finally, we characterize the conditions under which each of the solution methods is likely to be faster than the others and suggest efficient heuristic solutions that we recommend using when the problem is large in all dimensions. © 2005 Wiley Periodicals, Inc. Naval Research Logistics, 2006. [source]


    The two-median problem on Manhattan meshes

    NETWORKS: AN INTERNATIONAL JOURNAL, Issue 3 2007
    Mordecai J. Golin
    Abstract We investigate the two-median problem on a mesh with M columns and N rows (M ≥ N), under the Manhattan (L1) metric. We derive exact algorithms with respect to m, n, and r, the number of columns, rows, and vertices, respectively, that contain requests. Specifically, we give an O(mn² log m) time, O(r) space algorithm for general (nonuniform) meshes (assuming m ≥ n). For uniform meshes, we give two algorithms both using O(MN) space. One is an O(MN²) time algorithm, while the other is an algorithm running in O(MN log N) time with high probability and in O(MN²) time in the worst case assuming the weights are independent and identically distributed random variables satisfying certain natural conditions. These improve upon the previously best-known algorithm that runs in O(mn²r) time. © 2007 Wiley Periodicals, Inc. NETWORKS, Vol. 49(3), 226–233 2007 [source]
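
    For reference, the underlying problem (not the paper's fast algorithms) can be stated as a brute-force baseline: choose two medians minimizing the total weighted L1 distance from each request vertex to its nearer median; an optimal pair may be assumed to lie on the sub-grid spanned by the request rows and columns, since each median can be moved to the coordinate-wise median of its assigned requests. The coordinates and weights below are made up.

        from itertools import combinations, product

        def two_median_manhattan(requests):
            # requests: list of (column, row, weight) vertices carrying demand.
            cols = sorted({c for c, _, _ in requests})
            rows = sorted({r for _, r, _ in requests})
            best = (float("inf"), None)
            for m1, m2 in combinations(product(cols, rows), 2):
                cost = sum(w * min(abs(c - m1[0]) + abs(r - m1[1]),
                                   abs(c - m2[0]) + abs(r - m2[1]))
                           for c, r, w in requests)
                if cost < best[0]:
                    best = (cost, (m1, m2))
            return best

        cost, medians = two_median_manhattan([(0, 0, 2), (1, 4, 1), (5, 1, 3), (6, 5, 1)])
        print(cost, medians)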


    Distributed delay constrained multicast routing algorithm with efficient fault recovery

    NETWORKS: AN INTERNATIONAL JOURNAL, Issue 1 2006
    Hasan Ural
    Abstract Existing distributed delay constrained multicast routing algorithms construct a multicast tree in a sequential fashion and need to be restarted when failures occur during the multicast tree construction phase or during an on-going multicast session. This article proposes an efficient distributed delay constrained multicast routing algorithm that constructs a multicast tree in a concurrent fashion by taking advantage of the concurrency in the underlying distributed computation. The proposed algorithm has a message complexity of O(mn) and time complexity of O(n) in the worst case, where m is the number of destinations and n is the number of nodes in the network. It constructs multicast trees with the same tree costs as the ones constructed by well-known algorithms such as DKPP and DSHP while utilizing 409 to 1734 times fewer messages and 56 to 364 times less time than these algorithms under comparable success rate ratios. The proposed algorithm has been augmented with a fault recovery mechanism that efficiently constructs a multicast tree when failures occur during the tree construction phase and recovers from any failure in the multicast tree during an on-going multicast session without interrupting the running traffic on the unaffected portion of the tree. © 2005 Wiley Periodicals, Inc. NETWORKS, Vol. 47(1), 37–51 2006 [source]


    Modular solvers for image restoration problems using the discrepancy principle

    NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 5 2002
    Peter Blomgren
    Abstract Many problems in image restoration can be formulated as either an unconstrained non-linear minimization problem, usually with a Tikhonov-like regularization, where the regularization parameter has to be determined; or as a fully constrained problem, where an estimate of the noise level, either the variance or the signal-to-noise ratio, is available. The formulations are mathematically equivalent. However, in practice, it is much easier to develop algorithms for the unconstrained problem, and not always obvious how to adapt such methods to solve the corresponding constrained problem. In this paper, we present a new method which can make use of any existing convergent method for the unconstrained problem to solve the constrained one. The new method is based on a Newton iteration applied to an extended system of non-linear equations, which couples the constraint and the regularized problem, but it does not require knowledge of the Jacobian of the irregularity functional. The existing solver is only used as a black box solver, which for a fixed regularization parameter returns an improved solution to the unconstrained minimization problem given an initial guess. The new modular solver enables us to easily solve the constrained image restoration problem; the solver automatically identifies the regularization parameter during the iterative solution process. We present some numerical results. The results indicate that even in the worst case the constrained solver requires only about twice as much work as the unconstrained one, and in some instances the constrained solver can be even faster. Copyright © 2002 John Wiley & Sons, Ltd. [source]
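
    A minimal numpy sketch of the general idea (not the paper's Newton scheme, and with a stand-in Tikhonov least-squares problem playing the role of the black-box unconstrained solver): the regularization parameter is adjusted, here by a secant iteration on its logarithm, until the residual matches the known noise norm, which is exactly the discrepancy-principle constraint described above. All names and problem sizes are assumptions.

        import numpy as np

        def black_box_solver(A, b, lam):
            # stand-in for "any convergent method for the unconstrained problem":
            # Tikhonov-regularized least squares  min ||Ax - b||^2 + lam * ||x||^2
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

        def discrepancy_solve(A, b, noise_norm, lam0=1.0, tol=1e-6, max_iter=50):
            # secant iteration on log(lam) until the residual matches the noise level
            f = lambda t: np.linalg.norm(A @ black_box_solver(A, b, np.exp(t)) - b) - noise_norm
            t0, t1 = np.log(lam0), np.log(lam0) + 1.0
            f0, f1 = f(t0), f(t1)
            for _ in range(max_iter):
                if abs(f1) < tol or f1 == f0:
                    break
                t0, t1, f0 = t1, t1 - f1 * (t1 - t0) / (f1 - f0), f1
                f1 = f(t1)
            lam = float(np.exp(t1))
            return black_box_solver(A, b, lam), lam

        rng = np.random.default_rng(0)
        A = rng.standard_normal((60, 30))
        noise = 0.1 * rng.standard_normal(60)
        b = A @ rng.standard_normal(30) + noise
        x, lam = discrepancy_solve(A, b, np.linalg.norm(noise))
        print(round(float(np.linalg.norm(A @ x - b)), 3), round(lam, 5))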


    A Novel System for Spectral Analysis of Solar Radiation within a Mixed Beech-Spruce Stand

    PLANT BIOLOGY, Issue 2 2002
    H. Reitmayer
    Abstract: A multi-sensor system is described, based on a 1024 channel diode array spectrometer, to measure spectral radiant flux density in the range of 380 nm to 850 nm, with a resolution of 0.8 nm at a minimum integration time of 16 milliseconds per sensor (noon, clear sky conditions). 264 space-integrating 4π sensors deployed in the canopies and 2 m above stand floor are sequentially connected to the spectrometer by means of 30-m long fibre optics. During low-level conditions (dawn, overcast sky) the system automatically lengthens the integration time of the spectrometer. About 3 sec per sensor, i.e., 13 min for the total of 264 sensors (worst case) are needed to collect spectral energy data, store them on hard disk and move the channel multiplexer to the next fibre optic position. The detection limit of the quartz fibre sensors is 0.2 W/m²; precision and absolute error of radiant flux density are smaller than 3% and 10%, respectively. The system, operating since 1999, is derived from a 20-sensor pilot system developed for PAR measurements (PMMA fibre sensor, 400 nm to 700 nm). Data achieved with the system serve to determine vertical profiles of wavelength-dependent radiation extinction, with special respect to R/FR ratios, and to develop a model of spectral radiation distribution in a mature forest stand, prerequisites for the computation of carbon gain of the stand and the evaluation of stand growth models. [source]


    The simple random walk and max-degree walk on a directed graph

    RANDOM STRUCTURES AND ALGORITHMS, Issue 3 2009
    Ravi Montenegro
    Abstract We bound total variation and L∞ mixing times, spectral gap and magnitudes of the complex-valued eigenvalues of general (nonreversible, nonlazy) Markov chains with a minor expansion property. The resulting bounds for the (nonlazy) simple and max-degree walks on a (directed) graph are of the optimal order. It follows that, within a factor of two or four, the worst case of each of these mixing time and eigenvalue quantities is a walk on a cycle with clockwise drift. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2009 [source]


    A spectral heuristic for bisecting random graphs,

    RANDOM STRUCTURES AND ALGORITHMS, Issue 3 2006
    Amin Coja-Oghlan
    The minimum bisection problem is to partition the vertices of a graph into two classes of equal size so as to minimize the number of crossing edges. Computing a minimum bisection is NP-hard in the worst case. In this paper we study a spectral heuristic for bisecting random graphs Gn(p,p′) with a planted bisection obtained as follows: partition n vertices into two classes of equal size randomly, and then insert edges inside the two classes with probability p′, and edges crossing the partition with probability p independently. If the two probabilities are sufficiently far apart (the precise threshold involves a suitable constant c0), then with probability 1 − o(1) the heuristic finds a minimum bisection of Gn(p,p′) along with a certificate of optimality. Furthermore, we show that the structure of the set of all minimum bisections of Gn(p,p′) undergoes a phase transition as the gap between the two probabilities varies. The spectral heuristic solves instances in the subcritical, the critical, and the supercritical phases of the phase transition optimally with probability 1 − o(1). These results extend previous work of Boppana, Proc. 28th FOCS (1987), 280–285. © 2005 Wiley Periodicals, Inc. Random Struct. Alg., 2006 [source]
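
    In its most generic form (shown below; the paper's algorithm is considerably more refined and additionally certifies optimality), a spectral bisection heuristic splits the vertices according to the eigenvector belonging to the second-largest adjacency eigenvalue. The sketch plants a bisection with assumed probabilities and checks how much of it is recovered.

        import numpy as np

        def spectral_bisect(adj):
            # split into two equal halves by the eigenvector of the 2nd-largest eigenvalue
            vals, vecs = np.linalg.eigh(adj)          # eigenvalues in ascending order
            order = np.argsort(vecs[:, -2])
            side = np.zeros(adj.shape[0], dtype=int)
            side[order[adj.shape[0] // 2:]] = 1
            return side

        def planted_bisection(n, p_in, p_out, rng):
            # random graph whose first and second vertex halves form the planted bisection
            side = np.array([0] * (n // 2) + [1] * (n // 2))
            probs = np.where(side[:, None] == side[None, :], p_in, p_out)
            upper = np.triu(rng.random((n, n)) < probs, 1)
            return (upper | upper.T).astype(float), side

        rng = np.random.default_rng(0)
        adj, truth = planted_bisection(200, p_in=0.30, p_out=0.05, rng=rng)
        guess = spectral_bisect(adj)
        agreement = max(np.mean(guess == truth), np.mean(guess != truth))
        print(round(float(agreement), 2))   # close to 1.0 when the two probabilities differ enough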


    The Random-Facet simplex algorithm on combinatorial cubes,

    RANDOM STRUCTURES AND ALGORITHMS, Issue 3 2002
    Bernd Gärtner
    The RANDOM-FACET algorithm is a randomized variant of the simplex method which is known to solve any linear program with n variables and m constraints using an expected number of pivot steps which is subexponential in both n and m. This is the theoretically fastest simplex algorithm known to date if m ≈ n; it provably beats most of the classical deterministic variants, which require exp(Ω(n)) pivot steps in the worst case. RANDOM-FACET has independently been discovered and analyzed ten years ago by Kalai as a variant of the primal simplex method, and by Matoušek, Sharir, and Welzl in a dual form. The essential ideas and results connected to RANDOM-FACET can be presented in a particularly simple and instructive way for the case of linear programs over combinatorial n-cubes. I derive an explicit upper bound on the expected number of pivot steps in this case, using a new technique of "fingerprinting" pivot steps. This bound also holds for generalized linear programs, similar flavors of which have been introduced and studied by several researchers. I then review an interesting class of generalized linear programs, due to Matoušek, showing that RANDOM-FACET may indeed require a superpolynomial expected number of pivot steps in the worst case. The main new result of the paper is a proof that all actual linear programs in Matoušek's class are solved by RANDOM-FACET with an expected polynomial number of pivot steps. This proof exploits a combinatorial property of linear programming which has only recently been discovered by Holt and Klee. The result establishes the first scenario in which an algorithm that works for generalized linear programs "recognizes" proper linear programs. Thus, despite Matoušek's worst-case result, the question remains open whether RANDOM-FACET (or any other simplex variant) is a polynomial-time algorithm for linear programming. Finally, I briefly discuss extensions of the combinatorial cube results to the general case. © 2002 Wiley Periodicals, Inc. Random Struct. Alg., 20:353–381, 2002 [source]


    Design and Test of a Vascular Access Device

    ARTIFICIAL ORGANS, Issue 5 2000
    Gijsbertus Jacob Verkerke
    Abstract: Transarterial left ventricular assist devices (LVADs), such as the Hemopump, IABP, and PUCA-pump, are meant to be introduced into the body via the femoral or axillary artery without major surgery. For certain applications, introduction is performed directly into the aorta via an open thorax procedure. A prototype of a vascular access device has been realized that allows direct access into the aorta as an alternative for the common surgical graft anastomosis suturing technique. The device consists of a metal tube acting as a circular knife to cut a hole in the aortic wall, a screw to store the removed part of the aortic wall, and a plastic tube that is introduced through the hole and tightly connected to the aortic wall. The device could be placed without aortic clamping. The device has been tested on a slaughterhouse porcine aorta. A low-pressurized aorta appeared to be the worst case; thus, two animal experiments in the low-pressurized pulmonary artery were performed. No leakage occurred for pressures between 40 and 300 mm Hg. [source]


    EXACT P -VALUES FOR DISCRETE MODELS OBTAINED BY ESTIMATION AND MAXIMIZATION

    AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 4 2008
    Chris J. Lloyd
    Summary In constructing exact tests from discrete data, one must deal with the possible dependence of the P-value on nuisance parameter(s), as well as the discreteness of the sample space. A classical but heavy-handed approach is to maximize over the nuisance parameter. We prove what has previously been understood informally, namely that maximization produces the unique and smallest possible P-value subject to the ordering induced by the underlying test statistic and test validity. On the other hand, allowing for the worst case will be more attractive when the P-value is less dependent on the nuisance parameter. We investigate the extent to which estimating the nuisance parameter under the null reduces this dependence. An approach somewhere between full maximization and estimation is partial maximization, with appropriate penalty, as introduced by Berger & Boos (1994, P values maximized over a confidence set for the nuisance parameter. J. Amer. Statist. Assoc., 89, 1012–1016). It is argued that estimation followed by maximization is an attractive, but computationally more demanding, alternative to partial maximization. We illustrate the ideas on a range of low-dimensional but important examples for which the alternative methods can be investigated completely numerically. [source]
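
    The maximization step can be made concrete for the textbook case of comparing two binomial proportions (an illustration only, not taken from the paper; the statistic, grid resolution, and data are assumptions): the test statistic orders the discrete sample space, the exact tail probability of the observed ordering is computed for each value of the common success probability, i.e. the nuisance parameter, and the reported P-value is the maximum over a grid. Estimation would instead plug in the pooled estimate, and the Berger & Boos approach maximizes only over a confidence set.

        from math import comb

        def binom_pmf(k, n, p):
            return comb(n, k) * p**k * (1 - p)**(n - k)

        def stat(y1, y2, n1, n2):
            # test statistic ordering the sample space: difference in proportions
            return y1 / n1 - y2 / n2

        def maximized_exact_pvalue(y1, y2, n1, n2, grid=201):
            # P-value for H0: p1 = p2 against p1 > p2, maximized over the common
            # success probability p (the nuisance parameter under the null)
            t_obs = stat(y1, y2, n1, n2)
            extreme = [(a, b) for a in range(n1 + 1) for b in range(n2 + 1)
                       if stat(a, b, n1, n2) >= t_obs]
            def tail(p):
                return sum(binom_pmf(a, n1, p) * binom_pmf(b, n2, p) for a, b in extreme)
            return max(tail(i / (grid - 1)) for i in range(grid))

        print(round(maximized_exact_pvalue(9, 3, 12, 12), 4))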


    One-step RNA pathogen detection with reverse transcriptase activity of a mutated thermostable Thermus aquaticus DNA polymerase

    BIOTECHNOLOGY JOURNAL, Issue 2 2010
    Ramon Kranaster
    Abstract We describe the cloning and characterization of a mutated thermostable DNA polymerase from Thermus aquaticus (Taq) that exhibits an increased reverse transcriptase activity and is therefore designated for one-step PCR pathogen detection using established real-time detection methods. We demonstrate that this Taq polymerase mutant (Taq M1) has similar PCR sensitivity and nuclease activity as the respective Taq wild-type DNA polymerase. In addition, and in marked contrast to the wild-type, Taq M1 exhibits a significantly increased reverse transcriptase activity especially at high temperatures (>60°C). RNA generally hosts highly stable secondary structure motifs, such as hairpins and G-quadruplexes, which complicate, or in the worst case obviate, reverse transcription (RT). Thus, RT at high temperatures is desired to weaken or melt secondary structure motifs. To demonstrate the ability of Taq M1 for RNA detection of pathogens, we performed TaqMan probe-based diagnostics of Dobrava viruses by one-step RT-PCR. We found similar detection sensitivities compared to commercially available RT-PCR systems without further optimization of reaction parameters, thus making this enzyme highly suitable for any PCR probe-based RNA detection method. [source]


    10-year prevalence of contact allergy in the general population in Denmark estimated through the CE-DUR method

    CONTACT DERMATITIS, Issue 4 2007
    Jacob Pontoppidan Thyssen
    The prevalence of contact allergy in the general population has traditionally been investigated through population-based epidemiological studies. A different approach is the combination of clinical epidemiological (CE) data and the World Health Organization-defined drug utilization research (DUR) method. The CE-DUR method was applied in Denmark to estimate the prevalence of contact allergy in the general population and compare it with the prevalence estimates from the Glostrup allergy studies. Contact allergy prevalence estimates ranging from very liberal ('worst case') to conservative ('best case') assumptions were based on patch test reading data in combination with an estimate of the number of persons eligible for patch testing each year based on sales data of the 'standard series'. The estimated 10-year prevalence of contact allergy ranged between 7.3% and 12.9% for adult Danes older than 18 years. The 10-year prevalence of contact allergy measured by CE-DUR was slightly lower than previous prevalence estimates from the Glostrup allergy studies. This could probably be explained by a decrease in nickel allergy. The CE-DUR approach holds the potential of being an efficient and easy monitoring method of contact allergy prevalence. [source]


    International Prosecutions and Domestic Politics: The Use of Truth Commissions as Compromise Justice in Serbia and Croatia

    INTERNATIONAL STUDIES REVIEW, Issue 4 2009
    Brian Grodsky
    Since the end of the Cold War, increased efforts at international criminal justice have begun to transform transitional justice for the worst cases of atrocities from a predominantly domestic affair to an international one. I examine side-effects of international pressure for criminal justice, arguing that political elites struggling to balance conflicting international and domestic demands may launch "compromise justice" policies designed to satisfy both, but which in effect weaken mechanisms that transitional justice scholars posit make postconflict reconciliation most likely. I apply this argument to the former Yugoslavia, examining Serbian and Croatian truth commissions as a form of "compromise justice." [source]


    Sexual Abuse of Boys

    JOURNAL OF CHILD AND ADOLESCENT PSYCHIATRIC NURSING, Issue 1 2005
    Sharon M. Valente RN
    TOPIC: Sexual abuse in childhood can disable self-esteem, self-concept, relationships, and ability to trust. It can also leave psychological trauma that compromises a boy's confidence in adults. While some boys who willingly participate may adjust to sexual abuse, many others face complications, such as reduced quality of life, impaired social relationships, less than optimal daily functioning, and self-destructive behavior. These problems can respond to treatment if detected. PURPOSE: In this paper, we examine the prevalence, characteristics, psychological consequences, treatment, and coping patterns of boys who have been sexually abused and their failure to disclose abuse unless asked during a therapeutic encounter. Nurses have a responsibility to detect the clues to sexual abuse, diagnose the psychological consequences, and advocate for protection and treatment. SOURCES USED: Computerized literature search of the Medline and PsychInfo literature and books on sexual abuse of boys. CONCLUSIONS: Psychological responses to abuse such as anxiety, denial, self-hypnosis, dissociation, and self-mutilation are common. Coping strategies may include being the angry avenger, the passive victim, rescuer, daredevil, or conformist. Sexual abuse may precipitate runaway behavior, chronic use of sick days, poor school or job performance, and costly medical, emergency, and/or mental health visits. In the worst cases, the boy may decide that life is not worth living and plan suicide. The nurse has a key role to play in screening, assessing, and treating sexually abused children. [source]