Failure Probability (failure + probability)

Selected Abstracts


Design of a composite beam using the failure probability-safety factor method

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 9 2005
E. Castillo
Abstract The paper shows the practical importance of the failure probability-safety factor method for designing engineering works. The method provides an automatic design tool by optimizing an objective function subject to the standard geometric and code constraints, plus two further sets of constraints that guarantee given safety factors and failure probability bounds associated with a given set of failure modes. Since a direct solution of the optimization problem is not possible, the method proceeds as a sequence of three steps: (a) an optimal classical design, based on given safety factors, is carried out; (b) failure probabilities or bounds for all failure modes are calculated; and (c) the safety factor bounds are adjusted. This implies a double safety check that leads to safer structures and to designs less sensitive to wrong or unrealistic probability assumptions and to excessively small (unsafe) or large (costly) safety factors. Finally, the actual global or combined probabilities of the different failure modes and their correlation are calculated using Monte Carlo simulation. In addition, a sensitivity analysis is performed. To this end, the optimization problems are transformed into equivalent ones in which the data parameters are converted into artificial variables; some variables of the associated dual problems then become the desired sensitivities. The method is illustrated by its application to the design of a composite beam. Copyright 2004 © John Wiley & Sons, Ltd. [source]
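
A minimal sketch of the double safety check described above, using a hypothetical one-variable resistance/load limit state rather than the paper's composite-beam model; the distributions, the safety-factor update rule and the target probability are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_probability(area, n_samples=200_000):
    """Monte Carlo estimate of P[R < S] for an assumed capacity/demand pair."""
    strength = rng.lognormal(mean=np.log(250.0), sigma=0.08, size=n_samples)  # material strength
    demand = rng.gumbel(loc=30_000.0, scale=4_000.0, size=n_samples)          # load effect
    return np.mean(area * strength < demand)

def classical_design(safety_factor, nominal_load=30_000.0, nominal_strength=250.0):
    """Step (a): size the member so nominal capacity >= safety factor * nominal load."""
    return safety_factor * nominal_load / nominal_strength

gamma, p_max = 1.5, 1e-3
for _ in range(20):
    area = classical_design(gamma)      # (a) classical design for the current safety factor
    pf = failure_probability(area)      # (b) failure probability of that design
    if pf <= p_max:                     # (c) stop once both criteria are met,
        break                           #     otherwise tighten the safety factor bound
    gamma *= 1.05
print(f"area = {area:.1f}, safety factor = {gamma:.2f}, Pf ~ {pf:.1e}")
```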


Set theoretic formulation of performance reliability of multiple response time-variant systems due to degradations in system components

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 2 2007
Young Kap Son
Abstract This paper presents a design-stage method for assessing the performance reliability of systems with multiple time-variant responses subject to component degradation. The system component degradation profiles over time are assumed known, and the degradation of the system is related to component degradation using mechanistic models. Selected performance measures (e.g. responses) are related to their critical levels by time-dependent limit-state functions. System failure is defined as the non-conformance of any response, so unions of the multiple failure regions are required. For discrete time, set theory establishes the minimum union size needed to identify a true incremental failure region. A cumulative failure distribution function is built by summing incremental failure probabilities. A practical implementation of the theory approximates the probability of the unions by second-order bounds and, for numerical efficiency, evaluates probabilities by first-order reliability methods (FORM). The presented method is quite different from Monte Carlo sampling methods. The proposed method can be used to assess mean and tolerance design through simultaneous evaluation of quality and performance reliability. The work herein sets the foundation for an optimization method to control both quality and performance reliability and thus, for example, to estimate warranty costs and product recalls. An example from power engineering shows the details of the proposed method and the potential of the approach. Copyright © 2006 John Wiley & Sons, Ltd. [source]
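
The cumulative-failure construction can be illustrated with a crude Monte Carlo stand-in for the FORM and second-order-bound machinery: a hypothetical two-response system whose component property degrades along a known profile, with the incremental failure region accumulated at each discrete time step. All numbers and response functions below are assumptions, not the paper's power-engineering example:

```python
import numpy as np

rng = np.random.default_rng(1)

n_samples = 100_000
times = np.linspace(0.0, 10.0, 41)               # discrete design life, arbitrary units
k0 = rng.normal(1.0, 0.05, n_samples)            # initial component property
rate = rng.normal(0.02, 0.005, n_samples)        # known (sampled) degradation rate

def nonconforming(k):
    """True where either response violates its critical level (union of failure regions)."""
    r1 = 5.0 * k          # response 1 must stay above 4.2
    r2 = 1.0 / k          # response 2 must stay below 1.3
    return (r1 < 4.2) | (r2 > 1.3)

failed = np.zeros(n_samples, dtype=bool)
cdf = []
for t in times:
    k_t = k0 - rate * t                          # component degradation profile over time
    newly = nonconforming(k_t) & ~failed         # incremental failure region at this step
    failed |= newly
    cdf.append(failed.mean())                    # running cumulative failure distribution F(t)

for t, f in zip(times[::10], cdf[::10]):
    print(f"t = {t:4.1f}   F(t) ~ {f:.4f}")
```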


Routing complexity of faulty networks

RANDOM STRUCTURES AND ALGORITHMS, Issue 1 2008
Omer Angel
Abstract One of the fundamental problems in distributed computing is how to efficiently perform routing in a faulty network in which each link fails with some probability. This article investigates how large the failure probability can be before the capability to efficiently find a path in the network is lost. Our main results show tight upper and lower bounds on the failure probability that permits routing, both for the hypercube and for the d-dimensional mesh. We use tools from percolation theory to show that in the d-dimensional mesh, once a giant component appears, efficient routing is possible. A different behavior is observed for the hypercube: there is a range of failure probabilities in which short paths exist with high probability, yet finding them must involve querying essentially the entire network. Thus the routing complexity of the hypercube shows an asymptotic phase transition. The critical probability with respect to routing complexity lies in a different location than the critical probability with respect to connectivity. Finally, we show that oracle access to links (as opposed to local routing) may significantly reduce the complexity of the routing problem. We demonstrate this by providing tight upper and lower bounds for the complexity of routing in the random graph G(n,p). © 2007 Wiley Periodicals, Inc. Random Struct. Alg., 2008 [source]
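
The threshold behaviour can be explored empirically with a small Monte Carlo experiment: delete each hypercube edge independently with probability p and check whether a path between antipodal vertices survives. This only probes connectivity (via breadth-first search with full knowledge of the failed links), not the query complexity of local routing that the paper analyses:

```python
import random
from collections import deque

def hypercube_route_survives(dim, p_fail, rng):
    """BFS from vertex 0 to the antipodal vertex after each edge fails with prob p_fail."""
    n = 1 << dim
    alive = {}
    for u in range(n):
        for b in range(dim):
            v = u ^ (1 << b)
            if u < v:
                alive[(u, v)] = rng.random() >= p_fail   # edge survives with prob 1 - p_fail
    target, seen, queue = n - 1, {0}, deque([0])
    while queue:
        u = queue.popleft()
        if u == target:
            return True
        for b in range(dim):
            v = u ^ (1 << b)
            if v not in seen and alive[(min(u, v), max(u, v))]:
                seen.add(v)
                queue.append(v)
    return False

rng = random.Random(0)
for p in (0.1, 0.3, 0.5, 0.7):
    ok = sum(hypercube_route_survives(10, p, rng) for _ in range(200))
    print(f"p_fail = {p:.1f}: path found in {ok}/200 trials")
```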


Ground motion duration effects on nonlinear seismic response

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 1 2006
Iunio Iervolino
Abstract The study presented in this paper addresses, through statistical analyses of several case studies, the question of which nonlinear demand measures are sensitive to ground motion duration. A number of single-degree-of-freedom (SDOF) structures were selected considering: (1) four oscillation periods; (2) three evolutionary and non-evolutionary hysteretic behaviours; (3) two target ductility levels. Effects of duration are investigated, by nonlinear dynamic analysis, with respect to six different demand indices ranging from displacement ductility ratio to equivalent number of cycles. The input consists of six sets of real accelerograms representing three duration scenarios (small, moderate and large). For all considered demand quantities, time-history results are formally compared by statistical hypothesis tests to assess any difference in demand between scenarios. Incremental dynamic analysis curves are used to evaluate the duration effect as a function of ground motion intensity (e.g. spectral acceleration at the SDOF's oscillation period). The impact of duration on structural failure probability is evaluated by fragility curves. The results lead to the conclusion that the duration content of ground motion is statistically insignificant for displacement ductility and cyclic ductility demand. The conclusions hold regardless of the SDOF period and hysteretic relationship investigated. Copyright © 2005 John Wiley & Sons, Ltd. [source]
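
The hypothesis-testing step can be sketched as follows, with synthetic stand-ins for the demand samples obtained from short- and long-duration record sets; a two-sample t-test on log-demands is one plausible choice, since the abstract does not specify which test was used:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic stand-ins for a demand index (e.g. displacement ductility) computed from
# nonlinear time-history analyses under two duration scenarios; not the paper's data.
ductility_short = rng.lognormal(mean=np.log(3.0), sigma=0.25, size=30)
ductility_long = rng.lognormal(mean=np.log(3.1), sigma=0.25, size=30)

# Two-sample test on the log-demands: is the difference between scenarios significant?
t_stat, p_value = stats.ttest_ind(np.log(ductility_short), np.log(ductility_long))
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("No statistically significant duration effect on this demand index.")
else:
    print("Duration effect is statistically significant for this demand index.")
```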


Reliability modelling of uninterruptible power supply systems using fault tree analysis method

EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 6 2009
Mohd Khairil Rahmat
Abstract The aim of this paper is to investigate a reliability parameter estimation method for uninterruptible power supply (UPS) systems using the Fault Tree Analysis (FTA) technique. FTA is a top-down approach to identifying all potential causes leading to system failure. The computation of the system's failure probability is the main goal of this analysis, as this value can be used to calculate other important system reliability parameters such as failure rate, mean time between failures and reliability. In this paper, the FTA method was applied to five different UPS topologies and the results obtained were compared and discussed in detail. By comparing the critical fault paths of the systems, it was found that inverter failures contributed most significantly to system failure. It was also found that the probability of failure of a UPS system can be reduced by the inclusion of a bypass supply, provided that the failure rates of the events causing failure of the bypass supply are lower than those of the main utility supply. Finally, to validate the results obtained from this method, comparisons were made with the results of other methods such as the Reliability Block Diagram, Boolean Truth Table, Probability Tree, Monte Carlo Simulation and Field Data reliability estimation methods. Copyright © 2008 John Wiley & Sons, Ltd. [source]
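
A minimal fault-tree evaluation with independent basic events shows how a bypass supply enters the top-event probability. The gate structure, component names and probabilities below are hypothetical, not the five UPS topologies or failure rates analysed in the paper:

```python
# Minimal fault-tree evaluation with independent basic events (hypothetical numbers).
def p_or(*probs):
    """OR gate: the output fails if any input fails (independent events)."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*probs):
    """AND gate: the output fails only if all inputs fail (independent events)."""
    q = 1.0
    for p in probs:
        q *= p
    return q

p_utility, p_rectifier, p_battery, p_inverter = 1e-2, 1e-3, 2e-3, 5e-3
p_bypass_feed, p_static_switch = 1e-2, 1e-3   # bypass feed assumed independent of the main feed

# Double-conversion path: the DC bus is lost if the input side (utility or rectifier)
# fails AND the battery fails; the load is lost if the inverter or the DC bus fails.
p_dc_bus = p_and(p_or(p_utility, p_rectifier), p_battery)
p_main_path = p_or(p_inverter, p_dc_bus)

# With a bypass, the load is lost only if the main path AND the bypass path both fail.
p_bypass_path = p_or(p_bypass_feed, p_static_switch)
p_with_bypass = p_and(p_main_path, p_bypass_path)

print(f"P(load lost), no bypass   ~ {p_main_path:.2e}")
print(f"P(load lost), with bypass ~ {p_with_bypass:.2e}")
```

With these illustrative numbers the inverter term dominates the main path, and adding the bypass lowers the top-event probability by roughly two orders of magnitude, which mirrors the qualitative findings reported in the abstract.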


Rapid risk assessment using probability of fracture nomographs

FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 11 2009
R. PENMETSA
ABSTRACT The traditional risk-based design process involves designing the structure based on risk estimates obtained during several iterations of an optimization routine. This approach is computationally expensive for large-scale aircraft structural systems. Therefore, this paper introduces the concept of risk-based design plots that can be used for both structural sizing and risk assessment of fracture strength when the maximum allowable crack length is available. When the crack length is instead defined by a probability distribution, the presented approach can only be applied at selected percentiles of crack length. These plots are obtained using normalized probability density models of load and material properties and are applicable for arbitrary load and strength values. Risk-based design plots serve as a tool for failure probability assessment given geometry and applied load, or they can determine geometric constraints to be used in sizing given an allowable failure probability. This approach would transform a reliability-based optimization problem into a deterministic optimization problem with geometric constraints that implicitly incorporate risk into the design. In this paper, a cracked flat plate and a stiffened plate are used to demonstrate the methodology and its applicability. [source]
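
One slice of such a nomograph can be sketched with a first-order normal approximation of the fracture margin K_IC − Yσ√(πa) for a centre-cracked plate. The load and toughness statistics, and the unit geometry factor Y = 1, are assumptions for illustration rather than values from the paper:

```python
import numpy as np
from scipy import stats

def fracture_failure_probability(a_m, stress_mean=150.0, stress_std=20.0,
                                 kic_mean=60.0, kic_std=5.0):
    """P[K_applied >= K_IC] for an assumed centre-cracked plate, Y = 1 (illustrative)."""
    k_mean = stress_mean * np.sqrt(np.pi * a_m)   # applied stress intensity, MPa*sqrt(m)
    k_std = stress_std * np.sqrt(np.pi * a_m)
    # Margin M = K_IC - K; both treated as normal, so Pf = Phi(-mu_M / sigma_M).
    mu_m = kic_mean - k_mean
    sigma_m = np.hypot(kic_std, k_std)
    return stats.norm.cdf(-mu_m / sigma_m)

# One slice of a would-be nomograph: failure probability versus allowable crack length.
for a in (0.005, 0.01, 0.02, 0.04):
    print(f"a = {1000 * a:4.0f} mm   Pf ~ {fracture_failure_probability(a):.2e}")
```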


A statistical model for cleavage fracture in notched specimens of C–Mn steel

FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 7 2001
G. Z. Wang
A statistical model for cleavage fracture in notched specimens of C–Mn steel has been proposed, based on a recently suggested physical model. The statistical model satisfactorily describes the distributions of the cumulative failure probability and the failure probability density of 36 notched specimens fractured at various loads at a test temperature of −196 °C. The minimum notch toughness is also discussed. [source]
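
A two-parameter Weibull fit to fracture loads gives the kind of cumulative failure probability curve such a model is compared against. The synthetic loads below merely stand in for the 36 notched-specimen results, and the fit is not the paper's physically based cleavage model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic fracture loads (kN) standing in for the 36 notched-specimen results.
loads = rng.weibull(a=8.0, size=36) * 40.0

# Two-parameter Weibull fit (location fixed at zero) by maximum likelihood.
m, _, scale = stats.weibull_min.fit(loads, floc=0)

def cumulative_failure_probability(load):
    """P_F(load) = 1 - exp(-(load / scale)**m)."""
    return 1.0 - np.exp(-(load / scale) ** m)

print(f"Weibull modulus m ~ {m:.1f}, characteristic load ~ {scale:.1f} kN")
for load in (25.0, 32.0, 38.0, 44.0):
    print(f"load = {load:.0f} kN   P_F ~ {cumulative_failure_probability(load):.3f}")
```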


High-dimensional model representation for structural reliability analysis

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 4 2009
Rajib Chowdhury
Abstract This paper presents a new computational tool for predicting the failure probability of structural/mechanical systems subject to random loads, material properties, and geometry. The method involves high-dimensional model representation (HDMR), which facilitates a lower-dimensional approximation of the original high-dimensional implicit limit-state/performance function, response-surface generation of the HDMR component functions, and Monte Carlo simulation. HDMR is a general set of quantitative model assessment and analysis tools for capturing high-dimensional relationships between sets of input and output model variables. It provides a very efficient formulation of the system response when higher-order variable correlations are weak, allowing the physical model to be captured by the first few lower-order terms. Once the approximate form of the original implicit limit-state/performance function is defined, the failure probability can be obtained by statistical simulation. Results of nine numerical examples involving mathematical functions and structural mechanics problems indicate that the proposed method provides accurate and computationally efficient estimates of the probability of failure. Copyright © 2008 John Wiley & Sons, Ltd. [source]
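
A first-order cut-HDMR surrogate can be sketched on a toy two-variable limit state (imagine each call to g as an expensive finite-element run); the function, cut point and distributions are illustrative, not one of the paper's nine examples:

```python
import numpy as np

rng = np.random.default_rng(5)

def g(x1, x2):
    """Toy implicit limit state; failure when g < 0."""
    return 3.0 - 0.1 * (x1 - x2) ** 2 + (x1 + x2) / np.sqrt(2.0) - 2.5

mean = np.array([0.0, 0.0])          # cut point c: the input means
grid = np.linspace(-5.0, 5.0, 11)    # sample points along each variable axis

# First-order cut-HDMR: g(x) ~ g0 + sum_i [g(c with x_i varied) - g0],
# with each univariate component stored on a grid and interpolated later.
g0 = g(*mean)
component = []
for i in range(2):
    pts = np.tile(mean, (grid.size, 1))
    pts[:, i] = grid
    component.append(g(pts[:, 0], pts[:, 1]) - g0)

def g_hdmr(x):
    """Evaluate the first-order HDMR surrogate at sample points x of shape (n, 2)."""
    out = np.full(x.shape[0], g0)
    for i in range(2):
        out += np.interp(x[:, i], grid, component[i])
    return out

# Monte Carlo on the cheap surrogate: failure when the limit state goes negative.
x = rng.normal(0.0, 1.0, size=(1_000_000, 2))
pf_surrogate = np.mean(g_hdmr(x) < 0.0)
pf_direct = np.mean(g(x[:, 0], x[:, 1]) < 0.0)
print(f"Pf (HDMR surrogate) ~ {pf_surrogate:.4f}, Pf (direct) ~ {pf_direct:.4f}")
```

The first-order surrogate drops the x1·x2 cross term of the toy function, so the two estimates differ slightly; this is exactly the kind of error that stays small when higher-order variable correlations are weak.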


Soft real-time communications over Bluetooth under interferences from ISM devices

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 10 2006
J. L. Sevillano
Abstract Bluetooth is a suitable technology for supporting soft real-time applications such as multimedia streams at the personal area network level. In this paper, we analytically evaluate the worst-case deadline failure probability of Bluetooth packets under co-channel interference as a way to provide statistical guarantees when transmitting soft real-time traffic over ACL links. We consider interference from independent Bluetooth devices, as well as from other devices operating in the ISM band such as 802.11b/g and ZigBee. Finally, we show as an example how to use our model to obtain results for the transmission of a voice stream. Copyright © 2006 John Wiley & Sons, Ltd. [source]
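
A back-of-envelope version of such a deadline-failure bound, under strong simplifying assumptions (79 hop channels, independent interferers, a packet lost whenever any interferer lands on its channel, losses independent across retransmissions), can look like this; it is not the paper's analytical model:

```python
N_CHANNELS = 79  # Bluetooth hop channels in the 2.4 GHz ISM band

def packet_loss_probability(n_bt_interferers, p_wlan_overlap=0.0):
    """Probability that one (re)transmission is hit by at least one interferer."""
    p_clear_bt = (1.0 - 1.0 / N_CHANNELS) ** n_bt_interferers
    return 1.0 - p_clear_bt * (1.0 - p_wlan_overlap)

def deadline_failure_probability(n_bt_interferers, retx_budget, p_wlan_overlap=0.0):
    """Worst case: the deadline is missed only if every allowed (re)transmission is lost."""
    p_loss = packet_loss_probability(n_bt_interferers, p_wlan_overlap)
    return p_loss ** (retx_budget + 1)

# Assume an always-on WLAN overlapping roughly a quarter of the hop channels.
for n in (1, 3, 5):
    for retx in (0, 1, 2):
        p = deadline_failure_probability(n, retx, p_wlan_overlap=0.25)
        print(f"{n} BT interferers, {retx} retransmissions: P(miss deadline) ~ {p:.3e}")
```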


Lifetime prediction of CAD/CAM dental ceramics

JOURNAL OF BIOMEDICAL MATERIALS RESEARCH, Issue 6 2002
Ulrich Lohbauer
Abstract The dynamic fatigue method was used to obtain the subcritical crack growth parameters n and A for a commercial feldspathic dental porcelain and for a lanthanum-glass-infiltrated alumina glass ceramic. Five stress rates dσ/dt, ranging from 50 to 0.01 MPa s−1, were applied. The inert strength values were calculated using Weibull statistics and maximum-likelihood estimates of the Weibull parameter m. Strength-probability-time (SPT) diagrams were derived for both materials. The alumina glass composite showed a high fracture strength σ0 (442 MPa) at a failure probability of PF = 63.2% and a high resistance against subcritical crack growth (n = 36.5). The development of strength under fatigue conditions was calculated for an exemplary period of 1 year; the strength of the alumina glass material dropped to 228 MPa within this period, a consequence of the low content of infiltrated lanthanum glass phase in the composite material (25 wt%). In contrast, for the high-silica-glass-containing porcelain a distinct decrease of strength σ0 from an initial 133 MPa to 47 MPa after 1 year was predicted, mainly because of the low crack growth resistance (n = 16.8) of the feldspathic porcelain. Much lower strength values were calculated assuming a failure probability of PF = 5%. The decrease is mainly caused by the sensitivity of ceramics with high glass content to water corrosion. © 2002 Wiley Periodicals, Inc. J Biomed Mater Res (Appl Biomater) 63: 780–785, 2002 [source]
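
The strength-versus-time prediction follows the usual subcritical-crack-growth scaling, strength ∝ t^(−1/n) at fixed failure probability. The sketch below applies that scaling with an assumed 1 s reference lifetime, so its outputs only roughly track the one-year figures quoted in the abstract:

```python
# Illustrative strength-probability-time (SPT) scaling from subcritical crack growth:
# sigma(t) = sigma_ref * (t_ref / t)**(1/n) at a fixed failure probability.
# The reference lifetime t_ref is an assumed value, not taken from the paper.
SECONDS_PER_YEAR = 3.156e7

def strength_after(t_seconds, sigma_ref_mpa, n, t_ref_seconds=1.0):
    """Equivalent strength at lifetime t for crack-growth exponent n."""
    return sigma_ref_mpa * (t_ref_seconds / t_seconds) ** (1.0 / n)

materials = {
    "alumina glass ceramic (n = 36.5, sigma_0 = 442 MPa)": (442.0, 36.5),
    "feldspathic porcelain (n = 16.8, sigma_0 = 133 MPa)": (133.0, 16.8),
}
for name, (sigma0, n) in materials.items():
    s = strength_after(SECONDS_PER_YEAR, sigma0, n)
    print(f"{name}: strength after 1 year ~ {s:.0f} MPa")
```

With these assumptions the porcelain value lands near the 47 MPa quoted above, while the alumina figure decays less because of its larger crack-growth exponent n.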


Disruption-management strategies for short life-cycle products

NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 4 2009
Brian Tomlin
Abstract Supplier diversification, contingent sourcing, and demand switching (whereby a firm shifts customers to a different product if their preferred product is unavailable) are key building blocks of a disruption-management strategy for firms that sell multiple products over a single season. In this article, we evaluate 12 possible disruption-management strategies (combinations of the basic building-block tactics) in the context of a two-product newsvendor. We investigate the influence of nine attributes of the firm, its supplier(s), and its products on the firm's preference for the various strategies. These attributes include supplier reliability, supplier failure correlation, payment responsibility in the event of a supply failure, product contribution margin, product substitutability, demand uncertainties and correlation, and the decision maker's risk aversion. Our results show that contingent sourcing is preferred to supplier diversification as the supply risk (failure probability) increases, but diversification is preferred to contingent sourcing as the demand risk (demand uncertainty) increases. We find that demand switching is not effective at managing supply risk if the products are sourced from the same set of suppliers. Demand switching is effective at managing demand risk and so can be preferred to the other tactics if supply risk is low. Risk aversion makes contingent sourcing preferable over a wider set of supply- and demand-risk combinations. We also find that a two-tactic strategy provides almost the same benefit as a three-tactic strategy for most reasonable supply- and demand-risk combinations. © 2009 Wiley Periodicals, Inc. Naval Research Logistics, 2009 [source]


Reliability of computational science

NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 4 2007
I. Babuška
Abstract Today's computers allow us to simulate large, complex physical problems. Often the mathematical models describing such problems are based on a relatively small amount of available information, such as experimental measurements. The question arises whether the computed data can be used as the basis for decisions in critical engineering, economic, and medical applications. A representative list of engineering accidents that have occurred in past years, and their causes, illustrates the question. The paper describes a general framework for verification and validation (V&V) that addresses this question. The framework is then applied to an illustrative engineering problem in which the basis for decision is a specific quantity of interest, namely the probability that the quantity does not exceed a given value. The V&V framework is applied and explained in detail. The result of the analysis is the computation of the failure probability as well as a quantification of the confidence in the computation, depending on the amount of available experimental data. © 2007 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 23: 753–784, 2007 [source]

