Selected Abstracts

Artificial neural network modeling of O2 separation from air in a hollow fiber membrane module. ASIA-PACIFIC JOURNAL OF CHEMICAL ENGINEERING, Issue 4, 2008. S. S. Madaeni.

Abstract: In this study, artificial neural network (ANN) modeling of a hollow fiber membrane module for the separation of oxygen from air was conducted. The network inputs were the feed rate, the transmembrane pressure, the membrane surface area, and the membrane permeabilities for the constituents present in the feed. The outputs were the permeate flow rate from the membrane, the amount of N2 in the retentate (remaining) flow, and the amount of O2 in the permeate flow. Data were obtained from software developed by the Research Institute of Petroleum Industry (RIPI); part of the data generated by this software was confirmed by experimental results available in the literature. Two-thirds of the data were employed for training the ANNs. Among the training algorithms considered, the radial basis function (RBF) network gave the minimum training error and was selected as the best network. The generalization capability of the best RBF network was checked against the remaining one-third of the data, which the network had not seen; it predicted these new data well, indicating excellent network performance. The developed model can be used for optimization and online control. Copyright © 2008 Curtin University of Technology and John Wiley & Sons, Ltd. [source]

Interactive Web-based package for computer-aided learning of structural behavior. COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 3, 2002. X. F. Yuan.

Abstract: This paper presents an innovative Web-based package named CALSB for computer-aided learning of structural behavior. The package was designed to be widely accessible through the Internet, user-friendly through the automation of many input functions and the extensive use of cursor movements, and dynamically interactive by linking all input and output data to a single graphical display on the screen.
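The RBF surrogate modelling described in the membrane-separation abstract above can be sketched in a few lines of NumPy. Everything below (the toy inputs, the stand-in response surface, the kernel width) is invented for illustration; it is not RIPI's data or network:

```python
import numpy as np

def rbf_fit(X, y, centers, width):
    """Solve for RBF output weights by linear least squares (Gaussian kernels)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    Phi = np.exp(-d**2 / (2.0 * width**2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, width, w):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-d**2 / (2.0 * width**2)) @ w

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(90, 3))          # toy feed rate, pressure, area
y = np.sin(2.0 * X[:, 0]) + X[:, 1] * X[:, 2]    # stand-in for permeate O2 response

X_train, y_train = X[:60], y[:60]                # two-thirds for training
X_test, y_test = X[60:], y[60:]                  # one-third held back, unseen

w = rbf_fit(X_train, y_train, X_train, width=0.5)  # one kernel per training point
err = np.max(np.abs(rbf_predict(X_test, X_train, 0.5, w) - y_test))
```

Centering one Gaussian kernel on each training point and solving a linear least-squares problem for the output weights is the simplest RBF variant; the two-thirds/one-third split mirrors the train/test protocol in the abstract.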
The package includes an analysis engine based on the matrix stiffness method, so the response of any two-dimensional skeletal structure can be predicted and graphically displayed. The package thus provides a virtual laboratory environment in which users can "build and test" two-dimensional skeletal structures of unlimited variety to enhance their understanding of structural behavior. In addition, the package includes two other innovative features: structural games and structural paradoxes. The structural games represent perhaps the first attempt at intentionally combining the learning of structural behavior with joy and excitement, while the structural paradoxes provide a stimulating environment conducive to the development of the user's creative problem-solving skills. © 2002 Wiley Periodicals, Inc. Comput Appl Eng Educ 10: 121–136, 2002; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.10020 [source]

Optimization of integrated Earth System Model components using Grid-enabled data management and computation. CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2, 2007. A. R. Price.

Abstract: In this paper, we present the Grid-enabled data management system that has been deployed for the Grid ENabled Integrated Earth system model (GENIE) project. The database system is an augmented version of the Geodise Database Toolbox and provides a repository for scripts, binaries and output data in the GENIE framework. By exploiting the functionality available in the Geodise toolboxes, we demonstrate how the database can be employed to tune parameters of coupled GENIE Earth System Model components to improve their match with observational data. A Matlab client provides a common environment for the project Virtual Organization and allows the scripting of bespoke tuning studies that can exploit multiple heterogeneous computational resources.
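CALSB's analysis engine is the classical matrix stiffness method. A minimal sketch for 2D pin-jointed (truss) structures, with an invented two-bar example and illustrative material data, might look like:

```python
import numpy as np

def bar_stiffness(xi, yi, xj, yj, E, A):
    """Global 4x4 stiffness matrix of a 2D pin-jointed bar element."""
    L = np.hypot(xj - xi, yj - yi)
    c, s = (xj - xi) / L, (yj - yi) / L
    T = np.array([[c * c, c * s], [c * s, s * s]])
    return (E * A / L) * np.block([[T, -T], [-T, T]])

# Hypothetical two-bar truss: nodes 0 and 1 fixed, node 2 free, unit bar lengths
coords = np.array([[0.0, 0.0], [1.0, 1.0], [1.0, 0.0]])
elements = [(0, 2), (1, 2)]
E, A = 1.0e6, 1.0                        # illustrative material and section data

K = np.zeros((6, 6))                     # global stiffness, 2 dofs per node
for i, j in elements:
    dofs = [2 * i, 2 * i + 1, 2 * j, 2 * j + 1]
    K[np.ix_(dofs, dofs)] += bar_stiffness(*coords[i], *coords[j], E, A)

free = [4, 5]                            # only node 2 can move
F = np.array([1000.0, 0.0])              # horizontal load at node 2
u = np.linalg.solve(K[np.ix_(free, free)], F)   # nodal displacements
```

Assembling element matrices into the global stiffness matrix and solving on the free degrees of freedom is exactly the computation such a package performs behind its graphical display.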
We present the results of a number of tuning exercises performed on GENIE model components using multi-dimensional optimization methods. In particular, we find that it is possible to successfully tune models with up to 30 free parameters using Kriging and genetic algorithm methods. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Mathematical Modelling and Simulation of Polymer Electrolyte Membrane Fuel Cells. Part I: Model Structures, Solving an Isothermal One-Cell Model. FUEL CELLS, Issue 2, 2002.

Abstract: Amongst the various types of fuel cells, the polymer electrolyte membrane fuel cell (PEM-FC) can be used favourably in vehicles and for in-house energy supply. The focus of the development of these cells is not only to provide cost-effective membranes and electrodes, but also to optimise the process engineering for single cells and to design multi-cell systems (cell stacks). This is a field in which we have successfully applied the methods of mathematical modelling and simulation. Initially, a partial model of a single membrane-electrode unit was developed in which not only the usual reaction-engineering fields (concentration, temperature, and flow-velocity distributions) were calculated, but also the electrical potential and current density distributions, in order to develop model structures for technically interesting PEM-FCs. This allows the simulation of the effects that the geometric parameters (electrode and membrane data and the dimensions of the material feed and outlet channels) and the educt and coolant intake data have on the electrical and thermal output data of the cell. When complete cell stacks consisting of a number of single cells, most of which have bipolar switching, are modelled, the distribution of the gas flows over the single cells and the specific conditions of heat dissipation must also be taken into consideration.
In addition to the distributions mentioned above, this simulation also produces characteristic current-voltage and power-voltage curves for each application, which can be compared across the individual process variations and cell types, thus making it possible to evaluate them both technically and economically. The results of the simulation of characteristic process conditions of a PEM-FC operated on a semi-technical scale are presented, as determined by means of a three-dimensional model. The distributions of the electrical current density and of all component voltage drops that are important for optimising the conditions of the process are determined, as well as the water concentration in the membrane, an important factor that influences the cell's momentary output and the PEM-FC's long-term stability. [source]

MODFLOW 2000 Head Uncertainty, a First-Order Second Moment Method. GROUND WATER, Issue 3, 2003. Harry S. Glasgow.

Abstract: A computationally efficient method to estimate the variance and covariance in piezometric head results computed through MODFLOW 2000 using a first-order second moment (FOSM) approach is presented. This methodology employs a first-order Taylor series expansion to combine model sensitivity with uncertainty in geologic data. MODFLOW 2000 is used to calculate both the ground water head and the sensitivity of head to changes in input data. From a limited number of samples, geologic data are extrapolated and their associated uncertainties are computed through a conditional probability calculation. Combining the spatially related sensitivity and input uncertainty produces the variance-covariance matrix, the diagonal of which is used to yield the standard deviation in MODFLOW 2000 head. The variance in piezometric head can be used for calibrating the model, estimating confidence intervals, directing exploration, and evaluating the reliability of a design.
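The FOSM propagation at the heart of the MODFLOW abstract above is a single matrix product: with J the head sensitivities and C the input covariance, the head covariance is approximately J C Jᵀ. A toy sketch (all numbers invented):

```python
import numpy as np

# Hypothetical sensitivity matrix dh/dp: 3 head locations, 2 uncertain inputs
J = np.array([[0.8, 0.1],
              [0.5, 0.4],
              [0.2, 0.9]])
# Invented input covariance for the geologic parameters
# (e.g. log-transmissivity and recharge)
C = np.array([[0.04, 0.01],
              [0.01, 0.09]])

cov_h = J @ C @ J.T                 # first-order second-moment propagation
std_h = np.sqrt(np.diag(cov_h))     # standard deviation in simulated head
```

In the actual method J comes from MODFLOW 2000's sensitivity process and C from the conditional probability calculation on the sampled geologic data; only the diagonal of the resulting matrix is needed for the head standard deviations.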
A case study illustrates the approach, where aquifer transmissivity is the spatially related uncertain geologic input. The FOSM methodology is shown to be applicable for calculating output uncertainty for (1) spatially related input and output data, and (2) multiple input parameters (transmissivity and recharge). [source]

Differences in Labor versus Value Chain Industry Clusters: An Empirical Investigation. GROWTH AND CHANGE, Issue 3, 2007. Henry Renski.

Abstract: Regional analysts often identify industry clusters according to a single dimension of industrial interdependence, typically by trading patterns as revealed in national or regionalized input–output data. This is despite the fact that the theory underpinning regional industry cluster applications draws heavily on Marshall's theory of external economies, including the important role of labor pooling economies and knowledge spillovers in addition to spatially co-located suppliers. This article investigates whether industry clusters identified from trading relationships (value chain clusters) differ meaningfully in industrial composition and geography from those derived from an analysis of occupational employment requirements (labor-based clusters). The results suggest that value chain linkages are a weak proxy for shared labor requirements, and vice versa. [source]

Comparison of non-local and polar modelling of softening in hypoplasticity. INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 3, 2004. Th. Maier.

Abstract: The paper compares a non-local and a polar (Cosserat) hypoplastic model. The hypoplastic constitutive law in the version of von Wolffersdorff is chosen as the local reference model. For the comparison, the results of biaxial tests on dense Hostun RF sand are predicted with both enhanced models. The comparison is based on a strict separation of input data, from triaxial and oedometer tests, and output data, from biaxial tests.
The comparison is drawn in terms of the shear band width, the load–displacement curves and the influence of the pressure level. Finally, the non-local and the polar hypoplastic models are applied to the strip foundation problem. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Least-squares parameter estimation for systems with irregularly missing data. INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 7, 2010. Feng Ding.

Abstract: This paper considers the problems of parameter identification and output estimation with possibly irregularly missing output data, using output error models. By means of an auxiliary model (or reference model) approach, we present a recursive least-squares algorithm to estimate the parameters of missing-data systems, and establish convergence properties for the parameter and missing-output estimates in the stochastic framework. The basic idea is to replace the unmeasurable inner variables with the output of an auxiliary model. Finally, we test the effectiveness of the algorithm on an example system. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Parameter optimization for a PEMFC model with a hybrid genetic algorithm. INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 8, 2006. Zhi-Jun Mo.

Abstract: Many steady-state models of polymer electrolyte membrane fuel cells (PEMFC) have been developed and published in recent years. However, few of these models are easy to solve and feasible for engineering applications, and methods for parameter optimization of PEMFC stack models have rarely been discussed. In this paper, an electrochemical-based fuel cell model suitable for engineering optimization is presented. Parameters of this PEMFC model are determined and optimized by means of a niche hybrid genetic algorithm (HGA), using stack output voltage, stack demand current, anode pressure and cathode pressure as input–output data. This genetic algorithm is a modified method for global optimization.
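The auxiliary-model idea in the missing-data abstract above (replace the unmeasurable noise-free output in the regressor with the output of a model driven by the current parameter estimates, and simply skip updates where the output is missing) can be sketched as follows; the first-order system and the missing-data pattern are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true = 0.7, 0.5              # illustrative "true" parameters
N = 2000
u = rng.normal(size=N)                 # input sequence
x = np.zeros(N)                        # noise-free inner variable (unmeasurable)
for t in range(1, N):
    x[t] = a_true * x[t - 1] + b_true * u[t - 1]
y = x + 0.01 * rng.normal(size=N)      # measured output

theta = np.zeros(2)                    # estimates of (a, b)
P = 1.0e4 * np.eye(2)                  # RLS covariance
xhat = np.zeros(N)                     # auxiliary-model output
for t in range(1, N):
    phi = np.array([xhat[t - 1], u[t - 1]])   # xhat replaces unmeasurable x
    xhat[t] = theta @ phi                     # auxiliary model runs on estimates
    if t % 3 != 0:                     # every third output sample is "missing"
        k = P @ phi / (1.0 + phi @ P @ phi)
        theta = theta + k * (y[t] - phi @ theta)
        P = P - np.outer(k, phi @ P)
```

Despite one third of the outputs being discarded, the recursion still converges toward the true parameters because the auxiliary model keeps supplying the regressor.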
It provides a new architecture of hybrid algorithms, which organically merges the niche techniques and the Nelder–Mead simplex method into genetic algorithms (GAs). Calculation results of this PEMFC model with optimized parameters agree well with experimental data and show that this model can be used to study steady-state PEMFC performance and is broader in applicability than earlier steady-state models. The HGA is an effective and reliable technique for optimizing the model parameters of a PEMFC stack. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Learning weighted linguistic rules to control an autonomous robot. INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 3, 2009. M. Mucientes.

Abstract: A methodology for learning behaviors in mobile robotics has been developed. It consists of a technique to automatically generate input–output data plus a genetic fuzzy system that obtains cooperative weighted rules. The advantages of our methodology over other approaches are that the designer has to choose the values of only a few parameters, the obtained controllers are general (the quality of the controller does not depend on the environment), and the learning process takes place in simulation, yet the controllers also work on the real robot with good performance. The methodology has been used to learn the wall-following behavior, and the obtained controller has been tested using a Nomad 200 robot in both simulated and real environments. © 2009 Wiley Periodicals, Inc. [source]

Closed-loop identification of the time-varying dynamics of variable-speed wind turbines. INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 1, 2009. J. W. van Wingerden.

Abstract: The trend with offshore wind turbines is to increase the rotor diameter as much as possible to decrease the cost per kWh. The increasing dimensions have led to a relative increase in the loads on the wind turbine structure.
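A hybrid GA of the kind described for the PEMFC model above, a global genetic search followed by a Nelder–Mead polish, can be sketched as below. The objective is a toy stand-in for the model-fit error, and SciPy is assumed to be available:

```python
import numpy as np
from scipy.optimize import minimize

def fit_error(p):
    """Stand-in for the model-versus-data error surface (illustrative only)."""
    x, y = p
    return (x - 1.2) ** 2 + (y + 0.4) ** 2 + 0.3 * np.sin(5.0 * x) ** 2

rng = np.random.default_rng(2)
pop = rng.uniform(-3.0, 3.0, size=(40, 2))       # random initial population
for _ in range(60):                              # simple real-coded GA
    f = np.array([fit_error(p) for p in pop])
    parents = pop[np.argsort(f)[:20]]            # truncation selection
    pop = parents[rng.integers(0, 20, size=40)] + 0.1 * rng.normal(size=(40, 2))
    pop[0] = parents[0]                          # elitism: keep the best found

best = min(pop, key=fit_error)
polished = minimize(fit_error, best, method="Nelder-Mead").x   # simplex polish
```

The GA supplies a point near the global basin despite the sinusoidal ripples, and the derivative-free simplex step then sharpens it, which is the division of labour the abstract describes (the niche mechanics of the actual HGA are omitted here).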
Because of the increasing rotor size and the spatial load variations along the blade, it is necessary to react to turbulence in a more detailed way: for each blade separately and at several radial distances. This, combined with the strongly nonlinear behavior of wind turbines, motivates the need for accurate linear parameter-varying (LPV) models, for which advanced control synthesis techniques exist within the robust control framework. In this paper we present a closed-loop LPV identification algorithm that uses dedicated scheduling sequences to identify the rotational dynamics of a wind turbine. We assume that the system undergoes the same time variation several times, which makes it possible to use time-invariant identification methods, as the input and output data are chosen from the same point in the variation of the system. We use time-invariant techniques to identify a number of extended observability matrices and state sequences, inherent to subspace identification, each identified in a different state basis. We show that by formulating an intersection problem, all states can be reconstructed in a common state basis, from which the system matrices can be estimated. The novel algorithm is applied to a wind turbine model operating in closed loop. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Reduction and identification methods for Markovian control systems, with application to thin film deposition. INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 2, 2004. Martha A. Gallivan.

Abstract: Dynamic models of nanometer-scale phenomena often require an explicit consideration of interactions among a large number of atoms or molecules. The corresponding mathematical representation may thus be high dimensional, nonlinear, and stochastic, incompatible with tools in nonlinear control theory that are designed for low-dimensional deterministic equations.
We consider here a general class of probabilistic systems that are linear in the state, but whose input enters as a function multiplying the state vector. Model reduction is accomplished by grouping probabilities that evolve together and truncating states that are unlikely to be accessed; an error bound for this reduction is also derived. A system identification approach that exploits the inherent linearity is then developed, which generates all coefficients in either a full or reduced model. These concepts are then extended to extremely high-dimensional systems, in which kinetic Monte Carlo (KMC) simulations provide the input–output data. This work was motivated by our interest in thin film deposition. We demonstrate the approaches developed in the paper on a KMC simulation of surface evolution during film growth, and use the reduced model to compute optimal temperature profiles that minimize surface roughness. Copyright © 2004 John Wiley & Sons, Ltd. [source]

On the sample-complexity of H∞ identification. INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 7, 2001. S. R. Venkatesh.

Abstract: In this paper we derive the sample complexity for discrete-time linear time-invariant stable systems described in the H∞ topology. The problem set-up is as follows: the H∞ norm distance between the unknown real system and a known finitely parameterized family of systems is bounded by a known real number. We can associate, with every feasible real system, a model in the finitely parameterized family that minimizes the H∞ distance. The question now arises as to how long a data record is required to identify such a model from noisy input–output data. This question has been addressed in the context of the l1, H2 and several other topologies, where it has been shown that the sample complexity is polynomial. Nevertheless, it turns out that for the H∞ topology the worst-case sample complexity can be infinite. Copyright © 2001 John Wiley & Sons, Ltd.
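Picking the member of a finite model family that minimizes the H∞ distance to a system, the setting of the sample-complexity abstract above, reduces numerically to a maximum gap over a frequency grid. A sketch with an invented first-order family (the abstract's actual question, how much noisy data is needed to find this member, is not addressed here):

```python
import numpy as np

def freq_resp(b, a, w):
    """Frequency response of the first-order discrete model b / (1 - a z^-1)."""
    return b / (1.0 - a * np.exp(-1j * w))

w = np.linspace(0.0, np.pi, 512)               # frequency grid on [0, pi]
G_true = freq_resp(1.0, 0.58, w)               # stand-in for the unknown system

family = [(1.0, round(a, 2)) for a in np.arange(0.0, 0.95, 0.05)]
gaps = [np.max(np.abs(G_true - freq_resp(b, a, w))) for b, a in family]
best_b, best_a = family[int(np.argmin(gaps))]  # H-infinity-closest member
```

For stable systems the H∞ norm is the supremum of the frequency-response magnitude, so a fine grid gives a practical approximation of the distance to each candidate.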
[source]

A Bayesian online inferential model for evaluation of analyzer performance. JOURNAL OF CHEMOMETRICS, Issue 2, 2005. A. J. Willis.

Abstract: An iterative Bayesian approach is developed for the inversion of flow-instrumentation condition-monitoring problems. For the case of Gaussian random variables the solution reduces to an iterative weighted least squares approach amenable to online implementation, with a weighting derived from the Bayesian prior. The algorithm is illustrated with reference to a Sulfreen unit in a refinery, where concentrations of H2S and SO2 are measured by a number of input analyzers in parallel, prior to their combination and reaction. This paper discusses approaches to evaluating the performance of each instrument separately by monitoring the inferred bias using output data from the process. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Effect of input excitation on the quality of empirical dynamic models for type 1 diabetes. AICHE JOURNAL, Issue 5, 2009. Daniel A. Finan.

Abstract: Accurate prediction of future blood glucose trends has the potential to significantly improve glycemic regulation in type 1 diabetes patients. A model-based controller for an artificial β-cell, for example, would determine the most efficacious insulin dose for the current sampling interval given the available input–output data and model predictions of the resultant glucose trajectory. The two inputs most influential on the glucose concentration are bolused insulin and meal carbohydrates, which in practice are often taken simultaneously and in a specified ratio. This linear dependence has adverse effects on the quality of linear dynamic models identified from such data. On the other hand, inputs with greater degrees of excitation may force the subject into extreme hypoglycemia or hyperglycemia, and thus may be clinically unacceptable.
Inputs with good excitation that do not endanger the subject are shown to result in models that can predict glucose trends reasonably accurately, 1–2 h ahead. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]

Semigroup approach for identification of the unknown diffusion coefficient in a quasi-linear parabolic equation. MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 11, 2007. Ali Demir.

Abstract: This article presents a semigroup approach for the mathematical analysis of the inverse coefficient problem of identifying the unknown coefficient k(u(x,t)) in the quasi-linear parabolic equation ut(x,t) = (k(u(x,t))ux(x,t))x, with Dirichlet boundary conditions u(0,t) = ψ0, u(1,t) = ψ1. The main purpose of this paper is to investigate the distinguishability of the input–output mappings Φ[·] : K → C1[0,T] and Ψ[·] : K → C1[0,T] via semigroup theory. It is shown that if the null space of the semigroup T(t) consists of only the zero function, then the input–output mappings Φ[·] and Ψ[·] have the distinguishability property. It is also shown that the type of the boundary conditions and the region on which the problem is defined play an important role in the distinguishability of these mappings. Moreover, in the light of the measured output data (boundary observations) f(t) := k(u(0,t))ux(0,t) and/or h(t) := k(u(1,t))ux(1,t), the values k(ψ0) and k(ψ1) of the unknown diffusion coefficient k(u(x,t)) at (x,t) = (0,0) and (x,t) = (1,0), respectively, can be determined explicitly. In addition, the values ku(ψ0) and ku(ψ1) of the derivative of the unknown coefficient at the same points are also determined via the input data. Furthermore, it is shown that the measured output data f(t) and h(t) can be determined analytically by an integral representation. Hence the input–output mappings Φ[·] : K → C1[0,T] and Ψ[·] : K → C1[0,T] are given explicitly in terms of the semigroup. Copyright © 2007 John Wiley & Sons, Ltd.
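The excitation issue raised in the type 1 diabetes abstract above is easy to reproduce in a linear ARX setting: with independently excited inputs the least-squares fit is well posed, while a fixed carbohydrate-to-insulin ratio makes the regressor rank deficient. A sketch with invented dynamics:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 500
insulin = rng.uniform(0.0, 1.0, N)     # bolus input (arbitrary units)
carbs = rng.uniform(0.0, 1.0, N)       # meal input, excited independently
g = np.zeros(N)                        # glucose deviation from basal (toy model)
for t in range(1, N):
    g[t] = 0.9 * g[t - 1] - 0.4 * insulin[t - 1] + 0.6 * carbs[t - 1]

# One-step ARX fit: g(t) ~ a*g(t-1) + b*insulin(t-1) + c*carbs(t-1)
Phi = np.column_stack([g[:-1], insulin[:-1], carbs[:-1]])
theta, *_ = np.linalg.lstsq(Phi, g[1:], rcond=None)

# If meals are always taken in a fixed carb-to-insulin ratio, the regressor
# loses rank and the two input effects are no longer separately identifiable:
Phi_dep = np.column_stack([g[:-1], insulin[:-1], 2.0 * insulin[:-1]])
rank = np.linalg.matrix_rank(Phi_dep)
```

With independent excitation the fit recovers the three coefficients; with the proportional inputs only the combined effect of insulin and carbohydrates can be estimated, which is exactly the identifiability problem the abstract describes.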
[source]

Identification and fine tuning of closed-loop processes under discrete EWMA and PI adjustments. QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 6, 2001. Rong Pan.

Abstract: Conventional identification techniques for an open-loop process use the cross-correlation function between historical values of the process input and of the process output. If the process is operated under a linear feedback controller, however, the cross-correlation function carries no information on the process transfer function, because of the linear dependency of the process input on the output. In this paper, several circumstances are discussed in which a closed-loop system can be identified from the autocorrelation function of the output. It is assumed that a proportional-integral controller with known parameters is acting on the process while the output data are collected. The disturbance is assumed to be a member of a simple yet useful family of stochastic models that is able to represent drift. It is shown that, under these general assumptions, it is possible to identify some dynamic process models commonly encountered in manufacturing. After identification, our approach suggests tuning the controller to a near-optimal setting according to a well-known performance criterion. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Application of support vector regression for developing soft sensors for nonlinear processes. THE CANADIAN JOURNAL OF CHEMICAL ENGINEERING, Issue 5, 2010. Saneej B. Chitralekha.

Abstract: The field of soft sensor development has gained significant importance in the recent past with the development of efficient and easily employable computational tools for this purpose. The basic idea is to convert the information contained in the input–output data collected from the process into a mathematical model. Such a mathematical model can be used as a cost-efficient substitute for hardware sensors.
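A soft sensor of the SVR type discussed above can be prototyped in a few lines; scikit-learn is assumed to be available, and the process data and "true" quality function below are invented:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
# Hypothetical process data: two inputs (say, temperature and feed rate)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.tanh(2.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2      # nonlinear "true" quality

# RBF kernel makes the input-output map linear in the transformed feature space
soft_sensor = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(X[:150], y[:150])
err = np.max(np.abs(soft_sensor.predict(X[150:]) - y[150:]))  # held-out check
```

The hyperparameters here (C, epsilon, the default kernel width) are illustrative; in a real study they would be tuned by cross-validation against held-out process data, as the case studies in the abstract do.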
The support vector regression (SVR) tool is one such computational tool that has recently received much attention in the system identification literature, especially because of its successes in building nonlinear black-box models. The main feature of the algorithm is the use of a nonlinear kernel transformation to map the input variables into a feature space so that their relationship with the output variable becomes linear in the transformed space. The method generalises well to high-dimensional nonlinear problems owing to the use of kernels, such as the radial basis functions, that have good approximation capabilities. Another attractive feature of the method is its convex optimization formulation, which eliminates the problem of local minima while identifying nonlinear models. In this work, we demonstrate the application of SVR as an efficient and easy-to-use tool for developing soft sensors for nonlinear processes. In an industrial case study, we illustrate the development of a steady-state melt index soft sensor for an industrial-scale ethylene vinyl acetate (EVA) polymer extrusion process using SVR. The SVR-based soft sensor, valid over a wide range of melt indices, outperformed the existing nonlinear least-squares-based soft sensor in terms of lower prediction errors. In the remaining two case studies, we demonstrate the application of SVR for developing soft sensors in the form of dynamic models for two nonlinear processes: a simulated pH neutralisation process and a laboratory-scale twin-screw polymer extrusion process. A heuristic procedure is proposed for developing a dynamic nonlinear-ARX model-based soft sensor using SVR, in which the optimal delay and orders are arrived at automatically using the input–output data.
[source]

An integrated approach to optimization of Escherichia coli fermentations using historical data. BIOTECHNOLOGY & BIOENGINEERING, Issue 3, 2003. Matthew C.
Coleman.

Abstract: Using a fermentation database for Escherichia coli producing green fluorescent protein (GFP), we have implemented a novel three-step optimization method to identify the process input variables most important in modeling the fermentation, as well as the values of those critical input variables that result in an increase in the desired output. In the first step of this algorithm, we use either decision-tree analysis (DTA) or information-theoretic subset selection (ITSS) as a database-mining technique to identify which process input variables best classify each of the process outputs (maximum cell concentration, maximum product concentration, and productivity) monitored in the experimental fermentations. The second step is to train an artificial neural network (ANN) model on the process input–output data, using the critical inputs identified in the first step. Finally, a hybrid genetic algorithm (hybrid GA), which includes both gradient and stochastic search methods, is used to identify the maximum output modeled by the ANN and the values of the input conditions that result in that maximum. The results of the database-mining techniques are compared, both in terms of the inputs selected and the subsequent ANN performance. For the E. coli process used in this study, we identified 6 inputs from the original 13 that resulted in an ANN that best modeled the GFP fluorescence outputs of an independent test set. Values of the six inputs that resulted in a modeled maximum fluorescence were identified by applying a hybrid GA to the ANN model. When these conditions were tested in laboratory fermentors, an actual maximum fluorescence of 2.16E6 AU was obtained, against a previous high of 1.51E6 AU. Thus, the input conditions suggested by the proposed optimization scheme, applied to the available historical database, increased the maximum fluorescence by 55%.
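The subset-selection step of the pipeline above can be approximated with a histogram mutual-information ranking (a simple stand-in for ITSS; all data below are synthetic):

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Histogram estimate of the mutual information between two 1-D samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

rng = np.random.default_rng(5)
n = 4000
inputs = rng.normal(size=(n, 5))            # five candidate process inputs
# Only inputs 1 and 3 actually drive the (invented) output
output = np.sin(inputs[:, 1]) + 0.5 * inputs[:, 3] + 0.05 * rng.normal(size=n)

scores = [mutual_info(inputs[:, j], output) for j in range(5)]
selected = sorted(np.argsort(scores)[-2:].tolist())   # two most informative
```

The selected subset would then feed the ANN training step, with the GA searching over those inputs only; the reduction from 13 to 6 inputs in the study plays the same role.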
© 2003 Wiley Periodicals, Inc. Biotechnol Bioeng 84: 274–285, 2003. [source]

The impact of loads on standard diameter, small diameter and mini implants: a comparative laboratory study. CLINICAL ORAL IMPLANTS RESEARCH, Issue 6, 2008. Simon Rupert Allum.

Abstract: Objectives: While caution in the use of small-diameter (≤3.5 mm) implants has been advocated in view of an increased risk of fatigue fracture under clinical loading conditions, a variety of implant designs with diameters <3 mm are currently offered in the market for reconstructions including fixed restorations. There is an absence of reported laboratory studies and randomized controlled clinical trials demonstrating clinical efficacy for implant designs with small diameters. This laboratory study aimed to provide comparative data on the mechanical performance of a number of narrow commercially marketed implants. Materials and methods: Implants of varying designs were investigated under a standardized test set-up similar to that recommended for standardized ISO laboratory testing. Implant assemblies were mounted in acrylic blocks supporting laboratory cast crowns and subjected to 30° off-axis loading on an LRX Tensometer. Continuous output data were collected using Nexygen software. Results: Load/displacement curves demonstrated good grouping of samples for each design, with elastic deformation up to a point of failure approximating the maximum load value for each sample. The maximum loads for the Straumann (control) implants were 989 N (±107 N) for the 4.1 mm RN design and 619 N (±50 N) for the 3.3 mm RN implant (an implant known to have a risk of fracture in clinical use). Values for the mini implants were 261 N (±31 N) for the HiTec 2.4 mm implant, 237 N (±37 N) for the Osteocare 2.8 mm mini and 147 N (±25 N) for the Osteocare mini design. Other implant designs were also tested.
Conclusions: The diameters of the commercially available implants tested had a major impact on their ability to withstand load, with those below 3 mm in diameter yielding results significantly below a value representing a risk of fracture in clinical practice. The results therefore advocate caution when considering the applicability of implants ≤3 mm in diameter. Standardized fatigue testing is recommended for all commercially available implants. [source]