System Being (system + being)
Selected Abstracts

EXPLICATION, EXPLANATION, AND HISTORY
HISTORY AND THEORY, Issue 2 2008
CARL HAMMER

ABSTRACT To date, no satisfactory account of the connection between natural-scientific and historical explanation has been given, and philosophers seem to have largely given up on the problem. This paper attempts to resolve this old issue, and to sort out and clarify some areas of historical explanation, by developing and applying a method called "pragmatic explication": the construction of definitions that are justified on pragmatic grounds. Explanations in general can be divided into "dynamic" and "static" explanations, which are those that essentially require relations across time and those that do not, respectively. The problem of assimilating historical explanations concerns dynamic explanation, so a general analysis of dynamic explanation is offered that captures the structure of both natural-scientific and historical explanation. This is done in three stages. In the first stage, pragmatic explication is introduced and compared to other philosophical methods of explication. In the second stage, pragmatic explication is used to tie together a series of definitions that are introduced in order to establish an account of explanation. This involves an investigation of the conditions that play the role in historiography that laws and statistical regularities play in the natural sciences. The essay argues that in the natural sciences, as well as in history, the model of explanation presented represents the aims and overarching structure of actual causal explanations offered in those disciplines. In the third stage, the system arrived at in the preceding stage is filled in with conditions available to and relevant for historical inquiry. Further, the nature and treatment of causes in history and everyday life are explored and related to the system being proposed.
This in turn makes room for a view connecting aspects of historical explanation and what we generally take to be causal relations. [source]

An analysis and comparison of the time accuracy of fractional-step methods for the Navier–Stokes equations on staggered grids
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 3 2002
S. Armfield

Abstract Fractional-step methods solve the unsteady Navier–Stokes equations in a segregated manner, and can be implemented either with only a single solution of the momentum/pressure equations obtained at each time step, or with the momentum/pressure system iterated until a convergence criterion is attained. The time accuracy of such methods can be determined by the accuracy of the momentum/pressure coupling, irrespective of the accuracy to which the momentum equations are solved. It is shown that the time accuracy of the basic projection method is first-order as a result of the momentum/pressure coupling, but that by modifying the coupling directly, or by modifying the intermediate velocity boundary conditions, it is possible to recover second-order behaviour. It is also shown that pressure-correction methods, implemented in non-iterative or iterative form and without special boundary conditions, are second-order in time, and that a form of the non-iterative pressure-correction method is the most efficient for the problems considered. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Intelligent control using multiple neural networks
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 6 2003
Lingji Chen

Abstract In this paper a framework for intelligent control is established to adaptively control a class of non-linear discrete-time dynamical systems while assuring boundedness of signals.
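The first- versus second-order time accuracy discussed in the fractional-step abstract above is commonly checked numerically by comparing errors at two step sizes: halving the time step should halve the error of a first-order scheme and quarter that of a second-order one. A minimal sketch of this observed-order calculation, using forward Euler on a scalar ODE rather than a Navier–Stokes solver (illustrative only):

```python
import math

def euler_error(dt):
    """Integrate y' = y, y(0) = 1 on [0, 1] with forward Euler;
    return the error against the exact solution y(1) = e."""
    steps = round(1.0 / dt)
    y = 1.0
    for _ in range(steps):
        y += dt * y
    return abs(y - math.e)

# Observed order of accuracy from two step sizes:
# p ~= log(e(dt) / e(dt/2)) / log(2)
e1, e2 = euler_error(0.01), euler_error(0.005)
p = math.log(e1 / e2) / math.log(2)
print(p)  # close to 1, as expected for a first-order scheme
```

The same two-run comparison applied to a projection-method solver is how a claimed second-order momentum/pressure coupling would be verified in practice.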
A linear robust adaptive controller and multiple non-linear neural-network-based adaptive controllers are used, and a switching law is suitably defined to switch between them based upon their performance in predicting the plant output. Boundedness of signals is established with minimum requirements on the parameter-adjustment mechanisms of the neural network controllers, and thus the latter can be used in novel ways to better detect changes in the system being controlled and to initiate fast adaptation. Simulation studies show the effectiveness of the proposed approach. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Navigation Aided Image Processing in UAV Surveillance: Preliminary Results and Design of an Airborne Experimental System
JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 2 2004
Jonas Nygårds

This paper describes an airborne reconfigurable measurement system being developed at the Swedish Defence Research Agency (FOI), Sensor Technology, Sweden. An image-processing-oriented sensor management architecture for UAV (unmanned aerial vehicle) IR/EO surveillance is presented. Some preliminary results of navigation-aided image processing in UAV applications are demonstrated, such as SLAM (simultaneous localization and mapping), structure from motion and geolocation, target tracking, and detection of moving objects. The design goal of the measurement system is to emulate a UAV-mounted sensor gimbal using a stand-alone system. The minimal configuration of the system consists of a gyro-stabilized gimbal with IR and CCD sensors and an integrated high-performance navigation system. The navigation system combines dGPS real-time kinematics (RTK) data with data from an inertial measurement unit (IMU) mounted with reference to the optical sensors.
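The switching law in the multiple-controller abstract above can be caricatured as follows: each candidate controller carries a predictor of the plant output, a performance index accumulates each predictor's squared prediction error, and control switches to the candidate with the smallest index. The models and index below are hypothetical placeholders, not the paper's actual scheme:

```python
def switch_controller(plant_outputs, predictors):
    """Pick the controller whose predictor best tracks the plant.

    plant_outputs: observed sequence y(0), y(1), ...
    predictors: dict mapping name -> function k -> predicted y(k)
    Returns the name with the smallest accumulated squared error.
    """
    scores = {}
    for name, predict in predictors.items():
        scores[name] = sum((y - predict(k)) ** 2
                           for k, y in enumerate(plant_outputs))
    return min(scores, key=scores.get)

# Toy plant: y(k) = 0.5 * k; the "linear" predictor matches it exactly,
# so the switching law selects the linear controller.
y_obs = [0.5 * k for k in range(10)]
chosen = switch_controller(y_obs, {
    "linear": lambda k: 0.5 * k,
    "neural": lambda k: 0.4 * k + 0.3,  # hypothetical mismatched model
})
print(chosen)  # "linear"
```

In the paper's setting the index would be evaluated online over a sliding window, so a change in the plant quickly degrades the current predictor's score and triggers a switch.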
The gimbal is to be used as an experimental georeferenced sensor platform, using a choice of carriers, to produce militarily relevant image sequences for studies of image processing and sensor control on moving surveillance and reconnaissance platforms. Furthermore, a high-resolution synthetic environment, developed for sensor simulations in the visual and infrared wavelengths, is presented. © 2004 Wiley Periodicals, Inc. [source]

A multi-QMOM framework to describe multi-component agglomerates in liquid steel
AICHE JOURNAL, Issue 9 2010
L. Claudotte

Abstract A variant of the quadrature method of moments (QMOM) for solving multiple population balance equations (PBEs) is developed with the objective of application to steel-industry processing. During the process of oxygen removal in a steel ladle, a large panel of oxide inclusions may be observed, depending on the type of oxygen-removal and addition elements. The final quality of the steel can be improved by accurate numerical simulation of the multi-component precipitation. The model proposed in this article takes into account the interactions between three major aspects of steelmaking modeling, namely fluid dynamics, thermo-kinetics, and population balance. A commercial CFD code is used to predict the liquid-steel hydrodynamics, a home-made thermo-kinetic code adjusts chemical composition with nucleation and diffusion growth, and a set of PBEs tracks the evolution of inclusion size with emphasis on particle aggregation. Each PBE is solved by QMOM, with the first PBE/QMOM system describing the clusters and each remaining PBE/QMOM system dedicated to the elementary particles of each inclusion species. It is shown how this coupled model can be used to investigate the cluster size and composition of a particular grade of steel (i.e., Fe-Al-Ti-O).
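At the core of each PBE/QMOM solver mentioned above is the inversion step that turns a few low-order moments of the size distribution into quadrature weights and abscissas. A minimal two-node sketch from the first four moments is shown below (generic Gaussian quadrature from moments; the paper's actual inversion algorithm and moment set are not specified here):

```python
from math import exp

def two_node_qmom(m0, m1, m2, m3):
    """Two weights/abscissas reproducing moments m0..m3 exactly.

    The abscissas are the roots of the monic orthogonal polynomial
    x^2 + a*x + b defined by  m2 + a*m1 + b*m0 = 0  and
    m3 + a*m2 + b*m1 = 0; the weights then match m0 and m1.
    """
    det = m1 * m1 - m0 * m2
    a = (-m2 * m1 + m3 * m0) / det
    b = (-m1 * m3 + m2 * m2) / det
    disc = (a * a - 4.0 * b) ** 0.5
    x1, x2 = (-a - disc) / 2.0, (-a + disc) / 2.0
    w2 = (m1 - m0 * x1) / (x2 - x1)
    return (m0 - w2, x1), (w2, x2)

# Moments of a lognormal size distribution (mu = 0, sigma = 0.5):
# m_k = exp(k^2 * sigma^2 / 2) = exp(k^2 / 8)
m = [exp(k * k / 8.0) for k in range(4)]
(w1, x1), (w2, x2) = two_node_qmom(*m)
# A 2-node Gaussian quadrature is exact for moments up to degree 3:
print(abs(w1 * x1**2 + w2 * x2**2 - m[2]))  # ~ machine precision
```

A PBE solver then advances the moments in time and repeats this inversion at every step to evaluate aggregation and growth integrals at the quadrature nodes.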
© 2010 American Institute of Chemical Engineers AIChE J, 2010 [source]

SOFTWARE ENGINEERING CONSIDERATIONS FOR INDIVIDUAL-BASED MODELS
NATURAL RESOURCE MODELING, Issue 1 2002
GLEN E. ROPELLA

ABSTRACT. Software design is much more important for individual-based models (IBMs) than it is for conventional models, for three reasons. First, the results of an IBM are the emergent properties of a system of interacting agents that exist only in the software; unlike analytical model results, an IBM's outcomes can be reproduced only by exactly reproducing its software implementation. Second, outcomes of an IBM are expected to be complex and novel, making software errors difficult to identify. Third, an IBM needs 'systems software' that manages populations of multiple kinds of agents, often has nonlinear and multi-threaded process control, and simulates a wide range of physical and biological processes. General software guidelines for complex models are especially important for IBMs. (1) Have code critically reviewed by several people. (2) Follow prudent release-management practices, keeping careful control over the software as changes are implemented. (3) Develop multiple representations of the model and its software; diagrams and written descriptions of code aid design and understanding. (4) Use appropriate and widespread software tools, which provide numerous major benefits; coding 'from scratch' is rarely appropriate. (5) Test the software continually, following a planned, multi-level, experimental strategy. (6) Provide tools for thorough, pervasive validation and verification. (7) Pay attention to how pseudorandom numbers are generated and used.
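Guideline (7) matters because an IBM run is reproducible only if its stream of pseudorandom numbers is. One common sketch is to give each agent its own explicitly seeded generator, so a run can be replayed exactly and one agent's draws do not perturb another's (the agent structure and seeding scheme here are hypothetical illustrations):

```python
import random

class Agent:
    """Toy IBM agent with its own seeded random stream."""
    def __init__(self, agent_id, master_seed):
        # One generator per agent: adding or removing other agents
        # does not change this agent's sequence of draws.
        self.rng = random.Random(master_seed * 10_000 + agent_id)
        self.energy = 1.0

    def step(self):
        self.energy += self.rng.uniform(-0.1, 0.1)

def run(master_seed, n_agents=5, n_steps=100):
    agents = [Agent(i, master_seed) for i in range(n_agents)]
    for _ in range(n_steps):
        for a in agents:
            a.step()
    return [a.energy for a in agents]

# Identical master seeds reproduce the whole run exactly:
print(run(42) == run(42))  # True
```

Recording the master seed alongside model output is what makes the "exactly reproducing its software implementation" requirement of the abstract achievable in practice.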
Additional guidelines for IBMs include: (a) design the model's organization before starting to write code; (b) provide the ability to observe all parts of the model from the beginning; (c) make an extensive effort to understand how the model executes (how often different pieces of code are called by which objects); and (d) design the software to resemble the system being modeled, which helps maintain an understanding of the software. Strategies for meeting these guidelines include planning adequate resources for software development, using software professionals to implement models, and using tools like Swarm that are designed specifically for IBMs. [source]

Development of a FTA versus Parts Count Method Model: Comparative FTA
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 5 2003
G. Arcidiacono

Abstract This study adopts a special Fault Tree Analysis (FTA) method called Comparative FTA to compare the reliability of an electronic braking system with its mechanical counterpart. To this end, two Top Events, 'Ineffective parking braking' and 'Wheels jamming during emergency braking', were analysed. One of the limitations of classic FTA is that setting up the tree diagram requires the long-term involvement (one to two months, according to Fiat Auto) of specialists in the system being studied. For this reason, when dealing with relatively complex systems, classic FTA is only used when safety is involved. This paper introduces a simplified FTA model based on the same principle as the Parts Count Method, which limits its attention to the new branches, thereby avoiding the study of all the branches of the tree, in order to make FTA management easier and to encourage its use. The probability that a Top Event takes place is therefore evaluated by studying the different causes which differentiate the solutions considered. This approach is a lean practice that minimizes the resources and time required for the analysis.
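The Top Event probabilities compared in such an analysis come from standard fault-tree gate algebra: with independent basic events, an AND gate multiplies failure probabilities, and an OR gate combines them as 1 − Π(1 − p). A generic sketch (the tree shape and event values are made up for illustration, not Fiat Auto data):

```python
from functools import reduce

def and_gate(*probs):
    """All inputs must fail (independent basic events)."""
    return reduce(lambda a, b: a * b, probs)

def or_gate(*probs):
    """At least one input fails (independent basic events)."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

# Hypothetical tree for a Top Event such as 'Ineffective parking braking':
# the event occurs if both redundant actuators fail, or the control
# unit fails on its own.
p_top = or_gate(and_gate(0.01, 0.02), 0.005)
print(round(p_top, 6))  # 0.005199
```

A Comparative FTA in the abstract's sense would evaluate expressions like this only for the branches that differ between the electronic and mechanical designs, reusing the common part of the tree.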
It has yielded very satisfactory results, and Fiat Auto has therefore introduced the practice in their Corporate Instructions. Copyright © 2003 John Wiley & Sons, Ltd. [source]

AMERICAN SOCIETY OF ANESTHESIOLOGISTS CLASSIFICATION OF PHYSICAL STATUS AS A PREDICTOR OF WOUND INFECTION
ANZ JOURNAL OF SURGERY, Issue 9 2007
John C. Woodfield

Background: Wound infection occurs when bacterial contamination overcomes the host's defences against bacterial growth. Wound categories are a measurement of wound contamination. The American Society of Anesthesiologists (ASA) classification of physical status may be an effective indirect measurement of the host's defence against infection. This study examines the association between the ASA score of physical status and wound infection. Methods: A retrospective review of a prospective study of antibiotic prophylaxis was carried out. Patients with a documented ASA score who received optimal prophylactic antibiotics were included. The anaesthetist scored the ASA classification of physical status in theatre. Other risk factors for wound infection were also documented. Patients were assessed up to 30 days postoperatively. Results: Of 1013 patients, 483 had a documented ASA score. One hundred and one may not have received optimal prophylaxis, leaving a database of 382 patients. There were 36 wound infections (9.4%). Both the ASA classification of physical status (P = 0.002) and the wound categories (P = 0.034) significantly predicted wound infection. The duration of surgery, patient's age, acuteness of surgery, and the organ system being operated on did not predict wound infection. On logistic regression analysis the ASA score was the strongest predictor of wound infection. Conclusion: When effective prophylactic antibiotics were used, the ASA classification of physical status was the most significant predictor of wound infection.
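The kind of association reported above (ASA class versus wound infection) is conventionally summarised with per-group infection rates and an odds ratio from a 2×2 table. The counts below are hypothetical illustrations, not the study's data:

```python
def odds_ratio(infected_a, clean_a, infected_b, clean_b):
    """Odds of infection in group A relative to group B."""
    return (infected_a * clean_b) / (clean_a * infected_b)

# Hypothetical 2x2 table: higher ASA class vs lower ASA class
high_asa = (12, 88)    # (infected, not infected)
low_asa = (24, 276)

rate_high = high_asa[0] / sum(high_asa)   # 0.12
rate_low = low_asa[0] / sum(low_asa)      # 0.08
or_high_vs_low = odds_ratio(*high_asa, *low_asa)
print(round(or_high_vs_low, 2))  # 1.57
```

Logistic regression, as used in the study, generalises this to several predictors at once: each fitted coefficient is the log odds ratio for its predictor, adjusted for the others.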
[source]

Remote visualisation of Labrador convection in large oceanic datasets
ATMOSPHERIC SCIENCE LETTERS, Issue 4 2005
L. J. West

Abstract The oceans relinquish O(1 PW) of heat into the atmosphere at high latitudes, the lion's share of which originates in localised 'hotspots' of violent convective mixing. Despite their small horizontal scale, O(10-100 km), these features may penetrate deeply into the thermocline and are vital in maintaining the Atlantic Meridional Overturning Circulation (MOC). Accurate modelling of the MOC, therefore, requires a large-scale numerical model with very fine resolution. The global high-resolution ocean model, the Ocean Circulation Climate Advanced Model (OCCAM), has been developed and run at the Southampton Oceanography Centre (SOC) for many years. It was configured to resolve the energetic scales of oceanic motions, and its output is stored at the Manchester Supercomputer Centre. Although this community resource represents a treasure trove of potential new insights into the nature of the world ocean, it remains relatively unexploited for a number of reasons, not the least of which is its sheer size. A system being developed at SOC under the auspices of the Grid for Ocean Diagnostics, Interactive Visualisation and Analysis (GODIVA) project makes the remote visualisation of very large volumes of data on modest hardware (e.g. a laptop with no special graphics capability) a present reality. The GODIVA system is enabling the unresolved question of oceanic convection and its relationship to large-scale flows to be investigated; a question that lies at the heart of many current climate change issues. In this article, one aspect of the GODIVA system is presented, and used to locate and visualise regions of convective mixing in the OCCAM Labrador Sea. Copyright © 2006 Royal Meteorological Society [source]

Area under the Free-Response ROC Curve (FROC) and a Related Summary Index
BIOMETRICS, Issue 1 2009
Andriy I. Bandos

Summary Free-response assessment of diagnostic systems continues to gain acceptance in areas related to the detection, localization, and classification of one or more "abnormalities" within a subject. A free-response receiver operating characteristic (FROC) curve is a tool for characterizing the performance of a free-response system at all decision thresholds simultaneously. Although the importance of a single index summarizing the entire curve over all decision thresholds is well recognized in ROC analysis (e.g., the area under the ROC curve), currently there is no widely accepted summary of a system being evaluated under the FROC paradigm. In this article, we propose a new index of free-response performance at all decision thresholds simultaneously, and develop a nonparametric method for its analysis. Algebraically, the proposed summary index is the area under the empirical FROC curve penalized for the number of erroneous marks, rewarded for the fraction of detected abnormalities, and adjusted for the effect of the target size (or "acceptance radius"). Geometrically, the proposed index can be interpreted as a measure of average performance superiority over an artificial "guessing" free-response process; it represents an analogy to the area between the ROC curve and the "guessing" or diagonal line. We derive the ideal bootstrap estimator of the variance, which can be used for resampling-free construction of asymptotic bootstrap confidence intervals and for sample-size estimation using standard expressions. The proposed procedure is free from any parametric assumptions and does not require an assumption of independence of observations within a subject. We provide an example with a dataset sampled from a diagnostic imaging study and conduct simulations that demonstrate the appropriateness of the developed procedure for the considered sample sizes and ranges of parameters.
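The index above builds on the empirical FROC curve: marks are swept from the highest decision threshold down, plotting false positives per subject (x-axis) against the fraction of detected abnormalities (y-axis). Below is a plain empirical-FROC sketch with the trapezoidal area up to the largest observed false-positive rate, not the penalized and adjusted index the paper derives:

```python
def empirical_froc(marks, n_subjects, n_abnormalities):
    """Curve points (FPs per subject, fraction of abnormalities found)
    as the decision threshold sweeps from high to low.

    marks: list of (score, hit) where hit=True means the mark
    correctly localized a true abnormality.
    """
    points = [(0.0, 0.0)]
    tp = fp = 0
    for score, hit in sorted(marks, key=lambda m: -m[0]):
        if hit:
            tp += 1
        else:
            fp += 1
        points.append((fp / n_subjects, tp / n_abnormalities))
    return points

def trapezoid_area(points):
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# Toy reading study: 2 subjects, 2 true abnormalities, 4 marks.
marks = [(0.9, True), (0.8, False), (0.7, True), (0.6, False)]
curve = empirical_froc(marks, n_subjects=2, n_abnormalities=2)
area = trapezoid_area(curve)
print(area)  # 0.75
```

Because the x-axis of an FROC curve is unbounded, a raw area like this depends on the observed false-positive range; that is one motivation for the penalized, "guessing"-referenced index proposed in the article.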
[source]

Development of a Segmented Model for a Continuous Electrophoretic Moving Bed Enantiomer Separation
BIOTECHNOLOGY PROGRESS, Issue 6 2003
Brian M. Thome

With the recent demonstration of a continuous electrophoretic "moving bed" enantiomer separation at mg/h throughputs, interest has now turned to scaling up the process for use as a benchtop pharmaceutical production tool. To scale the method, a steady-state mathematical model was developed that predicts the process response to changes in input feed rate and counterflow or "moving bed" velocities. The vortex-stabilized apparatus used for the separation was modeled using four regions, based on the different hydrodynamic flows in each section. Concentration profiles were then derived on the basis of the properties of the Piperoxan-sulfated β-cyclodextrin system being studied. The effects of different regional flow rates on the concentration profiles were evaluated and used to predict the maximum processing rate and the hydrodynamic profiles required for a separation. Although the model was able to qualitatively predict the shapes of the concentration profiles and show where the theoretical limits of operation existed, it was not able to quantitatively match the data from actual enantiomer separations to better than 50% accuracy. This is believed to be due to the simplifying assumptions involved, namely the neglect of electric-field variations and the lack of a competitive binding isotherm in the analysis. Although the model cannot accurately predict concentrations from a separation, it provides a good theoretical framework for analyzing how the process responds to changes in counterflow rate, feed rate, and the properties of the molecules being separated. [source]
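The counterflow tuning such a model explores can be illustrated with a zeroth-order balance: complexation with the chiral selector reduces each enantiomer's effective electrophoretic mobility (simple 1:1 binding with an immobile complex assumed here), and the moving-bed counterflow is set between the two effective velocities so the enantiomers migrate in opposite directions. All parameter values below are hypothetical, not taken from the paper:

```python
def net_velocity(mu_free, K, selector_conc, field, counterflow):
    """Net migration velocity (m/s) under counterflow.

    Effective mobility for 1:1 binding with an immobile complex:
    mu_eff = mu_free / (1 + K * C).
    """
    mu_eff = mu_free / (1.0 + K * selector_conc)
    return mu_eff * field - counterflow

mu_free = 2.0e-8   # m^2/(V s), hypothetical free-solution mobility
field = 1.0e4      # V/m, applied electric field
conc = 0.005       # mol/L of chiral selector
v_cf = 5.0e-5      # m/s counterflow, chosen between the two velocities

v_weak = net_velocity(mu_free, 500.0, conc, field, v_cf)    # weak binder
v_strong = net_velocity(mu_free, 800.0, conc, field, v_cf)  # strong binder
print(v_weak > 0 > v_strong)  # True: enantiomers move in opposite directions
```

The abstract's point about missing competitive binding shows up directly here: at high feed concentrations the two enantiomers compete for the selector, K effectively becomes concentration-dependent, and a constant-K balance like this one drifts from the measured profiles.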