Input Data (input + data)


Terms modified by Input Data

  • input data set

Selected Abstracts


    Principal Stratification Designs to Estimate Input Data Missing Due to Death

    BIOMETRICS, Issue 3 2007
    Constantine E. Frangakis
    Summary We consider studies of cohorts of individuals after a critical event, such as an injury, with the following characteristics. First, the studies are designed to measure "input" variables, which describe the period before the critical event, and to characterize the distribution of the input variables in the cohort. Second, the studies are designed to measure "output" variables, primarily mortality after the critical event, and to characterize the predictive (conditional) distribution of mortality given the input variables in the cohort. Such studies often possess the complication that the input data are missing for those who die shortly after the critical event because the data collection takes place after the event. Standard methods of dealing with the missing inputs, such as imputation or weighting methods based on an assumption of ignorable missingness, are known to be generally invalid when the missingness of inputs is nonignorable, that is, when the distribution of the inputs is different between those who die and those who live. To address this issue, we propose a novel design that obtains and uses information on an additional key variable, a treatment or externally controlled variable, which, if set at its "effective" level, could have prevented the death of those who died. We show that the new design can be used to draw valid inferences for the marginal distribution of inputs in the entire cohort, and for the conditional distribution of mortality given the inputs, also in the entire cohort, even under nonignorable missingness. The crucial framework that we use is principal stratification based on the potential outcomes, here mortality under both levels of treatment. We also show using illustrative preliminary injury data that our approach can reveal results that are more reasonable than the results of standard methods, in relatively dramatic ways. Thus, our approach suggests that the routine collection of data on variables that could be used as possible treatments in such studies of inputs and mortality should become common. [source]
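
A minimal numerical sketch (not the authors' estimator) of why the nonignorable missingness described above matters: when pre-event inputs are only observed for survivors, the survivor-only mean can differ sharply from the cohort mean the study is designed to characterize. The distributions and effect sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical cohort: input variable X measured conceptually before the event.
x = rng.normal(loc=0.0, scale=1.0, size=n)

# Hypothetical mortality model: higher X raises the chance of dying shortly
# after the event, so missingness of X (collected post-event) is nonignorable.
p_death = 1.0 / (1.0 + np.exp(-(x - 0.5)))
died = rng.random(n) < p_death

# Data collection happens after the event, so X is missing for those who died.
x_observed = x[~died]

print(f"true cohort mean of X:                {x.mean():+.3f}")
print(f"survivor-only ('complete-case') mean: {x_observed.mean():+.3f}")
# The gap illustrates the bias that standard ignorable-missingness methods
# cannot remove, motivating the principal-stratification design in the paper.
```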


    Error estimation in a stochastic finite element method in electrokinetics

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 11 2010
    S. Clénet
    Abstract Input data to a numerical model are not necessarily well known. Uncertainties may exist both in material properties and in the geometry of the device. They can be due, for instance, to ageing or imperfections in the manufacturing process. Input data can be modelled as random variables, leading to a stochastic model. In electromagnetism, this leads to the solution of a stochastic partial differential equation system. The solution can be approximated by a linear combination of basis functions arising from the tensorial product of the basis functions used to discretize the space (nodal shape functions, for example) and the basis functions used to discretize the random dimension (a polynomial chaos expansion, for example). Some methods (SSFEM, collocation) have been proposed in the literature to calculate such an approximation. The issue is then how to compare the different approaches in an objective way. One solution is to use an appropriate a posteriori numerical error estimator. In this paper, we present an error estimator based on the constitutive relation error in electrokinetics, which allows the calculation of the distance between an average solution and the unknown exact solution. The method of calculation of the error is detailed in this paper from two solutions that satisfy the two equilibrium equations. In an example, we compare two different approximations (Legendre and Hermite polynomial chaos expansions) for the random dimension using the proposed error estimator. In addition, we show how to choose the appropriate order for the polynomial chaos expansion for the proposed error estimator. Copyright © 2009 John Wiley & Sons, Ltd. [source]
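
As a hedged illustration of the "basis functions used to discretize the random dimension", the sketch below builds a one-dimensional Hermite polynomial chaos approximation of a scalar quantity driven by a Gaussian input and checks the truncation error by Monte Carlo sampling. The quantity of interest and the truncation orders are arbitrary choices, not taken from the paper.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials

def g(xi):
    # Hypothetical quantity of interest driven by a standard normal input xi.
    return np.exp(0.3 * xi)

# Projection coefficients c_k = E[g(xi) He_k(xi)] / k!  via Gauss-Hermite quadrature.
nodes, weights = He.hermegauss(40)        # weight function exp(-x^2/2)
norm = np.sqrt(2.0 * np.pi)               # so E[f(xi)] = sum(w * f(x)) / norm

def pce_coeffs(order):
    return np.array([
        np.sum(weights * g(nodes) * He.hermeval(nodes, np.eye(k + 1)[k]))
        / (norm * math.factorial(k))
        for k in range(order + 1)
    ])

# Monte Carlo check of the truncation error for two expansion orders.
xi = np.random.default_rng(1).normal(size=200_000)
for order in (2, 5):
    c = pce_coeffs(order)
    err = np.sqrt(np.mean((g(xi) - He.hermeval(xi, c)) ** 2))
    print(f"order {order}: RMS truncation error = {err:.2e}")
```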


    Cost-effectiveness of primary cytology and HPV DNA cervical screening

    INTERNATIONAL JOURNAL OF CANCER, Issue 2 2008
    Peter Bistoletti
    Abstract Because the cost-effectiveness of different cervical cytology screening strategies with and without human papillomavirus (HPV) DNA testing is unclear, we used a Markov model to estimate life expectancy and health care cost per woman during the remaining lifetime for 4 screening strategies: (i) cervical cytology screening at ages 32, 35, 38, 41, 44, 47, 50, 55 and 60, (ii) the same strategy with the addition of testing for HPV DNA persistence at age 32, (iii) screening with combined cytology and testing for HPV DNA persistence at ages 32, 41 and 50, (iv) no screening. Input data were derived from population-based screening registries, health-service costs and a population-based HPV screening trial. The impact of parameter uncertainty was addressed using probabilistic multivariate sensitivity analysis. Cytology screening between 32 and 60 years of age at 3- to 5-year intervals increased life expectancy, and lifetime costs were reduced from 533 to 248 US dollars per woman compared to no screening. Addition of HPV DNA testing at age 32 increased costs from 248 to 284 US dollars without benefit to life expectancy. Screening with both cytology and HPV DNA testing at ages 32, 41 and 50 reduced costs from 248 to 210 US dollars with slightly increased life expectancy. In conclusion, population-based, organized cervical cytology screening between ages 32 and 60 is highly cost-efficient for cervical cancer prevention. If screening intervals are increased to at least 9 years, combined cytology and HPV DNA screening appears to be still more effective and less costly. © 2007 Wiley-Liss, Inc. [source]
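
A compact sketch of the kind of Markov cohort calculation described above, with made-up states, transition probabilities, costs and discounting purely for illustration (none of the numbers come from the paper); it accumulates discounted life-years and costs over annual cycles for a single hypothetical strategy.

```python
import numpy as np

# Hypothetical health states for a screening cohort model.
states = ["well", "precancer", "cancer", "dead"]

# Hypothetical one-year transition matrix (rows sum to 1); a real model would
# derive these from screening registries and trial data, per strategy.
P = np.array([
    [0.975, 0.020, 0.000, 0.005],   # well
    [0.300, 0.650, 0.040, 0.010],   # precancer (may regress or progress)
    [0.000, 0.000, 0.850, 0.150],   # cancer
    [0.000, 0.000, 0.000, 1.000],   # dead (absorbing)
])
annual_cost = np.array([15.0, 40.0, 5000.0, 0.0])   # hypothetical US dollars per state-year
discount = 0.03

cohort = np.array([1.0, 0.0, 0.0, 0.0])   # start everyone in "well" at the entry age
life_years, costs = 0.0, 0.0
for year in range(60):                     # follow the cohort for 60 annual cycles
    d = 1.0 / (1.0 + discount) ** year
    life_years += d * cohort[:3].sum()     # alive states contribute life-years
    costs += d * float(cohort @ annual_cost)
    cohort = cohort @ P                    # advance the Markov chain one cycle

print(f"discounted life expectancy: {life_years:.2f} years")
print(f"discounted lifetime cost:   {costs:.0f} US dollars")
# Comparing strategies would repeat this with strategy-specific transition
# probabilities and costs, plus probabilistic sensitivity analysis over inputs.
```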


    Very high resolution interpolated climate surfaces for global land areas

    INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 15 2005
    Robert J. Hijmans
    Abstract We developed interpolated climate surfaces for global land areas (excluding Antarctica) at a spatial resolution of 30 arc s (often referred to as 1-km spatial resolution). The climate elements considered were monthly precipitation and mean, minimum, and maximum temperature. Input data were gathered from a variety of sources and, where possible, were restricted to records from the 1950-2000 period. We used the thin-plate smoothing spline algorithm implemented in the ANUSPLIN package for interpolation, using latitude, longitude, and elevation as independent variables. We quantified uncertainty arising from the input data and the interpolation by mapping weather station density, elevation bias in the weather stations, and elevation variation within grid cells and through data partitioning and cross validation. Elevation bias tended to be negative (stations lower than expected) at high latitudes but positive in the tropics. Uncertainty is highest in mountainous and in poorly sampled areas. Data partitioning showed high uncertainty of the surfaces on isolated islands, e.g. in the Pacific. Aggregating the elevation and climate data to 10 arc min resolution showed an enormous variation within grid cells, illustrating the value of high-resolution surfaces. A comparison with an existing data set at 10 arc min resolution showed overall agreement, but with significant variation in some regions. A comparison with two high-resolution data sets for the United States also identified areas with large local differences, particularly in mountainous areas. Compared to previous global climatologies, ours has the following advantages: the data are at a higher spatial resolution (400 times greater or more); more weather station records were used; improved elevation data were used; and more information about spatial patterns of uncertainty in the data is available. Owing to the overall low density of available climate stations, our surfaces do not capture all of the variation that may occur at a resolution of 1 km, particularly of precipitation in mountainous areas. In future work, such variation might be captured through knowledge-based methods and inclusion of additional co-variates, particularly layers obtained through remote sensing. Copyright © 2005 Royal Meteorological Society. [source]
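
The interpolation itself was done with ANUSPLIN; the sketch below shows the same general idea with SciPy's thin-plate-spline radial basis interpolator, fitting synthetic station temperatures as a function of longitude, latitude and elevation and predicting onto new points. Station values and the smoothing level are invented for illustration.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

# Hypothetical weather stations: (longitude, latitude, elevation in km).
n_stations = 300
stations = np.column_stack([
    rng.uniform(-10, 10, n_stations),      # lon (degrees)
    rng.uniform(40, 50, n_stations),       # lat (degrees)
    rng.uniform(0.0, 2.5, n_stations),     # elevation (km)
])

# Hypothetical monthly mean temperature: warmer to the south, lapse rate with height.
temp = 25.0 - 0.6 * (stations[:, 1] - 40.0) - 6.5 * stations[:, 2] \
       + rng.normal(0.0, 0.5, n_stations)

# Thin-plate smoothing spline in (lon, lat, elevation), mirroring the paper's setup.
interp = RBFInterpolator(stations, temp, kernel="thin_plate_spline", smoothing=1.0)

# Predict onto a few hypothetical grid cells with known coordinates and elevation.
grid = np.array([[0.0, 42.0, 0.2], [5.0, 47.5, 1.8], [-8.0, 44.0, 0.6]])
print(interp(grid))   # interpolated temperatures for the three cells
```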


    A scenario-based stochastic programming model for water supplies from the highland lakes

    INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 3 2000
    D.W. Watkins Jr
    Abstract A scenario-based, multistage stochastic programming model is developed for the management of the Highland Lakes by the Lower Colorado River Authority (LCRA) in Central Texas. The model explicitly considers two objectives: (1) maximize the expected revenue from the sale of interruptible water while reliably maintaining firm water supply, and (2) maximize recreational benefits. Input data can be represented by a scenario tree, built empirically from a segment of the historical flow record. Thirty-scenario instances of the model are solved using both a primal simplex method and Benders decomposition, and results show that the first-stage ('here and now') decision of how much interruptible water to contract for the coming year is highly dependent on the initial (current) reservoir storage levels. Sensitivity analysis indicates that model results can be improved by using a scenario generation technique which better preserves the serial correlation of flows. Ultimately, it is hoped that use of the model will improve the LCRA's operational practices by helping to identify flexible policies that appropriately hedge against unfavorable inflow scenarios. [source]
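
A toy two-stage formulation in the same spirit (sell interruptible water now, recourse deliveries per inflow scenario), solved as a single deterministic-equivalent LP with SciPy. Scenario inflows, probabilities and prices are hypothetical, and the real LCRA model is far richer (multistage, storage dynamics, recreation objective).

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical inflow scenarios (volume available for interruptible supply) and probabilities.
inflow = np.array([60.0, 100.0, 150.0])     # e.g. thousand acre-feet
prob = np.array([0.3, 0.5, 0.2])
price = 1.0                                  # revenue per unit of delivered interruptible water
shortfall_penalty = 3.0                      # cost per unit contracted but not delivered

S = len(inflow)
# Decision vector: [x, y_1..y_S] = contract amount now, delivery in each scenario.
# Maximize E[price*y_s - penalty*(x - y_s)]  ->  minimize the negative.
c = np.concatenate([[shortfall_penalty],                      # x coefficient
                    -(price + shortfall_penalty) * prob])     # y_s coefficients
# Constraints: y_s <= x (cannot deliver more than contracted), y_s <= inflow_s.
A_ub = np.zeros((S, 1 + S))
A_ub[:, 0] = -1.0
A_ub[np.arange(S), 1 + np.arange(S)] = 1.0
b_ub = np.zeros(S)
bounds = [(0, None)] + [(0, inflow_s) for inflow_s in inflow]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(f"here-and-now contract: {res.x[0]:.1f}, expected objective: {-res.fun:.1f}")
# Repeating the solve for different initial storages would reproduce the paper's
# finding that the first-stage decision depends strongly on current storage.
```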


    Menstrual age-dependent systematic error in sonographic fetal weight estimation: A mathematical model

    JOURNAL OF CLINICAL ULTRASOUND, Issue 3 2002
    Max Mongelli MD
    Abstract Purpose We used computer modeling techniques to evaluate the accuracy of different types of sonographic formulas for estimating fetal weight across the full range of clinically important menstrual ages. Methods Input data for the computer modeling techniques were derived from published British standards for normal distributions of sonographic biometric growth parameters and their correlation coefficients; these standards had been derived from fetal populations whose ages were determined using sonography. The accuracy of each of 10 formulas for estimating fetal weight was calculated by comparing the weight estimates obtained with these formulas in simulated populations with the weight estimates expected from birth weight data, from 24 weeks' menstrual age to term. Preterm weights were estimated by interpolation from term birth weights using sonographic growth curves. With an ideal formula, the median weight estimates at term should not differ from the population birth weight median. Results The simulated output sonographic values closely matched those of the original population. The accuracy of the fetal weight estimation differed by menstrual age and between various formulas. Most methods tended to overestimate fetal weight at term. Shepard's formula progressively overestimated weights from about 2% at 32 weeks to more than 15% at term. The accuracy of Combs's and Shinozuka's volumetric formulas varied least by menstrual age. Hadlock's formula underestimated preterm fetal weight by up to 7% and overestimated fetal weight at term by up to 5%. Conclusions The accuracy of sonographic fetal weight estimation based on volumetric formulas is more consistent across menstrual ages than are other methods. © 2002 Wiley Periodicals, Inc. J Clin Ultrasound 30:139-144, 2002; DOI 10.1002/jcu.10051 [source]


    Combined compression and simplification of dynamic 3D meshes

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 4 2009
    Libor Váša
    Abstract We present a new approach to dynamic mesh compression, which combines compression with simplification to achieve improved compression results, natural support for incremental transmission and level of detail. The algorithm allows fast progressive transmission of dynamic 3D content. Our scheme exploits both temporal and spatial coherency of the input data, and is especially efficient for the case of highly detailed dynamic meshes. The algorithm can be seen as an ultimate extension of the clustering and local coordinate frame (LCF)-based approaches, where each vertex is expressed within its own specific coordinate system. The presented results show that we have achieved better compression efficiency compared to the state-of-the-art methods. Copyright © 2008 John Wiley & Sons, Ltd. [source]
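
Not the authors' LCF/clustering scheme, but a hedged baseline showing how the temporal and spatial coherence of an animated mesh can be exploited: stack per-frame vertex positions into a matrix and keep a truncated SVD basis, trading reconstruction error for compression ratio. The mesh animation here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic dynamic mesh: V vertices tracked over F frames (3 coords each).
V, F = 2000, 120
rest = rng.normal(size=(V, 3))
t = np.linspace(0, 2 * np.pi, F)
# Smooth, low-dimensional motion (bending + stretching) -> highly coherent data.
frames = np.stack([rest * (1 + 0.1 * np.sin(ti)) + 0.05 * np.cos(ti) for ti in t])

X = frames.reshape(F, V * 3)            # one row per frame
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)

k = 8                                    # number of retained basis trajectories
coeffs = U[:, :k] * s[:k]                # F x k   (per-frame coefficients)
basis = Vt[:k]                           # k x 3V  (spatial basis vectors)

recon = (coeffs @ basis + mean).reshape(F, V, 3)
err = np.linalg.norm(recon - frames) / np.linalg.norm(frames)
stored = coeffs.size + basis.size + mean.size
print(f"relative error {err:.2e}, stored floats {stored} vs raw {frames.size}")
```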


    A digital simulation of the vibration of a two-mass two-spring system

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 3 2010
    Wei-Pin Lee
    Abstract In this study, we developed a computer program to simulate the vibration of a two-mass two-spring system by using Visual BASIC. Users can enter data for the two-mass two-spring system. The software will derive the eigenvalue problem from the input data. Then the software solves the eigenvalue problem and illustrates the results numerically and graphically on the screen. In addition, the program uses animation to demonstrate the motions of the two masses. The displacements, velocities, and accelerations of the two bodies can be shown if the corresponding checkboxes are selected. This program can be used in teaching courses, such as Linear Algebra, Advanced Engineering Mathematics, Vibrations, and Dynamics. Use of the software may help students to understand the applications of eigenvalue problems and related topics such as modes of vibration, natural frequencies, and systems of differential equations. © 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ 18: 563-573, 2010; View this article online at wileyonlinelibrary.com; DOI 10.1002/cae.20241 [source]
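
A short sketch of the eigenvalue problem such a program derives from the user's input: for masses m1, m2 and spring stiffnesses k1, k2 (values below are arbitrary), solve the generalized problem K φ = ω² M φ to obtain natural frequencies and mode shapes.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical input data for the two-mass two-spring chain (ground--k1--m1--k2--m2).
m1, m2 = 2.0, 1.0        # kg
k1, k2 = 800.0, 300.0    # N/m

M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])

# Generalized symmetric eigenvalue problem K phi = w^2 M phi.
w2, modes = eigh(K, M)
omega = np.sqrt(w2)                  # natural circular frequencies (rad/s)
freq_hz = omega / (2 * np.pi)

for i in range(2):
    shape = modes[:, i] / np.max(np.abs(modes[:, i]))   # normalized mode shape
    print(f"mode {i+1}: f = {freq_hz[i]:.2f} Hz, shape = {np.round(shape, 3)}")
# The free response of each mass is a superposition of these two modes, which is
# what the animation in the program visualizes.
```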


    Simulation of compression refrigeration systems

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 3 2006
    Jaime Sieres
    Abstract This study presents the main features of a software tool for simulating vapor compression refrigeration systems that are self-designed by the user. A library of 10 different components is available: compressor, expansion device, condenser, evaporator, heat exchanger, flash tank, direct intercooler flash tank, indirect intercooler flash tank, mixer, and splitter. With these components and a library of different refrigerants, many different refrigeration systems may be solved. Through a user-friendly interface, the user can draw the system scheme by adding different components, connecting them and entering different input data. Results are presented in the form of tables, and the cycle diagram of the system is drawn on the log P-h and T-s thermodynamic charts. © 2006 Wiley Periodicals, Inc. Comput Appl Eng Educ 14: 188-197, 2006; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20075 [source]
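
A minimal sketch of the kind of cycle calculation such a simulator performs for a basic single-stage system: given specific enthalpies at the four state points (the values below are hypothetical, roughly representative of an R134a cycle, and not taken from the article or a property library), compute the refrigerating effect, compressor work and COP.

```python
# Hypothetical specific enthalpies (kJ/kg) at the four state points of a basic
# vapor-compression cycle; a real simulator would obtain these from a
# refrigerant property library for the chosen evaporating/condensing pressures.
h1 = 400.0   # evaporator exit / compressor inlet (saturated vapor)
h2 = 430.0   # compressor outlet (superheated vapor)
h3 = 250.0   # condenser outlet (saturated liquid)
h4 = h3      # after the expansion device (isenthalpic throttling)

m_dot = 0.05  # refrigerant mass flow rate, kg/s (hypothetical)

q_evap = h1 - h4            # refrigerating effect per kg
w_comp = h2 - h1            # compressor work per kg
cop = q_evap / w_comp

print(f"cooling capacity: {m_dot * q_evap:.1f} kW")
print(f"compressor power: {m_dot * w_comp:.2f} kW")
print(f"COP: {cop:.2f}")
# Components such as flash tanks or intercoolers change the cycle topology and
# hence which enthalpy differences appear in these balances.
```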


    BqR-Tree: A Data Structure for Flights and Walkthroughs in Urban Scenes with Mobile Elements

    COMPUTER GRAPHICS FORUM, Issue 6 2010
    J.L. Pina
    I.3.6 [Computer Graphics]: Graphics data structures and data types. Abstract The BqR-Tree, the data structure presented in this paper, is an improved R-tree based on a quadtree spatial partitioning, which improves the rendering speed of the usual R-trees when view culling is implemented, especially in urban scenes. The city is split by means of a spatial quadtree partition and the block is adopted as the basic urban unit. One advantage of blocks is that they can be easily identified in any urban environment, regardless of the origins and structure of the input data. The aim of the structure is to accelerate the visualization of complex scenes containing not only static but also dynamic elements. The usefulness of the structure has been tested with low-structured data, which makes its application appropriate to almost all city data. The results of the tests show that when using the BqR-Tree structure to perform walkthroughs and flights, rendering times vastly improve in comparison to the data structures which have yielded the best results to date, with average improvements of around 30%. [source]
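
A hedged sketch of the underlying idea only (a quadtree over city-block bounding boxes with a rectangular visibility query standing in for view culling); the real BqR-Tree stores R-tree nodes per quadrant and handles mobile elements, which this toy structure does not.

```python
from dataclasses import dataclass, field

@dataclass
class QuadNode:
    x0: float; y0: float; x1: float; y1: float       # node bounds
    blocks: list = field(default_factory=list)        # (block_id, bbox) pairs
    children: list = field(default_factory=list)
    capacity: int = 4

    def insert(self, block_id, bbox):
        if self.children:
            for c in self.children:
                if c.contains(bbox):
                    return c.insert(block_id, bbox)
        self.blocks.append((block_id, bbox))
        if not self.children and len(self.blocks) > self.capacity:
            self.split()

    def contains(self, b):
        return self.x0 <= b[0] and self.y0 <= b[1] and b[2] <= self.x1 and b[3] <= self.y1

    def split(self):
        mx, my = (self.x0 + self.x1) / 2, (self.y0 + self.y1) / 2
        self.children = [QuadNode(x0, y0, x1, y1) for x0, y0, x1, y1 in
                         [(self.x0, self.y0, mx, my), (mx, self.y0, self.x1, my),
                          (self.x0, my, mx, self.y1), (mx, my, self.x1, self.y1)]]
        staying = []
        for item in self.blocks:
            for c in self.children:
                if c.contains(item[1]):
                    c.insert(*item); break
            else:
                staying.append(item)        # block straddles the split lines: keep it here
        self.blocks = staying

    def query(self, view):                   # view = (x0, y0, x1, y1), e.g. camera footprint
        def overlaps(b):
            return not (b[2] < view[0] or view[2] < b[0] or b[3] < view[1] or view[3] < b[1])
        hits = [bid for bid, b in self.blocks if overlaps(b)]
        for c in self.children:
            if overlaps((c.x0, c.y0, c.x1, c.y1)):
                hits += c.query(view)
        return hits

root = QuadNode(0, 0, 100, 100)
for i in range(50):                          # hypothetical 10 x 5 grid of city blocks
    x, y = (i % 10) * 10, (i // 10) * 20
    root.insert(i, (x + 1, y + 1, x + 9, y + 19))
print(sorted(root.query((0, 0, 35, 45))))    # blocks potentially visible in this view
```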


    Reconstructing head models from photographs for individualized 3D-audio processing

    COMPUTER GRAPHICS FORUM, Issue 7 2008
    M. Dellepiane
    Abstract Visual fidelity and interactivity are the main goals in Computer Graphics research, but recently audio has also been assuming an important role. Binaural rendering can provide extremely pleasing and realistic three-dimensional sound, but to achieve the best results it is necessary either to measure or to estimate the individual Head-Related Transfer Function (HRTF). This function is strictly related to the peculiar features of the ears and face of the listener. Recent sound-scattering simulation techniques can calculate the HRTF starting from an accurate 3D model of a human head. Hence, the use of binaural rendering on a large scale (e.g., video games, entertainment) could depend on the possibility of producing a sufficiently accurate 3D model of a human head, starting from the smallest possible input. In this paper we present a completely automatic system, which produces a 3D model of a head starting from simple input data (five photos and some key points indicated by the user). The geometry is generated by extracting information from the images and accordingly deforming a 3D dummy to reproduce the user's head features. The system proves to be fast, automatic, robust and reliable: geometric validation and preliminary assessments show that it can be accurate enough for HRTF calculation. [source]


    Rendering: Input and Output

    COMPUTER GRAPHICS FORUM, Issue 3 2001
    H. Rushmeier
    Rendering is the process of creating an image from numerical input data. In the past few years our ideas about methods for acquiring the input data and the form of the output have expanded. The availability of inexpensive cameras and scanners has influenced how we can obtain data needed for rendering. Input for rendering ranges from sets of images to complex geometric descriptions with detailed BRDF data. The images that are rendered may simply be arrays of RGB values, or they may be arrays with vectors or matrices of data defined for each pixel. The rendered images may not be intended for direct display, but may be textures for geometries that are to be transmitted to be rendered on another system. A broader range of parameters now needs to be taken into account to render images that are perceptually consistent across displays that range from CAVEs to personal digital assistants. This presentation will give an overview of how new hardware and new applications have changed traditional ideas of rendering input and output. [source]


    An Adaptive Conjugate Gradient Neural Network-Wavelet Model for Traffic Incident Detection

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 4 2000
    H. Adeli
    Artificial neural networks are known to be effective in solving problems involving pattern recognition and classification. The traffic incident-detection problem can be viewed as recognizing incident patterns from incident-free patterns. A neural network classifier has to be trained first using incident and incident-free traffic data. The dimensionality of the training input data is high, and the embedded incident characteristics are not easily detectable. In this article we present a computational model for automatic traffic incident detection using discrete wavelet transform, linear discriminant analysis, and neural networks. Wavelet transform and linear discriminant analysis are used for feature extraction, denoising, and effective preprocessing of data before an adaptive neural network model is used to perform the traffic incident detection. Simulated as well as actual traffic data are used to test the model. For incidents with a duration of more than 5 minutes, the incident-detection model yields a detection rate of nearly 100 percent and a false-alarm rate of about 1 percent for two- or three-lane freeways. [source]
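
A hedged, synthetic-data sketch of the preprocessing-plus-classifier pipeline the article describes (discrete wavelet transform for features, linear discriminant analysis for projection, then a neural network classifier); the data generation, wavelet choice and network size are all placeholders, not the authors' settings.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

def make_signal(incident):
    # Hypothetical 64-sample occupancy trace; incidents add a sustained step + noise.
    t = np.arange(64)
    base = 0.3 + 0.05 * np.sin(2 * np.pi * t / 16) + rng.normal(0, 0.03, 64)
    if incident:
        base[32:] += 0.25
    return base

X_raw = np.array([make_signal(i % 2 == 1) for i in range(600)])
y = np.array([i % 2 for i in range(600)])

# Discrete wavelet transform: concatenate approximation/detail coefficients as features.
features = np.array([np.concatenate(pywt.wavedec(x, "db4", level=3)) for x in X_raw])

Xtr, Xte, ytr, yte = train_test_split(features, y, test_size=0.3, random_state=0)

lda = LinearDiscriminantAnalysis(n_components=1).fit(Xtr, ytr)   # denoise / reduce
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(lda.transform(Xtr), ytr)

print("detection accuracy on held-out data:", clf.score(lda.transform(Xte), yte))
```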


    Interactive editing of digital fault models

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 12 2010
    Jordan Van Aalsburg
    Abstract We describe an application to interactively create and manipulate digital fault maps, either by tracing existing (paper) fault maps created from geological surveys, or by directly observing fault expressions and earthquake hypocenters in remote sensing data such as high-resolution (~100k × 100k elevation postings) digital elevation models with draped color imagery. Such fault maps serve as input data to finite-element-method simulations of fault interactions, and are crucial to understand regional tectonic processes causing earthquakes, and have tentatively been used to forecast future seismic events or to predict the shaking from likely future earthquakes. This fault editor is designed for immersive virtual reality environments such as CAVEs, and presents users with visualizations of scanned 2D fault maps and textured 3D terrain models, and a set of 3D editing tools to create or manipulate faults. We close with a case study performed by one of our geologist co-authors (Yikilmaz), which evaluates the use of our fault editor in creating a detailed digital fault model of the North Anatolian Fault in Turkey, one of the largest, seismically active strike-slip faults in the world. Yikilmaz, who was directly involved in program development, used our fault editor both in a CAVE and on a desktop computer, and compares it to the industry-standard software package ArcGIS. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Performance prediction for a code with data-dependent runtimes

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 3 2008
    S. A. Jarvis
    Abstract In this paper we present a predictive performance model for a key biomedical imaging application found as part of the U.K. e-Science Information eXtraction from Images (IXI) project. This code represents a significant challenge for our existing performance prediction tools as it has internal structures that exhibit highly variable runtimes depending on qualities in the input data provided. Since the runtime can vary by more than an order of magnitude, it has been difficult to apply meaningful quality of service criteria to workflows that use this code. The model developed here is used in the context of an interactive scheduling system which provides rapid feedback to the users, allowing them to tailor their workloads to available resources or to allocate extra resources to scheduled workloads. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    SCALES: a large-scale assessment model of soil erosion hazard in Basse-Normandie (north-western France)

    EARTH SURFACE PROCESSES AND LANDFORMS, Issue 8 2010
    P. Le Gouée
    Abstract The cartography of erosion risk is mainly based on the development of models, which evaluate in a qualitative and quantitative manner the physical reproduction of the erosion processes (CORINE, EHU, INRA). These models are mainly semi-quantitative but can be physically based and spatially distributed (the Pan-European Soil Erosion Risk Assessment, PESERA). They are characterized by their simplicity and their applicability potential at large temporal and spatial scales. In developing our model SCALES (Spatialisation d'éChelle fine de l'ALéa Erosion des Sols/large-scale assessment and mapping model of soil erosion hazard), we had several objectives in mind: (1) to map soil erosion at a regional scale while guaranteeing high accuracy at the local level, (2) to allow applicability of the model in European oceanic areas, (3) to focus the erosion hazard estimation on the level of source areas (on-site erosion), which are the agricultural parcels, and (4) to take into account the weight of the temporality of agricultural practices (land-use concept). Because of these objectives, the nature of the variables that characterize the erosion factors, and its structure, SCALES differs from other models. Tested in Basse-Normandie (Calvados, 5500 km2), SCALES reveals a strong predisposition of the study area to soil erosion, which would be expected to be expressed in a wet year. Apart from an internal validation, we attempted an intermediate one by comparing our results with those from INRA and PESERA. It appeared that these models underestimate medium erosion levels and differ in the spatial localization of areas with the highest erosion risks. SCALES underlines here the limitations of the use of pedo-transfer functions and of the interpolation of input data with a low resolution. One must not forget, however, that these models are mainly focused on an interregional comparative approach. Therefore the comparison of SCALES data with those of the INRA and PESERA models cannot result in a convincing validation of our model. For the moment the validation is based on the opinion of local experts, who agree with the qualitative indications delivered by our cartography. An external validation of SCALES is foreseen, which will be based on a thorough inventory of erosion signals in areas with different hazard levels. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    Improvement and validation of a snow saltation model using wind tunnel measurements

    EARTH SURFACE PROCESSES AND LANDFORMS, Issue 14 2008
    Andrew Clifton
    Abstract A Lagrangian snow saltation model has been extended for application to a wide variety of snow surfaces. Important factors of the saltation process, namely number of entrained particles, ejection angle and speed, have been parameterized from data in the literature. The model can now be run using simple descriptors of weather and snow conditions, such as wind, ambient pressure and temperature, snow particle sizes and surface density. Sensitivity of the total mass flux to the new parameterizations is small. However, the model refinements also allow concentration and mass flux profiles to be calculated, for comparison with measurements. Sensitivity of the profiles to the new parameterizations is considerable. Model results have then been compared with a complete set of drifting snow data from our cold wind tunnel. Simulation mass flux results agree with wind tunnel data to within the bounds of measurement uncertainty. Simulated particle sizes at 50 mm above the surface are generally larger than seen in the tunnel, probably as the model only describes particles in saltation, while additional smaller particles may be present in the wind tunnel at this height because of suspension. However, the smaller particles carry little mass, and so the impact on the mass flux is low. The use of simple input data, and parameterization of the saltation process, allows the model to be used predictively. This could include applications from avalanche warning to glacier mass balance. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Parameter identification of framed structures using an improved finite element model-updating method - Part I: formulation and verification

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 5 2007
    Eunjong Yu
    Abstract In this study, we formulate an improved finite element model-updating method to address the numerical difficulties associated with ill conditioning and rank deficiency. These complications are frequently encountered in model-updating problems and occur when the identification of a larger number of physical parameters is attempted than that warranted by the information content of the experimental data. Based on the standard bounded variables least-squares (BVLS) method, which incorporates the usual upper/lower-bound constraints, the proposed method (henceforth referred to as BVLSrc) is equipped with novel sensitivity-based relative constraints. The relative constraints are automatically constructed using the correlation coefficients between the sensitivity vectors of updating parameters. The veracity and effectiveness of BVLSrc are investigated through the simulated, yet realistic, forced-vibration testing of a simple framed structure using its frequency response function as input data. By comparing the results of BVLSrc with those obtained via (the competing) pure BVLS and regularization methods, we show that BVLSrc and regularization methods yield approximate solutions with similar and sufficiently high accuracy, while the pure BVLS method yields physically inadmissible solutions. We further demonstrate that BVLSrc is computationally more efficient, because, unlike regularization methods, it does not require the laborious a priori calculations to determine an optimal penalty parameter, and its results are far less sensitive to the initial estimates of the updating parameters. Copyright © 2006 John Wiley & Sons, Ltd. [source]
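
The novel sensitivity-based relative constraints are the paper's contribution and are not reproduced here; the sketch below only illustrates the underlying bounded-variables least-squares step with SciPy on a made-up sensitivity matrix, showing how upper/lower bounds on updating parameters are imposed.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(5)

# Hypothetical linearized updating problem: S * dp ~= r, where S is a sensitivity
# matrix (FRF residual derivatives w.r.t. physical parameters) and r the residual.
n_resid, n_params = 40, 6
S = rng.normal(size=(n_resid, n_params))
true_dp = np.array([0.08, -0.05, 0.02, 0.0, 0.10, -0.12])
r = S @ true_dp + rng.normal(scale=0.01, size=n_resid)

# Physical admissibility bounds on the parameter changes (e.g. +/- 15% of nominal).
lb, ub = -0.15 * np.ones(n_params), 0.15 * np.ones(n_params)

sol = lsq_linear(S, r, bounds=(lb, ub))
print("updated parameter changes:", np.round(sol.x, 3))
# BVLSrc would additionally couple strongly correlated parameters through
# relative constraints built from the correlation of sensitivity columns.
```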


    Consistency of dynamic site response at Port Island

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 6 2001
    Laurie G. Baise
    Abstract System identification (SI) methods are used to determine empirical Green's functions (EGF) for soil intervals at the Port Island Site in Kobe, Japan, and in shake table model tests performed by the Port and Harbor Research Institute (PHRI) to emulate the site during the 17 January 1995 Hyogo-ken Nanbu earthquake. The model form for the EGFs is a parametric auto-regressive moving average (ARMA) model mapping the ground motions recorded at the base of a soil interval to the top of that interval, hence capturing the effect of the soil on the through-passing wave. The consistency of site response at Port Island before, during, and after the mainshock is examined by application of small-motion foreshock EGFs to incoming ground motions over these time intervals. The prediction errors (or misfits) for the foreshocks, the mainshock, and the aftershocks are assessed to determine the extent of altered soil response as a result of liquefaction of the ground during the mainshock. In addition, the consistency of soil response between field and model test is verified by application of EGFs calculated from the shake table test to the 17 January input data. The prediction error is then used to assess the consistency of behaviour between the two cases. By using EGFs developed for small-amplitude foreshock ground motions, ground motions were predicted with small error for all intervals of the vertical array except those that liquefied. Analysis of the post-liquefaction ground conditions implies that the site response gradually returns to a pre-earthquake state. Site behaviour is found to be consistent between foreshocks and the mainshock for the native ground (below 16 m in the field), with a normalized mean square error (NMSE) of 0.080 and a peak ground acceleration (PGA) of 0.5g. When the soil actually liquefies (change of state), recursive models are needed to track the variable soil behaviour for the remainder of the shaking. The recursive models are shown to demonstrate consistency between the shake table tests and the field, with an NMSE of 0.102 for the 16 m-to-surface interval that liquefied. The aftershock ground response was not modelled well with the foreshock EGF immediately after the mainshock (NMSE ranging from 0.37 to 0.92). One month after the mainshock, the prediction error from the foreshock model was back to the foreshock error level. Copyright © 2001 John Wiley & Sons, Ltd. [source]
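
A hedged sketch of the ARMA-style mapping described above: fit a discrete ARX filter from a base-of-interval motion to the top-of-interval motion by least squares on one record, then use it as an empirical Green's function to predict another record and score the misfit with a normalized mean square error. Signals, model orders and the soil "filter" are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

def soil_response(base):
    # Synthetic stand-in for the soil interval: a fixed linear filter plus noise.
    y = np.zeros_like(base)
    for n in range(2, len(base)):
        y[n] = 1.2 * y[n - 1] - 0.5 * y[n - 2] + 0.4 * base[n] + 0.1 * base[n - 1]
    return y + rng.normal(0, 0.01, len(base))

def fit_arx(u, y, na=2, nb=2):
    # Least-squares fit of y[n] = sum a_i y[n-i] + sum b_j u[n-j].
    rows, rhs = [], []
    for n in range(max(na, nb), len(y)):
        rows.append(np.concatenate([y[n - na:n][::-1], u[n - nb + 1:n + 1][::-1]]))
        rhs.append(y[n])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return theta, na, nb

def simulate(theta, na, nb, u):
    y = np.zeros_like(u)
    for n in range(max(na, nb), len(u)):
        reg = np.concatenate([y[n - na:n][::-1], u[n - nb + 1:n + 1][::-1]])
        y[n] = reg @ theta
    return y

# "Foreshock": identify the EGF.  Second record: test the consistency of response.
base1, base2 = rng.normal(size=2000), rng.normal(size=2000)
theta, na, nb = fit_arx(base1, soil_response(base1))
observed = soil_response(base2)
predicted = simulate(theta, na, nb, base2)

nmse = np.mean((observed - predicted) ** 2) / np.var(observed)
print(f"NMSE of EGF prediction on the second record: {nmse:.3f}")
```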


    Application of DA-preconditioned FINN for electric power system fault detection

    ELECTRICAL ENGINEERING IN JAPAN, Issue 2 2009
    Tadahiro Itagaki
    Abstract This paper proposes a hybrid method of deterministic annealing (DA) and a fuzzy inference neural network (FINN) for electric power system fault detection. It extracts features of the input data with a two-stage preconditioning of fast Fourier transform (FFT) and DA. FFT is useful for extracting the features of fault currents, while DA plays a key role in classifying input data into clusters in a sense of global classification. FINN is a more accurate estimation model than conventional artificial neural networks (ANNs). The proposed method is successfully applied to data obtained by the Tokyo Electric Power Company (TEPCO) power simulator. © 2008 Wiley Periodicals, Inc. Electr Eng Jpn, 166(2): 39-46, 2009; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/eej.20497 [source]
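
A hedged sketch of the two-stage preconditioning idea on synthetic waveforms: FFT magnitudes as features, followed by a basic deterministic-annealing (temperature-controlled soft k-means) clustering pass. The waveform model, number of clusters and annealing schedule are illustrative assumptions, not the authors' settings, and no FINN stage is included.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic current waveforms: two hypothetical "fault types" with different harmonics.
t = np.linspace(0, 0.1, 256, endpoint=False)
signals = []
for i in range(200):
    kind = i % 2
    s = np.sin(2 * np.pi * 50 * t) + (0.6 if kind else 0.1) * np.sin(2 * np.pi * 250 * t)
    signals.append(s + rng.normal(0, 0.05, t.size))
X = np.abs(np.fft.rfft(np.array(signals), axis=1))[:, :40]   # FFT magnitude features
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)            # standardize features

def deterministic_annealing(X, k=2, T0=50.0, Tmin=0.01, cooling=0.9):
    centers = X.mean(axis=0) + rng.normal(0, 1e-3, size=(k, X.shape[1]))
    T = T0
    while T > Tmin:
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        logits = -d2 / T
        logits -= logits.max(axis=1, keepdims=True)             # numerical stability
        p = np.exp(logits); p /= p.sum(axis=1, keepdims=True)   # soft memberships
        centers = (p.T @ X) / p.sum(axis=0)[:, None]            # re-estimate centers
        T *= cooling                                            # lower the temperature
    return p.argmax(axis=1)

labels = deterministic_annealing(X)
print("cluster sizes:", np.bincount(labels))   # should split the two fault types
```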


    A study on two-stage self-organizing map and its application to clustering problems

    ELECTRICAL ENGINEERING IN JAPAN, Issue 1 2007
    Satoru Kato
    Abstract This paper presents a two-stage self-organizing map algorithm, which we call two-stage SOM, that combines Kohonen's basic SOM (BSOM) and Aoki's SOM with threshold operation (THSOM). In the first stage of two-stage SOM, we use the BSOM algorithm in order to acquire the topological structure of the input data, and then we apply the THSOM algorithm so that inactivated code vectors move to appropriate regions reflecting the distribution of the input data. Furthermore, we show that two-stage SOM can be applied to clustering problems. Some experimental results reveal that two-stage SOM is effective for clustering problems in comparison with conventional methods. © 2007 Wiley Periodicals, Inc. Electr Eng Jpn, 159(1): 46-53, 2007; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/eej.20268 [source]
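
A hedged sketch of the first (BSOM) stage only: a basic self-organizing map trained with a shrinking Gaussian neighborhood on synthetic 2-D data, so the code vectors acquire the topological structure of the input. The THSOM threshold stage that relocates inactive code vectors is not reproduced here, and all sizes and schedules are placeholders.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic input data: three Gaussian clusters in 2-D.
X = np.vstack([rng.normal(c, 0.15, size=(200, 2)) for c in [(0, 0), (1.5, 0.2), (0.7, 1.4)]])

# 6x6 map of code vectors, initialized randomly over the data range.
rows, cols, n_iter = 6, 6, 5000
grid = np.array([(i, j) for i in range(rows) for j in range(cols)], dtype=float)
W = rng.uniform(X.min(0), X.max(0), size=(rows * cols, 2))

for it in range(n_iter):
    x = X[rng.integers(len(X))]
    lr = 0.5 * (1 - it / n_iter) + 0.01            # decaying learning rate
    sigma = 2.5 * (1 - it / n_iter) + 0.3           # shrinking neighborhood radius
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))     # best-matching unit
    dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)   # distance on the map lattice
    h = np.exp(-dist2 / (2 * sigma ** 2))           # Gaussian neighborhood function
    W += lr * h[:, None] * (x - W)                  # pull neighbors toward the sample

# Quantization error indicates how well the code vectors reflect the data.
qe = np.mean(np.min(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1) ** 0.5)
print(f"quantization error after training: {qe:.3f}")
```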


    Modeling of Hot Ductility During Solidification of Steel Grades in Continuous Casting - Part I

    ADVANCED ENGINEERING MATERIALS, Issue 3 2010
    Dieter Senk
    The present paper gives an overview of the simultaneous research work carried out by RWTH Aachen University and ThyssenKrupp Steel Europe AG. With a combination of sophisticated simulation tools and experimental techniques it is possible to predict the relations between the temperature distribution in the mould, the solidification velocity, the chemical steel composition and, furthermore, the mechanical properties of the steel shell. Simulation results as well as experimentally observed microstructure parameters are used as input data for hot tearing criteria. A critical selection of existing hot tearing criteria based on different approaches, such as critical strain and critical strain rate, is applied and further developed. The new "damage model" is going to replace a basic approach for determining hot cracking susceptibility in a mechanical FEM strand model for continuous slab casting at ThyssenKrupp Steel Europe AG. Critical strains for hot cracking in continuous casting were investigated by in situ tensile tests for four steel grades with carbon contents in the range of 0.036 to 0.76 wt%. In addition to modeling, fractography of laboratory and industrial samples was carried out by SEM and EPMA, and the results are discussed. [source]


    Methodology for Thermomechanical Simulation and Validation of Mechanical Weld-Seam Properties

    ADVANCED ENGINEERING MATERIALS, Issue 3 2010
    Wolfgang Bleck
    A simulation and validation of the mechanical properties in submerged-arc-weld seams is presented, which combines numerical simulation of the thermal cycle in the weld using the SimWeld software with an annealing and testing procedure. The weld-seam geometry and thermal profile near the weld seam can be computed based on the simulation of an equivalent heat source describing the energy input and distribution in the weld seam. Defined temperature-time cycles are imposed on tensile specimens, allowing for annealing experiments with fast cooling rates. The direct evaluation of welded structures and the simple generation of input data for mechanical simulations in FE software packages are possible. [source]


    Modeling the Porosity Formation in Austenitic SGI Castings by Using a Physics-Based Material Model

    ADVANCED ENGINEERING MATERIALS, Issue 3 2010
    B. Pustal
    Abstract On solidification, microsegregations build up in solid phases due to changes in solid concentrations with temperature. Diffusion, which is a kinetic process, usually reduces the occurrence of microsegregations. This work is aimed at modeling such kinetic effects on the solidification of austenitic cast iron, using a holistic approach. For this purpose, a microsegregation model is developed and validated. Moreover, this model is directly coupled to a commercial process-simulation tool and thermodynamic software. A series of GJSA-XNiCr 20-2 clamp-rings is cast by varying the inoculation state and the number of feeders. The composition of this cast alloy is analyzed and the microstructure characterized to provide input data for the microsegregation model. In order to validate the software, cooling curves are recorded; differential thermal analysis, energy-dispersive X-ray analysis and electron probe micro-analysis are carried out. Furthermore, the porosity within the casting is analyzed by X-ray. In the coupled simulations, the different cooling characteristics within the casting lead to pronounced differences in phase fractions and solidification temperatures, which are due to dendrite arm coarsening. The hot-spot effect below the feeders is assisted by a shift towards lower solidification temperatures over the solidification time. This shift is a result of the local cooling characteristics, which can only be predicted when process simulation is directly coupled with material simulation. The porosity predictions and the porosity analysis exhibit good agreement. A comparison between experimental and virtual cooling curves shows close agreement, implying that the novel coupling concept and its implementation are valid. [source]


    Analytical Modelling of the Radiative Properties of Metallic Foams: Contribution of X-Ray Tomography

    ADVANCED ENGINEERING MATERIALS, Issue 4 2008
    M. Loretz
    Two metallic foams exhibiting a similar porosity but different cell sizes have been characterized using X-ray tomography. The images have been processed and analysed to retrieve the morphological properties required for the calculation of the radiative properties, such as the extinction coefficient. The additional possibilities offered by the X-ray tomography method compared with conventional optical methods such as SEM have been quantified. The extinction coefficient has then been determined using two approaches. First, the resulting morphological properties have been used as the input data of the conventional independent scattering theory. A special emphasis is put on the determination of the morphological properties and their influence on the results. In the second approach, an original method is also proposed in order to determine the extinction coefficient of highly porous open-cell metal foams from the tomographic images, without any calculation or hypothesis. Results show good agreement with the extinction coefficient obtained from experimental measurements. Our novel method makes it possible to reduce uncertainties considerably. [source]


    A spatially explicit, individual-based model to assess the role of estuarine nurseries in the early life history of North Sea herring, Clupea harengus

    FISHERIES OCEANOGRAPHY, Issue 1 2005
    JOACHIM MAES
    Abstract Herring (Clupea harengus) enter and remain within North Sea estuaries during well-defined periods of their early life history. The costs and benefits of the migrations between offshore spawning grounds and upper, low-salinity zones of estuarine nurseries are identified using a dynamic state-variable model, in which the fitness of an individual is maximized by selecting the most profitable habitat. Spatio-temporal gradients in temperature, turbidity, food availability and predation risk simulate the environment. We modeled predation as a function of temperature, the optical properties of the ambient water, the time allocation of feeding and the abundance of whiting (Merlangius merlangus). Growth and metabolic costs were assessed using a bioenergetic model. Model runs using real input data for the Scheldt estuary (Belgium, The Netherlands) and the southern North Sea show that estuarine residence results in fitter individuals through a considerable increase in survival probability of age-0 fish. Young herring pay for their migration into safer estuarine water by foregoing growth opportunities at sea. We suggest that temperature and, in particular, the time lag between estuarine and seawater temperatures, act as a basic cue for herring to navigate in the heterogeneous space between the offshore spawning grounds at sea and the oligohaline nursery zone in estuaries. [source]
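
A hedged, heavily simplified sketch of a dynamic state-variable habitat model of the kind described above: backward induction over weekly steps, a single state (body mass), and two habitats (estuary: safer but slower growth; sea: faster growth but riskier). All survival and growth numbers are invented for illustration; the real model adds temperature, turbidity, food and size-dependent predation.

```python
import numpy as np

T = 20                       # weekly decision steps in the juvenile season (hypothetical)
masses = np.arange(1, 41)    # discretized state: body mass in arbitrary units

# Hypothetical weekly survival probabilities and growth increments per habitat.
habitats = {
    "estuary": {"survival": 0.985, "growth": 1},   # safer, slower growth
    "sea":     {"survival": 0.950, "growth": 2},   # riskier, faster growth
}

# Terminal fitness: probability of surviving the next stage increases with mass.
F = np.clip(masses / masses.max(), 0, 1).astype(float)

policy = np.empty((T, masses.size), dtype=object)
for t in reversed(range(T)):
    F_next = F.copy()
    for i, m in enumerate(masses):
        best_val, best_h = -1.0, None
        for name, h in habitats.items():
            m_new = min(m + h["growth"], masses.max())
            val = h["survival"] * F_next[m_new - 1]    # expected future fitness
            if val > best_val:
                best_val, best_h = val, name
        F[i], policy[t, i] = best_val, best_h

print("optimal habitat at t=0 by body mass:")
for m in (2, 10, 30):
    print(f"  mass {m}: {policy[0, m - 1]}")
# Which habitat wins depends entirely on the assumed trade-off between
# predation risk and growth, which is the point of the state-variable approach.
```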


    An individual-based model of the early life history of mackerel (Scomber scombrus) in the eastern North Atlantic, simulating transport, growth and mortality

    FISHERIES OCEANOGRAPHY, Issue 6 2004
    J. Bartsch
    Abstract The main purpose of this paper is to provide the core description of the modelling exercise within the Shelf Edge Advection Mortality And Recruitment (SEAMAR) programme. An individual-based model (IBM) was developed for the prediction of year-to-year survival of the early life-history stages of mackerel (Scomber scombrus) in the eastern North Atlantic. The IBM is one of two components of the model system. The first component is a circulation model to provide physical input data for the IBM. The circulation model is a geographical variant of the HAMburg Shelf Ocean Model (HAMSOM). The second component is the IBM, which is an i-space configuration model in which large numbers of individuals are followed as discrete entities to simulate the transport, growth and mortality of mackerel eggs, larvae and post-larvae. Larval and post-larval growth is modelled as a function of length, temperature and food distribution; mortality is modelled as a function of length and absolute growth rate. Each particle is considered as a super-individual representing 10^6 eggs at the outset of the simulation, and then declining according to the mortality function. Simulations were carried out for the years 1998-2000. Results showed concentrations of particles at Porcupine Bank and the adjacent Irish shelf, along the Celtic Sea shelf-edge, and in the southern Bay of Biscay. High survival was observed only at Porcupine and the adjacent shelf areas, and, more patchily, around the coastal margin of Biscay. The low survival along the shelf-edge of the Celtic Sea was due to the consistently low estimates of food availability in that area. [source]


    A covariance-adaptive approach for regularized inversion in linear models

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2007
    Christopher Kotsakis
    SUMMARY The optimal inversion of a linear model in the presence of additive random noise in the input data is a typical problem in many geodetic and geophysical applications. Various methods have been developed and applied for the solution of this problem, ranging from the classic principle of least-squares (LS) estimation to other more complex inversion techniques such as the Tikhonov-Phillips regularization, truncated singular value decomposition, generalized ridge regression, numerical iterative methods (Landweber, conjugate gradient) and others. In this paper, a new type of optimal parameter estimator for the inversion of a linear model is presented. The proposed methodology is based on a linear transformation of the classic LS estimator and it satisfies two basic criteria. First, it provides a solution for the model parameters that is optimally fitted (in an average quadratic sense) to the classic LS parameter solution. Second, it complies with an external user-dependent constraint that specifies a priori the error covariance (CV) matrix of the estimated model parameters. The formulation of this constrained estimator offers a unified framework for the description of many regularization techniques that are systematically used in geodetic inverse problems, particularly for those methods that correspond to an eigenvalue filtering of the ill-conditioned normal matrix in the underlying linear model. The value of our study lies in the fact that it adds an alternative perspective on the statistical properties and the regularization mechanism of many inversion techniques commonly used in geodesy and geophysics, by interpreting them as a family of 'CV-adaptive' parameter estimators that obey a common optimal criterion and differ only in the pre-selected form of their error CV matrix under a fixed model design. [source]
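
A hedged numerical sketch of the "eigenvalue filtering" viewpoint mentioned above: for an ill-conditioned design matrix, compare the plain LS solution with a Tikhonov-regularized one, which damps the contribution of small singular values and correspondingly reshapes the parameter error covariance. The test problem and regularization parameter are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(9)

# Ill-conditioned linear model  y = A x + noise  (nearly collinear columns).
n, m = 50, 5
base = rng.normal(size=(n, 1))
A = np.hstack([base + 1e-3 * rng.normal(size=(n, 1)) for _ in range(m)])
x_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
y = A @ x_true + 0.01 * rng.normal(size=n)

# Plain least squares vs. Tikhonov (ridge) regularization.
lam = 1e-2
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
N = A.T @ A
x_reg = np.linalg.solve(N + lam * np.eye(m), A.T @ y)

# Eigenvalue-filter interpretation: each singular value s is weighted by s^2/(s^2+lam).
s = np.linalg.svd(A, compute_uv=False)
print("filter factors:", np.round(s**2 / (s**2 + lam), 3))
print("||x_LS|| =", round(np.linalg.norm(x_ls), 2),
      " ||x_reg|| =", round(np.linalg.norm(x_reg), 2))
# The regularized estimator trades bias for a much smaller error covariance,
# which is the kind of 'CV-adaptive' behaviour the paper formalizes.
```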


    Studies on 'precarious rocks' in the epicentral area of the AD 1356 Basle earthquake, Switzerland

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2005
    Peter Schürch
    SUMMARY For the first time precarious rocks have been analysed in the epicentral area of the AD 1356 Basle earthquake in northern Switzerland. Several cliff sites in flat-lying, thickly bedded Upper Jurassic coral limestones in the Jura Mountains were investigated. Seven blocks are regarded as precarious with respect to earthquake strong ground motions. The age of these precarious rocks could not be determined directly as for instance by radiometric dating methods; however, based on slope degradation processes it can be concluded that the formation of these blocks predates the AD 1356 Basle earthquake. The acceleration required to topple a precarious rock from its pedestal is estimated using geometrical data for individual block sections and earthquake strong-motion records from stations on rock sites in the European Strong-Motion Database as input data for the computer program ROCKING V1.0 from the Seismological Laboratory, University of Nevada, Reno. The calculations indicate that toppling of a precarious rock largely depends on earthquake strength but also on the frequency spectrum of the signal. Although most investigated precarious rocks are surprisingly stable for ground motions similar to those expected to have occurred during the AD 1356 Basle earthquake, at least two blocks are clearly precariously balanced, with peak toppling accelerations lower than 0.3 g. Possible reasons why these blocks did not topple during the AD 1356 Basle earthquake include incomplete separation from their base, sliding of precarious rocks, their size, lower than assumed ground accelerations and/or duration of shaking. [source]


    Adaptive subtraction of multiples using the L1-norm

    GEOPHYSICAL PROSPECTING, Issue 1 2004
    A. Guitton
    ABSTRACT A strategy for multiple removal consists of estimating a model of the multiples and then adaptively subtracting this model from the data by estimating shaping filters. A possible and efficient way of computing these filters is by minimizing the difference or misfit between the input data and the filtered multiples in a least-squares sense. Therefore, the signal is assumed to have minimum energy and to be orthogonal to the noise. Some problems arise when these conditions are not met. For instance, for strong primaries with weak multiples, we might fit the multiple model to the signal (primaries) and not to the noise (multiples). Consequently, when the signal does not exhibit minimum energy, we propose using the L1 -norm, as opposed to the L2 -norm, for the filter estimation step. This choice comes from the well-known fact that the L1 -norm is robust to ,large' amplitude differences when measuring data misfit. The L1 -norm is approximated by a hybrid L1/L2 -norm minimized with an iteratively reweighted least-squares (IRLS) method. The hybrid norm is obtained by applying a simple weight to the data residual. This technique is an excellent approximation to the L1 -norm. We illustrate our method with synthetic and field data where internal multiples are attenuated. We show that the L1 -norm leads to much improved attenuation of the multiples when the minimum energy assumption is violated. In particular, the multiple model is fitted to the multiples in the data only, while preserving the primaries. [source]