Input Variables (input + variable)



Selected Abstracts


Predicting summer rainfall in the Yangtze River basin with neural networks

INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 7 2008
Heike Hartmann
Abstract Summer rainfall in the Yangtze River basin is predicted using neural network techniques. Input variables (predictors) for the neural network are the Southern Oscillation Index (SOI), the East Atlantic/Western Russia (EA/WR) pattern, the Scandinavia (SCA) pattern, the Polar/Eurasia (POL) pattern and several indices calculated from sea surface temperatures (SST), sea level pressures (SLP) and snow data from December to April for the period from 1993 to 2002. The output variable of the neural network is rainfall from May to September for the period from 1994 to 2002, which was previously classified into six different regions by means of a principal component analysis (PCA). Rainfall is predicted from May to September 2002. The winter SST and SLP indices are identified to be the most important predictors of summer rainfall in the Yangtze River basin. The Tibetan Plateau snow depth, the SOI and the other teleconnection indices seem to be of minor importance for an accurate prediction. This may be the result of the length of the available time series, which does not allow a deeper analysis of the impact of multi-annual oscillations. The neural network algorithms proved to be capable of explaining most of the rainfall variability in the Yangtze River basin. For five out of six regions, our predictions explain at least 77% of the total variance of the measured rainfall. Copyright © 2007 Royal Meteorological Society [source]
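The kind of model this abstract describes can be sketched as a small feedforward network mapping winter predictor indices to a summer rainfall signal. The sketch below is a minimal one-hidden-layer network trained by batch gradient descent on synthetic data; the architecture, predictor count, and data are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

# Synthetic stand-ins for winter predictor indices (e.g. SST/SLP indices)
# and a nonlinear "summer rainfall" target. None of this is the paper's data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                          # 5 predictor indices
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = np.tanh(X @ true_w) + 0.1 * rng.normal(size=200)   # nonlinear target

# One hidden layer: 5 inputs -> 8 tanh units -> 1 linear output (assumed sizes)
W1 = rng.normal(scale=0.5, size=(5, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(3000):                                  # plain batch gradient descent
    H = np.tanh(X @ W1 + b1)                           # hidden activations
    err = (H @ W2 + b2).ravel() - y                    # prediction error
    gW2 = H.T @ err[:, None] / len(y)
    dH = (err[:, None] @ W2.T) * (1.0 - H**2)          # backpropagate through tanh
    W2 -= lr * gW2
    b2 -= lr * err.mean(keepdims=True)
    W1 -= lr * X.T @ dH / len(y)
    b1 -= lr * dH.mean(axis=0)

# Fraction of target variance explained, analogous to the paper's 77%+ figures
r2 = 1.0 - np.sum(err**2) / np.sum((y - y.mean())**2)
print(round(r2, 2))
```

On this synthetic task the network recovers most of the target variance, which is the sense in which the abstract's models "explain" rainfall variability.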


Assessment of salmon stocks and the use of management targets: a case study of the River Tamar, England

FISHERIES MANAGEMENT & ECOLOGY, Issue 1 2007
K. HENDRY
Abstract Over recent years the rod and net catch of Atlantic salmon, Salmo salar L., on the River Tamar in south-west England has decreased markedly, resulting in a consistent failure to meet the minimum egg deposition target (conservation limit). Compliance with the target is assessed annually using rod catch as the major input variable. Further analysis suggested a disproportionate deterioration in the rod fishery performance of the Tamar compared with rivers locally, regionally and nationally. A concomitant decrease in rod licence sales and fishing effort, above both national and regional trends, was also evident. However, examination of juvenile electric fishing and adult fish counter data revealed a different trend over the past 10 years, indicating a stable fish population, albeit at a lower level of abundance than previously. The analyses suggested that, without consideration of changes in effort and rod exploitation rate, rod catch alone is not a reliable indicator of stock abundance and hence should not be used as such in stock assessment. [source]


Experimental design comparison of studies evaluating doxorubicin nanoparticles in breast cancer therapy

HUMAN FACTORS AND ERGONOMICS IN MANUFACTURING & SERVICE INDUSTRIES, Issue 3 2008
Farman A. Moayed
Background The unique properties of nanoparticles (NP) qualify these colloidal systems for a wide range of medical applications, including diagnosis and treatment. Particularly in cancer therapy, NP have significantly enhanced the potential of conventional imaging, radiotherapy, and chemotherapy and, consequently, offered new avenues for early interventions. So far, breast cancer has been one of the cancer types most studied in NP research, which may benefit the increasing number of women in the industrial labor force who are at risk of occupational breast cancer. Objectives The objective of this study is to compare the experimental designs of preclinical studies that assessed the effect of doxorubicin NP (DOX-NP) on the estrogen-dependent MCF-7 breast cancer cell line, using a recently established quantitative Experimental Appraisal Instrument (ExpAI). Methods A systematic review of research articles published between August 2004 and August 2005 on NP and breast cancer treatment with doxorubicin was performed using various online databases and indexes available through the University of Cincinnati. Restrictive inclusion and exclusion criteria were defined, leading to the selection of four relevant articles that used comparable experimental designs. Critical appraisal of those studies was performed by five independent assessors using ExpAI version 2.0, and the results were summarized in a table of evidence. Results The study design in the selected articles was either between-groups or mixed, with sample sizes varying from n = 3 to n = 6, and the effect of DOX-NP evaluated either in vitro or in vivo. The cytotoxic drug doxorubicin was the input variable in all studies, whereas different end points such as pharmacokinetic parameters, cytotoxicity surrogates (e.g., growth inhibition, mitochondrial activity), and quantitative analysis of messenger RNA were used as output variables.
Conclusions Although the articles assessed here were preclinical experimental studies, the results showed that doxorubicin NP drugs can effectively enhance the delivery process in MCF-7 breast cancer cells by increasing circulation time and targeting tumor tissues. Considering the rising number of women in the labor force and the risk of occupational breast cancer, it can be concluded that DOX-NP may potentially be used as an effective anticancer drug in humans, but further research is required to understand how DOX-NP drugs react in the human body before they are used on breast cancer patients. © 2008 Wiley Periodicals, Inc. [source]


Support vector machines-based modelling of seismic liquefaction potential

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 10 2006
Mahesh Pal
Abstract This paper investigates the potential of a support vector machine (SVM)-based classification approach to assess liquefaction potential from actual standard penetration test (SPT) and cone penetration test (CPT) field data. SVMs are based on statistical learning theory and have been found to work well in comparison to neural networks in several other applications. Both CPT and SPT field data sets are used with SVMs for predicting the occurrence and non-occurrence of liquefaction based on different input parameter combinations. With the SPT and CPT data sets, highest accuracies of 96% and 97%, respectively, were achieved. This suggests that SVMs can effectively be used to model the complex relationship between different soil parameters and liquefaction potential. Several other combinations of input variables were used to assess the influence of different input parameters on liquefaction potential. The proposed approach suggests that neither the normalized cone resistance value with CPT data nor the calculation of the standardized SPT value is required. Further, SVMs require few user-defined parameters and provide better performance in comparison to the neural network approach. Copyright © 2006 John Wiley & Sons, Ltd. [source]
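The occurrence/non-occurrence classification described here can be illustrated with a linear soft-margin SVM trained by gradient descent on the hinge loss. The feature names below echo the abstract's setting (SPT blow count, cyclic stress ratio), but the data, labels, and linear-kernel choice are assumptions for the sketch, not the paper's setup.

```python
import numpy as np

# Synthetic "field" records: blow count and cyclic stress ratio, with a
# made-up linear rule deciding whether liquefaction occurred.
rng = np.random.default_rng(1)
n = 400
blow_count = rng.uniform(5, 40, n)            # synthetic SPT blow counts
csr = rng.uniform(0.05, 0.5, n)               # synthetic cyclic stress ratios
y = np.where(0.8 * csr - 0.01 * blow_count > 0.05, 1, -1)  # liquefied vs not
X = np.column_stack([blow_count, csr])
X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize the inputs

# Gradient descent on the regularized hinge loss: lam*||w||^2/2 + mean hinge
w, b, lam, lr = np.zeros(2), 0.0, 1e-3, 0.1
for _ in range(2000):
    margins = y * (X @ w + b)
    viol = margins < 1                        # points violating the margin
    gw = lam * w - (y[viol][:, None] * X[viol]).sum(axis=0) / n
    gb = -y[viol].sum() / n
    w -= lr * gw
    b -= lr * gb

accuracy = np.mean(np.sign(X @ w + b) == y)
print(round(accuracy, 2))
```

With a separable synthetic rule the classifier reaches high accuracy, mirroring (not reproducing) the 96-97% figures quoted in the abstract.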


A hybrid neural network for input that is both categorical and quantitative

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 10 2004
Roelof K. Brouwer
The data on which an MLP (multilayer perceptron) is normally trained to approximate a continuous function may include inputs that are categorical, such as gender or race, in addition to numeric or quantitative inputs. The approach examined in this article is to train a hybrid network consisting of an MLP and an encoder with multiple output units; that is, a separate output unit for each of the various combinations of values of the categorical variables. Input to the feedforward subnetwork of the hybrid network is then restricted to truly numerical quantities. An MLP, with connection matrices that multiply input values and sigmoid functions that further transform them, represents a continuous mapping in all input variables; it therefore requires that all inputs correspond to numeric, continuously valued variables. A categorical variable, on the other hand, produces a discontinuous relationship between an input variable and the output. This problem is often dealt with by replacing the categorical values with numeric ones and treating them as if they were continuously valued. However, there is no meaningful correspondence between the continuous quantities generated this way and the original categorical values: such a substitution defines a metric for the categories that may not be reasonable. This suggests that the categorical inputs should be segregated from the continuous inputs as explained above. Results show that the method utilizing a hybrid network and separating categorical from quantitative input, as discussed here, is quite effective. © 2004 Wiley Periodicals, Inc. Int J Int Syst 19: 979–1001, 2004. [source]
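The input-splitting idea can be sketched concretely: categorical variables are routed to an encoder with one output unit per combination of category values, while only the truly numeric variables feed the MLP subnetwork. The category names and numeric features below are illustrative examples, not the article's data.

```python
import numpy as np
from itertools import product

# One encoder output unit per combination of categorical values
# (here 2 genders x 3 race codes = 6 combinations, purely illustrative).
genders = ["F", "M"]
races = ["A", "B", "C"]
combos = list(product(genders, races))

def encode(gender, race):
    """One-hot vector over all (gender, race) combinations."""
    vec = np.zeros(len(combos))
    vec[combos.index((gender, race))] = 1.0
    return vec

# Truly numeric inputs go to the feedforward subnetwork unchanged
numeric = np.array([0.7, -1.2])            # e.g. standardized age, exposure hours
hybrid_input = np.concatenate([encode("M", "B"), numeric])
print(hybrid_input.shape)                  # 6 encoder units + 2 numeric inputs
```

Because each combination gets its own unit, no artificial ordering or distance between categories is imposed, which is exactly the metric problem the abstract describes.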


Monitoring cascade processes using VSI EWMA control charts

JOURNAL OF CHEMOMETRICS, Issue 9 2009
Su-Fen Yang
Abstract The paper considers a variables process control scheme for cascade processes. We construct variable sampling interval (VSI) EWMA and EWMA control charts to effectively monitor the input variable and the output variable produced by a cascade process. Numerical analysis results demonstrate that the performance of the VSI control charts is much better than that of the fixed sampling interval (FSI) control charts in detecting small and medium shifts. Copyright © 2009 John Wiley & Sons, Ltd. [source]
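The VSI mechanism can be sketched in a few lines: the EWMA statistic is updated as usual, but the time to the next sample shortens whenever the statistic enters a warning region near the control limits. The smoothing constant, limit multipliers, and interval lengths below are illustrative choices, not the chart design from the paper.

```python
import numpy as np

lam = 0.2                                  # EWMA smoothing constant (assumed)
sigma_z = np.sqrt(lam / (2.0 - lam))       # asymptotic std dev of the EWMA for N(0,1) data
ucl = 3.0 * sigma_z                        # control limit (in-control mean 0, sd 1)
warn = 1.5 * sigma_z                       # warning limit that switches the interval

def vsi_ewma(x, z0=0.0):
    """Return (ewma value, next sampling interval, out-of-control?) per point."""
    z, out = z0, []
    for xi in x:
        z = lam * xi + (1.0 - lam) * z
        interval = 0.25 if abs(z) > warn else 1.0   # short vs long sampling interval
        out.append((z, interval, abs(z) > ucl))
    return out

# 50 in-control observations, then an upward mean shift of 1 sd
rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(1.0, 1.0, 100)])
chart = vsi_ewma(data)
signals = [i for i, (_, _, s) in enumerate(chart) if s]
print(len(signals) > 0)
```

Sampling faster only when the statistic looks suspicious is what lets a VSI chart beat its FSI counterpart on time-to-signal for small and medium shifts.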


A ground-level ozone forecasting model for Santiago, Chile

JOURNAL OF FORECASTING, Issue 6 2002
Héctor Jorquera
Abstract A physically based model for ground-level ozone forecasting is evaluated for Santiago, Chile. The model predicts the daily peak ozone concentration, with the daily rise of air temperature as input variable; weekends and rainy days appear as interventions. This model was used to analyse historical data, using the Linear Transfer Function/Finite Impulse Response (LTF/FIR) formalism; the Simultaneous Transfer Function (STF) method was used to analyse several monitoring stations together. Model evaluation showed a good forecasting performance across stations, for both low and high ozone impacts, with power of detection (POD) values between 70% and 100%, Heidke's Skill Scores between 40% and 70%, and low false alarm rates (FAR). The model consistently outperforms a pure persistence forecast. Model performance was not sensitive to different implementation options. The model performance degrades for two- and three-day-ahead forecasts, but is still acceptable for the purpose of developing an environmental warning system for Santiago. Copyright © 2002 John Wiley & Sons, Ltd. [source]


Team conflict management and team effectiveness: the effects of task interdependence and team identification

JOURNAL OF ORGANIZATIONAL BEHAVIOR, Issue 3 2009
Anit Somech
The present study explores the dynamics of conflict management as a team phenomenon. The study examines how the input variable of task structure (task interdependence) is related to team conflict management style (cooperative versus competitive) and to team performance, and how team identity moderates these relationships. Seventy-seven intact work teams from high-technology companies participated in the study. Results revealed that at high levels of team identity, task interdependence was positively associated with the cooperative style of conflict management, which in turn fostered team performance. Although a negative association was found between competitive style and team performance, this style of team conflict management did not mediate between the interactive effect of task interdependence and team identity on team performance. Copyright © 2008 John Wiley & Sons, Ltd. [source]


PREDICTION OF LOCAL SCOUR AROUND BRIDGE PIERS USING ARTIFICIAL NEURAL NETWORKS

JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 2 2006
Sung-Uk Choi
ABSTRACT: This paper describes a method for predicting local scour around bridge piers using an artificial neural network (ANN). Methods for selecting input variables, calibration of network control parameters, the learning process, and verification are also discussed. The ANN model, trained on laboratory data, is applied to both laboratory and field measurements. The results illustrate that the ANN model can predict local scour in the laboratory and in the field better than other empirical relationships currently in use. A parameter study is also carried out to investigate the importance of each input variable as reflected in the data. [source]


Probabilistic Neural Network for Reliability Assessment of Oil and Gas Pipelines

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2002
Sunil K. Sinha
A fuzzy artificial neural network (ANN)-based approach is proposed for reliability assessment of oil and gas pipelines. The proposed ANN model is trained with field observation data collected using magnetic flux leakage (MFL) tools to characterize the actual condition of aging pipelines vulnerable to metal loss corrosion. The objective of this paper is to develop a simulation-based probabilistic neural network model to estimate the probability of failure of aging pipelines vulnerable to corrosion. The approach is to transform a simulation-based probabilistic analysis framework for estimating pipeline reliability into an adaptable connectionist representation, using supervised training to initialize the weights so that the adaptable neural network predicts the probability of failure for oil and gas pipelines. This ANN model uses eight pipe parameters as input variables. The output variable is the probability of failure. The proposed method is generic, and it can be applied to several decision problems related to the maintenance of aging engineering systems. [source]


Assessment of shallow landslide susceptibility by means of multivariate statistical techniques

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 12 2001
Cristina Baeza
Abstract Several multivariate statistical analyses have been performed to identify the geological and geomorphological parameters most influential on shallow landsliding and to quantify their relative contributions. A data set was first prepared including more than 30 attributes of 230 failed and unfailed slopes. Principal component analysis, t-tests and one-way tests allowed a preliminary selection of the most significant variables, which were used as input variables for the discriminant analysis. The function obtained successfully classified 88·5 per cent of the overall slope population and 95·6 per cent of the failed slopes. Slope gradient, watershed area and land-use appeared as the most powerful discriminant factors. A landslide susceptibility map, based on the scores of the discriminant function, has been prepared for the Ensija range in the Eastern Pyrenees. An index of relative landslide density shows that the results of the map are consistent. Copyright © 2001 John Wiley & Sons, Ltd. [source]
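The discriminant step can be sketched with a two-class Fisher discriminant separating "failed" from "unfailed" slopes. The data below are synthetic, with only a slope-gradient-like feature carrying real separation; the feature names, class means, and two-feature setup are assumptions for illustration, not the study's 30+ attributes.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 115
# Synthetic attributes: (slope gradient in degrees, an uninformative attribute)
failed = np.column_stack([rng.normal(35, 5, n), rng.normal(0, 1, n)])
unfailed = np.column_stack([rng.normal(20, 5, n), rng.normal(0, 1, n)])
X = np.vstack([failed, unfailed])
y = np.array([1] * n + [0] * n)

m1, m0 = failed.mean(axis=0), unfailed.mean(axis=0)
# Pooled within-class scatter matrix
Sw = np.cov(failed.T) * (n - 1) + np.cov(unfailed.T) * (n - 1)
w = np.linalg.solve(Sw, m1 - m0)           # Fisher discriminant direction
threshold = w @ (m1 + m0) / 2              # midpoint between projected class means

pred = (X @ w > threshold).astype(int)
accuracy = np.mean(pred == y)
print(round(accuracy, 3))
```

The discriminant weight on the informative feature dominates, which is the sketch analogue of slope gradient emerging as the most powerful discriminant factor.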


Data Mining for Bioprocess Optimization

ENGINEERING IN LIFE SCIENCES (ELECTRONIC), Issue 3 2004
S. Rommel
Abstract Although developed for completely different applications, the great technological potential of the data analysis methods called "data mining" has increasingly been realized as a means of efficiently identifying optimization potential and troubleshooting within many application areas of process technology. This paper presents the successful application of data mining methods to the optimization of a fermentation process, and discusses characteristics of data mining specific to biological processes. For the optimization of biological processes a huge number of possibly relevant process parameters exist. These input variables can be device parameters as well as process control parameters. The main challenge of such optimizations is to robustly identify relevant combinations of parameters among a huge number of process parameters. For the underlying process we found, using data mining methods, that the moment at which a particular carbohydrate component is added has a strong impact on the formation of secondary components. The yield could also be increased by using 2 m3 fermentors instead of 1 m3 fermentors. [source]


Prediction of municipal solid waste generation with combination of support vector machine and principal component analysis: A case study of Mashhad

ENVIRONMENTAL PROGRESS & SUSTAINABLE ENERGY, Issue 2 2009
R. Noori
Abstract Quantity prediction of municipal solid waste (MSW) is crucial for the design and programming of a municipal solid waste management system (MSWMS). Because of the effect of various parameters on MSW quantity and its high fluctuation, prediction of generated MSW is a difficult task that can lead to enormous error. The work presented here involves developing an improved support vector machine (SVM) model, which combines the principal component analysis (PCA) technique with the SVM to forecast the weekly generated waste of Mashhad city. In this study, the PCA technique was first used to reduce and orthogonalize the original input variables (data). These treated data were then used as new input variables in the SVM model. The improved model was evaluated using weekly time series of waste generation (WG) and of the number of trucks carrying waste in week t. These data were collected from 2005 to 2008. By comparing the predicted WG with the observed data, the effectiveness of the proposed model was verified. Therefore, in the authors' opinion, the model presented in this article is a potential tool for predicting WG and has advantages over the traditional SVM model. © 2008 American Institute of Chemical Engineers Environ Prog, 2009 [source]
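The preprocessing idea, PCA first orthogonalizing and reducing the inputs, then a regressor fitting the treated data, can be sketched as below. For brevity an ordinary least-squares fit stands in for the SVM regressor; the data, dimensions, and retained component count are synthetic assumptions, not the Mashhad series.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 6))
X[:, 3] = X[:, 0] + 0.05 * rng.normal(size=150)   # a nearly duplicated input
y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=150)

# PCA via eigendecomposition of the covariance matrix
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
vals, vecs = np.linalg.eigh(cov)
order = np.argsort(vals)[::-1]                    # components by decreasing variance
k = 5                                             # drop the near-zero-variance component
scores = Xc @ vecs[:, order[:k]]                  # orthogonal "treated" inputs

# Least-squares regression on the component scores (SVM stand-in)
coef, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
pred = scores @ coef + y.mean()
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 2))
```

The redundant input collapses into a negligible component that can be discarded with almost no loss, which is the benefit the abstract claims for feeding PCA scores rather than raw variables into the SVM.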


Evaluating and expressing the propagation of uncertainty in chemical fate and bioaccumulation models

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 4 2002
Matthew MacLeod
Abstract First-order analytical sensitivity and uncertainty analysis for environmental chemical fate models is described and applied to a regional contaminant fate model and a food web bioaccumulation model. By assuming linear relationships between inputs and outputs, independence, and log-normal distributions of input variables, a relationship between uncertainty in input parameters and uncertainty in output parameters can be derived, yielding results that are consistent with a Monte Carlo analysis with similar input assumptions. A graphical technique is devised for interpreting and communicating uncertainty propagation as a function of variance in input parameters and model sensitivity. The suggested approach is less computationally intensive than Monte Carlo analysis and is appropriate for preliminary assessment of uncertainty when models are applied to generic environments or to large geographic areas or when detailed parameterization of input uncertainties is unwarranted or impossible. This approach is particularly useful as a starting point for identification of sensitive model inputs at the early stages of applying a generic contaminant fate model to a specific environmental scenario, as a tool to support refinements of the model and the uncertainty analysis for site-specific scenarios, or for examining defined end points. The analysis identifies those input parameters that contribute significantly to uncertainty in outputs, enabling attention to be focused on defining median values and more appropriate distributions to describe these variables. [source]
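The first-order idea can be made concrete with a multiplicative toy model and independent log-normal inputs: the variance of the log-output is the sum of sensitivity-weighted log-variances, and each term's share identifies the dominant contributor. The model, parameter names, and geometric standard deviations below are illustrative, not from the paper.

```python
import numpy as np

# Toy fate model: C = E / (k * V)  (emission rate / loss rate / volume), assumed.
gsd = {"E": 1.5, "k": 2.0, "V": 1.2}        # geometric standard deviations (assumed)
sens = {"E": 1.0, "k": -1.0, "V": -1.0}     # d(ln C)/d(ln input) sensitivities

# First-order propagation in log space: Var(ln C) = sum of (S_i * ln GSD_i)^2
var_logC = sum((sens[p] * np.log(g)) ** 2 for p, g in gsd.items())
gsd_C = np.exp(np.sqrt(var_logC))           # analytical GSD of the output
contrib = {p: (sens[p] * np.log(g)) ** 2 / var_logC for p, g in gsd.items()}

# Monte Carlo check under the same independence/log-normal assumptions
rng = np.random.default_rng(4)
n = 100_000
samples = {p: rng.lognormal(0.0, np.log(g), n) for p, g in gsd.items()}
C = samples["E"] / samples["k"] / samples["V"]
gsd_mc = np.exp(np.log(C).std())
print(round(gsd_C, 3), round(gsd_mc, 3))    # the two estimates should agree
```

Here the loss rate `k` carries the largest share of output variance, so it is the input on which attention to median values and distributions would be focused first.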


Pedotransfer functions for solute transport parameters of Portuguese soils

EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 4 2001
M. C. Gonçalves
Summary The purpose of this study is to quantify solute transport parameters of fine-textured soils in an irrigation district in southern Portugal and to investigate their prediction from basic soil properties and unsaturated hydraulic parameters. Solute displacement experiments were carried out on 24 undisturbed soil samples by applying a 0.05 m KCl pulse during steady flow. The chloride breakthrough curves (BTCs) were asymmetric, with early breakthrough and considerable tailing characteristic of non-equilibrium transport. The retardation factor (R), dispersion coefficient (D), partitioning coefficient (β), and mass transfer coefficient (ω) were estimated by optimizing the solution of the non-equilibrium convection–dispersion equation (CDE) to the breakthrough data. The solution could adequately describe the observed data, as shown by a median of 0.972 for the coefficient of determination (r2) and a median mean squared error (MSE) of 5.1 × 10−6. The median value for R of 0.587 suggests that Cl− was excluded from a substantial part of the liquid phase. The value for β was typically less than 0.5, but the non-equilibrium effects were mitigated by a large mass transfer coefficient (ω > 1). Pedotransfer functions (PTFs) were developed with regression and neural network analyses to predict R, D, β and ω from basic soil properties and unsaturated hydraulic parameters. Fairly accurate predictions could be obtained for log D (r2 ≈ 0.9) and β (r2 ≈ 0.8). Predictions for R and log ω were relatively poor (r2 ≈ 0.5). The artificial neural networks were all somewhat more accurate than the regression equations. The networks are also more suitable for predicting transport parameters because they require only three input variables, whereas the regression equations contain many predictor variables. [source]


Winter diatom blooms in a regulated river in South Korea: explanations based on evolutionary computation

FRESHWATER BIOLOGY, Issue 10 2007
DONG-KYUN KIM
Summary 1. An ecological model was developed using genetic programming (GP) to predict the time-series dynamics of the diatom Stephanodiscus hantzschii for the lower Nakdong River, South Korea. Eight years of weekly data showed the river to be hypertrophic (chl a 45.1 ± 4.19 μg L−1, mean ± SE, n = 427), and S. hantzschii annually formed blooms during the winter to spring flow period (late November to March). 2. A simple non-linear equation was created to produce a 3-day sequential forecast of the species biovolume, by means of time series optimization genetic programming (TSOGP). Training data were used in conjunction with a GP algorithm utilizing 7 years of limnological variables (1995–2001). The model was validated by comparing its output with measurements for a specific year with severe blooms (1994). The model accurately predicted the timing of the blooms although it slightly underestimated biovolume (training r2 = 0.70, test r2 = 0.78). The model consisted of the following variables: dam discharge and storage, water temperature, Secchi transparency, dissolved oxygen (DO), pH, evaporation and silica concentration. 3. The application of a five-way cross-validation test suggested that GP was capable of developing models whose input variables were similar, although the data were randomly assigned for training. The similarity of input variable selection was approximately 51% between the best model and the top 20 candidate models out of 150 in total (based on both Root Mean Squared Error and the determination coefficients for the test data). 4. Genetic programming was able to determine the ecological importance of different environmental variables affecting the diatoms. A series of sensitivity analyses showed that water temperature was the most sensitive parameter. In addition, the optimal equation was sensitive to DO, Secchi transparency, dam discharge and silica concentration.
The analyses thus identified likely causes of the proliferation of diatoms in 'river-reservoir hybrids' (i.e. rivers which have the characteristics of a reservoir during the dry season). This result provides specific information about the bloom of S. hantzschii in river systems, as well as the applicability of inductive methods such as evolutionary computation to river-reservoir hybrid systems. [source]


An empirical model of carbon fluxes in Russian tundra

GLOBAL CHANGE BIOLOGY, Issue 2 2001
Dmitri G. Zamolodchikov
Summary This study presents an empirical model based on a GIS approach, constructed to estimate the large-scale carbon fluxes over the entire Russian tundra zone. The model has four main blocks: (i) a computer map of tundra landscapes; (ii) a database of long-term weather records; (iii) a submodel of phytomass seasonal dynamics; and (iv) a submodel of carbon fluxes. The model uses exclusively original in situ diurnal CO2 flux chamber measurements (423 sample plots) conducted during six field seasons (1993–98). The research sites represent the main tundra biome landscapes (arctic, typical, south shrub and mountain tundras) in the latitudinal range 65–74°N and the longitudinal profile 63°E–172°W. The greatest possible diversity of major ecosystem types within the different landscapes was investigated. The majority of the phytomass data used was obtained from the same sample plots. The submodel of carbon fluxes has two dependent variables [GPP and Gross Respiration (GR)] and several input variables (air temperature, PAR, aboveground phytomass components). The model demonstrates a good correspondence with other independent regional and biome estimates and carbon flux seasonal patterns. The annual GPP of the Russian tundra zone, for an area of 235 × 10⁶ ha, was estimated as −485.8 ± 34.6 × 10⁶ tC, GR as +474.2 ± 35.0 × 10⁶ tC, and NF as −11.6 ± 40.8 × 10⁶ tC, which possibly corresponds to an equilibrium state of the carbon balance during the climatic period studied (the first half of the 20th century). The results advocate that simple regression-based models are useful for extrapolating carbon fluxes from small to large spatial scales. [source]


A National Study of Efficiency for Dialysis Centers: An Examination of Market Competition and Facility Characteristics for Production of Multiple Dialysis Outputs

HEALTH SERVICES RESEARCH, Issue 3 2002
Hacer Ozgen
Objective. To examine market competition and facility characteristics that can be related to technical efficiency in the production of multiple dialysis outputs from the perspective of the industrial organization model. Study Setting. Freestanding dialysis facilities that operated in 1997, submitted cost report forms to the Health Care Financing Administration (HCFA), and offered all three outputs: outpatient dialysis, dialysis training, and home program dialysis. Data Sources. The Independent Renal Facility Cost Report Data file (IRFCRD) from HCFA was utilized to obtain information on output and input variables and market and facility features for 791 multiple-output facilities. Information regarding population characteristics was obtained from the Area Resources File. Study Design. Cross-sectional data for the year 1997 were utilized to obtain facility-specific technical efficiency scores estimated through Data Envelopment Analysis (DEA). A binary variable of efficiency status was then regressed against its market and facility characteristics and control factors in a multivariate logistic regression analysis. Principal Findings. The majority of the facilities in the sample are functioning technically inefficiently. Neither the intensity of market competition nor a policy of dialyzer reuse has a significant effect on the facilities' efficiency. Technical efficiency is significantly associated, however, with type of ownership, with the interaction between the market concentration of for-profits and ownership type, and with affiliations with chains of different sizes. Nonprofit and government-owned facilities are more likely than their for-profit counterparts to become inefficient producers of renal dialysis outputs. On the other hand, that relationship between ownership form and efficiency is reversed as the market concentration of for-profits in a given market increases. Facilities that are members of large chains are more likely to be technically inefficient.
Conclusions. Facilities do not appear to benefit from joint production of a variety of dialysis outputs, which may explain the ongoing tendency toward single-output production. Ownership form does make a positive difference in production efficiency, but only in local markets where competition exists between nonprofit and for-profit facilities. The increasing inefficiency associated with membership in large chains suggests that the growing consolidation in the dialysis industry may not, in fact, be the strategy for attaining more technical efficiency in the production of multiple dialysis outputs. [source]


Using feedforward neural networks and forward selection of input variables for an ergonomics data classification problem

HUMAN FACTORS AND ERGONOMICS IN MANUFACTURING & SERVICE INDUSTRIES, Issue 1 2004
Chuen-Lung Chen
A method was developed to accurately predict the risk of injuries in industrial jobs from datasets that do not meet the assumptions of parametric statistical tools or that are incomplete. Previous research used a backward-elimination process for feedforward neural network (FNN) input variable selection. Simulated annealing (SA) was used as a local search method in conjunction with a conjugate-gradient algorithm to develop an FNN. This article presents an incremental step in the use of FNNs for ergonomics analyses, specifically the use of forward selection of input variables. Advantages of this approach include enhancing the effectiveness of neural networks when observations are missing from ergonomics datasets, and preventing overspecification or overfitting of an FNN to training data. Classification performance across the two methods, combining SA with either forward selection or backward elimination of input variables, was comparable for complete datasets, and the forward-selection approach produced results superior to previously used methods of FNN development, including the error back-propagation algorithm, when dealing with incomplete data. © 2004 Wiley Periodicals, Inc. Hum Factors Man 14: 31–49, 2004. [source]
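Greedy forward selection itself is simple to sketch: at each step, add the candidate input that most improves validation error, and stop when no candidate helps. To keep the sketch short, a least-squares model stands in for the FNN-with-simulated-annealing used in the article, and the data are synthetic with only two genuinely informative inputs.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 1] - X[:, 4] + 0.1 * rng.normal(size=200)  # only inputs 1 and 4 matter
train, val = slice(0, 150), slice(150, 200)

def val_mse(cols):
    """Fit least squares on the training rows of the chosen columns; score on validation."""
    A = np.column_stack([X[train][:, cols], np.ones(150)])
    coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    Av = np.column_stack([X[val][:, cols], np.ones(50)])
    return np.mean((Av @ coef - y[val]) ** 2)

selected, best = [], np.inf
while True:
    remaining = [c for c in range(6) if c not in selected]
    if not remaining:
        break
    scores = {c: val_mse(selected + [c]) for c in remaining}
    c, mse = min(scores.items(), key=lambda kv: kv[1])
    if mse >= best:                       # stop when no candidate improves validation error
        break
    selected.append(c)
    best = mse
print(selected)
```

The informative inputs are picked up in order of strength, illustrating why forward selection also guards against overspecifying the model: uninformative inputs simply fail the improvement test.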


Long-term investigations of the snow cover in a subalpine semi-forested catchment

HYDROLOGICAL PROCESSES, Issue 2 2006
Manfred Stähli
Abstract To improve spring runoff forecasts from subalpine catchments, detailed spatial simulations of the snow cover in this landscape are essential. For more than 30 years, the Swiss Federal Research Institute WSL has been conducting extensive snow cover observations in the subalpine watershed Alptal (central Switzerland). This paper summarizes the conclusions from past snow studies in the Alptal valley and presents an analysis of 14 snow courses located at different exposures and altitudes, partly in open areas and partly in forest. The long-term performance of a physically based numerical snow–vegetation–atmosphere model (COUP) was tested with these snow-course measurements. A single parameter set, with meteorological input variables corrected to the prevailing local conditions, resulted in a convincing snow water equivalent (SWE) simulation at most sites and for various winters with a wide range of snow conditions. The snow interception approach used in this study was able to explain the forest effect on SWE as observed on paired snow courses. Finally, we demonstrated for a meadow and a forest site that a successful simulation of the snowpack yields appropriate melt rates. Copyright © 2006 John Wiley & Sons, Ltd. [source]


A robust design method using variable transformation and Gauss–Hermite integration

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 12 2006
Beiqing Huang
Abstract Robust design seeks an optimal solution where the design objective is insensitive to the variations of input variables while the design feasibility under the variations is maintained. Accurate robustness assessment for both design objective and feasibility usually requires an intensive computational effort. In this paper, an accurate robustness assessment method with a moderate computational effort is proposed. The numerical Gauss–Hermite integration technique is employed to calculate the mean and standard deviation of the objective and constraint functions. To effectively use the Gauss–Hermite integration technique, a transformation from a general random variable into a normal variable is performed. The Gauss–Hermite integration and the transformation result in concise formulas and produce an accurate approximation to the mean and standard deviation. This approach is then incorporated into the framework of robust design optimization. The design of a two-bar truss and an automobile torque arm is used to demonstrate the effectiveness of the proposed method. The results are compared with the commonly used Taylor expansion method and Monte Carlo simulation in terms of accuracy and efficiency. Copyright © 2005 John Wiley & Sons, Ltd. [source]
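The core assessment step can be shown directly: for a response g(X) with X ~ N(mu, sigma^2), the change of variable x = mu + sqrt(2)*sigma*t turns the moment integrals into Gauss-Hermite quadratures. The response function below is an illustrative polynomial chosen so the quadrature is exact, not one of the paper's truss or torque-arm models.

```python
import numpy as np

def gh_moments(g, mu, sigma, n=5):
    """Mean and std dev of g(X), X ~ N(mu, sigma^2), by Gauss-Hermite quadrature."""
    t, w = np.polynomial.hermite.hermgauss(n)   # nodes/weights for weight e^{-t^2}
    x = mu + np.sqrt(2.0) * sigma * t           # change of variable to the normal X
    mean = np.sum(w * g(x)) / np.sqrt(np.pi)
    second = np.sum(w * g(x) ** 2) / np.sqrt(np.pi)
    return mean, np.sqrt(second - mean ** 2)

g = lambda x: x ** 2 + 2.0 * x                  # a simple nonlinear response
mean, std = gh_moments(g, mu=1.0, sigma=0.5)
# Exact values: E[g] = (mu^2 + sigma^2) + 2*mu = 3.25; Var[g] = 4.125
print(round(mean, 4), round(std, 4))
```

An n-point rule is exact for polynomial integrands up to degree 2n-1, so five nodes already reproduce these moments exactly, which is the efficiency advantage over Monte Carlo that the abstract claims.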


Synthesis of multiport resistors with piecewise-linear characteristics: a mixed-signal architecture

INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 4 2005
Mauro Parodi
Abstract Non-linear multiport resistors are the main ingredients in the synthesis of non-linear circuits. Recently, a particular PWL representation has been proposed as a generic design platform (IEEE Trans. Circuits Syst.-I 2002; 49:1138–1149). In this paper, we present a mixed-signal circuit architecture, based on standard modules, that allows the electronic integration of non-linear multiport resistors using the mentioned PWL structure. The proposed architecture is fully programmable so that the unit can implement any user-defined non-linearity. Moreover, it is modular: an increment in the number of input variables can be accommodated through the addition of an equal number of input modules. Copyright © 2005 John Wiley & Sons, Ltd. [source]
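The referenced PWL platform has its own multivariable structure; as a generic one-dimensional illustration only, a PWL resistor characteristic can be written in the canonical form y = a + b·x + Σᵢ cᵢ·|x − xᵢ| with breakpoints xᵢ:

```python
def pwl(x, a, b, c, breakpoints):
    """Canonical one-dimensional piecewise-linear characteristic:
    y = a + b*x + sum_i c[i] * |x - breakpoints[i]|."""
    return a + b * x + sum(ci * abs(x - xi) for ci, xi in zip(c, breakpoints))
```

Each (cᵢ, xᵢ) pair contributes one slope change at xᵢ, which is what makes the form programmable: new segments are added by appending coefficients rather than redesigning the circuit.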


Creep-recovery parameters of gluten-free batter and crumb properties of bread prepared from pregelatinised cassava starch, sorghum and selected proteins

INTERNATIONAL JOURNAL OF FOOD SCIENCE & TECHNOLOGY, Issue 12 2009
Calvin Onyango
Summary The effect of egg white, skim milk powder, soy protein isolate and soy protein concentrate on creep-recovery parameters of gluten-free batter made from sorghum and pregelatinised cassava starch was studied. Batter treated with egg white had the highest deformation and compliance parameters and the lowest zero shear viscosities, and differed significantly (P < 0.05) from the other treatments. However, this batter recovered its elasticity sufficiently, and its elastic portion of maximum creep compliance did not differ significantly (P > 0.05) from the other treatments. Unlike the other treatments, egg white did not decrease bread volume, and its bread exhibited the lowest crumb firmness and staling rate. Optimisation of the amount of egg white with diacetyl tartaric acid esters of mono- and diglycerides (DATEM) showed that creep-recovery parameters and crumb hardness were affected by the linear, quadratic and interaction effects of the input variables. Treatment with 6% and 0.1% w/w fwb egg white and DATEM, respectively, gave gluten-free batter with the smallest elastic portion of maximum creep compliance (Je/Jmax = 11.65%), which corresponded to the lowest crumb firmness (790.8 g). [source]
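Creep compliance curves of this kind are commonly described by a Burgers-type model; the model choice and the parameter values below are assumptions for illustration, not taken from the study:

```python
import math

def burgers_compliance(t, j0, j1, lam, eta):
    """Creep compliance J(t) of a Burgers model: instantaneous elastic (j0),
    retarded elastic (j1, retardation time lam) and viscous (t/eta) terms."""
    return j0 + j1 * (1.0 - math.exp(-t / lam)) + t / eta
```

At t = 0 only the instantaneous term j0 remains, while the zero-shear viscosity eta controls the long-time slope, the quantity on which the egg-white batter scored lowest.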


A hybrid neural network for input that is both categorical and quantitative

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 10 2004
Roelof K. Brouwer
The data on which an MLP (multilayer perceptron) is normally trained to approximate a continuous function may include inputs that are categorical in addition to the numeric or quantitative inputs. Examples of categorical variables are gender, race, and so on. The approach examined in this article is to train a hybrid network consisting of an MLP and an encoder with multiple output units, that is, a separate output unit for each combination of values of the categorical variables. Input to the feed-forward subnetwork of the hybrid network is then restricted to truly numerical quantities. An MLP, whose connection matrices multiply input values and whose sigmoid functions further transform them, represents a continuous mapping in all input variables; it therefore requires that every input be a numeric, continuously valued variable. A categorical variable, on the other hand, produces a discontinuous relationship between an input variable and the output. This problem is often dealt with by replacing the categorical values with numeric ones and treating them as if they were continuously valued. However, there is no meaningful correspondence between the continuous quantities generated this way and the original categorical values: such a replacement imposes a metric on the categories that may not be reasonable. This suggests that the categorical inputs should be segregated from the continuous inputs as explained above. Results show that the method utilizing a hybrid network and separating numerical from categorical input, as discussed here, is quite effective. © 2004 Wiley Periodicals, Inc. Int J Int Syst 19: 979–1001, 2004. [source]
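The segregation described above can be sketched as a forward pass in which only the numeric inputs drive the MLP, while the categorical combination indexes one of several output units. The layer sizes and weights below are illustrative:

```python
import numpy as np

def hybrid_forward(x_num, cat_index, W1, b1, W2, b2):
    """Forward pass of the hybrid net: the MLP sees only numeric inputs,
    while the categorical combination selects one of the output units."""
    h = np.tanh(W1 @ x_num + b1)   # hidden layer driven by numeric inputs only
    y = W2 @ h + b2                # one output unit per categorical combination
    return y[cat_index]

# toy sizes: 2 numeric inputs, 3 hidden units, 2 categorical combinations
W1 = np.array([[0.5, -0.2], [0.1, 0.3], [-0.4, 0.6]])
b1 = np.zeros(3)
W2 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
b2 = np.zeros(2)
```

No numeric code for the category ever enters the network, so no artificial metric between categories is imposed.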


A structure identification method of submodels for hierarchical fuzzy modeling using the multiple objective genetic algorithm

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 5 2002
Kanta Tachibana
Fuzzy models describe nonlinear input-output relationships with linguistic fuzzy rules. Hierarchical fuzzy modeling is promising for the identification of fuzzy models of target systems that have many input variables. In the identification, (1) determination of a hierarchical structure of submodels, (2) selection of input variables of each submodel, (3) division of input and output space, (4) tuning of membership functions, and (5) determination of the fuzzy inference method are carried out. This article presents a hierarchical fuzzy modeling method with an uneven division of the input space of each submodel. For selecting the input variables of submodels, the multiple objective genetic algorithm (MOGA) is utilized. MOGA finds multiple models with different input variables and different numbers of fuzzy rules as compromise solutions. A human designer can choose desirable ones from these candidates. The proposed method is applied to the acquisition of fuzzy rules from cyclists' pedaling data. Despite the small number of data, the obtained model was able to give detailed suggestions to each cyclist. © 2002 Wiley Periodicals, Inc. [source]
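The compromise solutions MOGA returns are the Pareto-nondominated models; with two objectives to minimise, say (model error, number of rules), the nondominated filter at the heart of such a selection can be sketched as:

```python
def nondominated(points):
    """Keep points not dominated by another point (minimising both
    objectives); assumes distinct objective vectors."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]
```

The designer then chooses one model from this front, trading accuracy against rule-base size.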


Designs and analyses of various fuzzy controllers with region-wise linear PID subcontrollers

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 12 2001
C. W. Tao
To reduce the complexity of PID-like fuzzy controllers with three inputs, designs and analyses of various fuzzy controllers with region-wise linear PID subcontrollers are presented in this paper. The proposed region-wise linear PID subcontrollers are composed of one-dimensional fuzzy mechanisms, and triangular membership functions are adopted for the input variables of the fuzzy controllers. All possible structures of fuzzy controllers with region-wise linear PID subcontrollers are discussed. According to the number of one-dimensional fuzzy mechanisms included in the structure, the fuzzy controllers are classified into three main categories. An algorithm is provided to construct effective fuzzy controllers with the lowest complexity among all possible structures. The properties of the various designs of fuzzy controllers with region-wise linear PID subcontrollers are also compared. Simulation results are included to demonstrate the performance of the three basic types of proposed fuzzy controllers on linear, nonlinear, and delayed plants. © 2001 John Wiley & Sons, Inc. [source]
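The triangular membership functions adopted for the input variables have the standard form: zero outside the support [a, c] and a peak of 1 at b:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b
    (a < b < c): rises linearly on [a, b], falls linearly on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
```

A one-dimensional fuzzy mechanism evaluates a small set of such functions over one input, which is what keeps the region-wise linear subcontrollers cheap compared with a full three-input rule base.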


An operational model predicting autumn bird migration intensities for flight safety

JOURNAL OF APPLIED ECOLOGY, Issue 4 2007
J. VAN BELLE
Summary 1. Forecasting migration intensity can improve flight safety and reduce the operational costs of collisions between aircraft and migrating birds. This is particularly true for military training flights, which can be rescheduled if necessary and often take place at low altitudes and during the night. Migration intensity depends strongly on weather conditions, but reported effects of weather differ among studies. It is therefore unclear to what extent existing predictive models can be extrapolated to new situations. 2. We used radar measurements of bird densities in the Netherlands to analyse the relationship between weather and nocturnal migration. Using our data, we tested the performance of three regression models that have been developed for other locations in Europe. We developed and validated new models for different combinations of years to test whether regression models can be used to predict migration intensity in independent years. Model performance was assessed by comparing model predictions against benchmark predictions based on the measured migration intensity of the previous night and predictions based on a 6-year average trend. We also investigated the effect of the size of the calibration data set on model robustness. 3. All models performed better than the benchmarks, but the mismatch between measurements and predictions was large for the existing models. Model performance was best for the newly developed regression models. The performance of all models was best at intermediate migration intensities. The performance of our models clearly increased with sample size, up to about 90 nocturnal migration measurements. Significant input variables included seasonal migration trend, wind profit, 24-h trend in barometric pressure and rain. 4. Synthesis and applications. Migration intensities can be forecast with a regression model based on meteorological data. This and other existing models are only valid locally and cannot be extrapolated to new locations. Model development for new locations requires data sets with representative inter- and intraseasonal variability so that cross-validation can be applied effectively. The Royal Netherlands Air Force currently uses the regression model developed in this study to predict migration intensities 3 days ahead. This improves the reliability of migration intensity warnings and allows rescheduling of training flights if needed. [source]
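The benchmark comparison in point 2 can be expressed as a skill score against persistence, with the previous night's measurement serving as the forecast. This is an illustrative metric, not necessarily the paper's exact performance measure:

```python
import numpy as np

def skill_vs_persistence(obs, pred):
    """Skill relative to a persistence benchmark: 1 is a perfect model,
    0 is no better than forecasting the previous night's intensity."""
    obs = np.asarray(obs, float)
    pred = np.asarray(pred, float)
    mse_model = np.mean((pred[1:] - obs[1:]) ** 2)    # model vs outcome
    mse_persist = np.mean((obs[:-1] - obs[1:]) ** 2)  # last night vs outcome
    return 1.0 - mse_model / mse_persist
```

A model that merely echoes the previous night scores 0, so any positive skill indicates genuine value added by the meteorological predictors.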


Model-based uncertainty in species range prediction

JOURNAL OF BIOGEOGRAPHY, Issue 10 2006
Richard G. Pearson
Abstract Aim: Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions, identify key reasons why model output may differ and discuss the implications that model uncertainty has for policy-guiding applications. Location: The Western Cape of South Africa. Methods: We applied nine of the most widely used modelling techniques to model potential distributions under current and predicted future climate for four species (including two subspecies) of Proteaceae. Each model was built using an identical set of five input variables and distribution data for 3996 sampled sites. We compare model predictions by testing agreement between observed and simulated distributions for the present day (using the area under the receiver operating characteristic curve (AUC) and kappa statistics) and by assessing consistency in predictions of range size changes under future climate (using cluster analysis). Results: Our analyses show significant differences between predictions from different models, with predicted changes in range size by 2030 differing in both magnitude and direction (e.g. from 92% loss to 322% gain). We explain differences with reference to two characteristics of the modelling techniques: data input requirements (presence/absence vs. presence-only approaches) and the assumptions made by each algorithm when extrapolating beyond the range of data used to build the model. The effects of these factors should be carefully considered when using this modelling approach to predict species ranges. Main conclusions: We highlight an important source of uncertainty in assessments of the impacts of climate change on biodiversity and emphasize that model predictions should be interpreted in policy-guiding applications along with a full appreciation of uncertainty. [source]
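The two agreement statistics used in the Methods can be hand-rolled for binary presence/absence data; the AUC sketch below uses the rank-sum identity and assumes untied scores:

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity;
    assumes no tied scores."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, float)
    ranks = np.empty(len(scores))
    ranks[scores.argsort()] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    po = np.mean(y_true == y_pred)
    p1, q1 = y_true.mean(), y_pred.mean()
    pe = p1 * q1 + (1 - p1) * (1 - q1)
    return (po - pe) / (1 - pe)
```

AUC evaluates the continuous suitability scores, while kappa evaluates the thresholded presence/absence map, which is why the two can rank models differently.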


Experimental and neural model analysis of styrene removal from polluted air in a biofilter

JOURNAL OF CHEMICAL TECHNOLOGY & BIOTECHNOLOGY, Issue 7 2009
Eldon R. Rene
Abstract BACKGROUND: Biofilters are efficient systems for treating malodorous emissions. The mechanism involved during pollutant transfer and subsequent biotransformation within a biofilm is a complex process. The use of artificial neural networks to model the performance of biofilters using easily measurable state variables appears to be an effective alternative to conventional phenomenological modelling. RESULTS: An artificial neural network model was used to predict the extent of styrene removal in a perlite biofilter inoculated with a mixed microbial culture. After a 43 day biofilter acclimation period, styrene removal experiments were carried out by subjecting the bioreactor to different flow rates (0.15–0.9 m³ h⁻¹) and concentrations (0.5–17.2 g m⁻³), corresponding to inlet loading rates of up to 1390 g m⁻³ h⁻¹. During the different phases of continuous biofilter operation, greater than 92% styrene removal was achievable for loading rates up to 250 g m⁻³ h⁻¹. A back propagation neural network algorithm was applied to model and predict the removal efficiency (%) of this process using inlet concentration (g m⁻³) and unit flow (h⁻¹) as input variables. The data points were divided into a training set (115 × 3) and a testing set (42 × 3). The most reliable network configuration was selected by trial and error and by estimating the determination coefficient (R²) value (0.98) achieved during prediction of the testing set. CONCLUSION: The results showed that a simple neural network based model with a 2-4-1 topology was able to efficiently predict the styrene removal performance of the biofilter. Through sensitivity analysis, the most influential input parameter affecting styrene removal was ascertained to be the flow rate. Copyright © 2009 Society of Chemical Industry [source]
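A 2-4-1 back-propagation network of the kind described can be sketched with plain NumPy; the initialisation, learning rate, sigmoid activations and squared-error loss below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)   # 2 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # 4 hidden units -> 1 output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, y, lr=0.5):
    """One back-propagation step on a single (x, y) pair, squared-error loss."""
    global W1, b1, W2, b2
    h = sigmoid(W1 @ x + b1)
    yhat = sigmoid(W2 @ h + b2)
    d_out = (yhat - y) * yhat * (1 - yhat)   # error signal at the output
    d_hid = (W2.T @ d_out) * h * (1 - h)     # backpropagated to the hidden layer
    W2 -= lr * np.outer(d_out, h); b2 -= lr * d_out
    W1 -= lr * np.outer(d_hid, x); b1 -= lr * d_hid
    return float(((yhat - y) ** 2).sum())
```

In the paper's setting the two inputs would be the scaled inlet concentration and unit flow, and the single output the removal efficiency; repeated steps drive the squared error down.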


Red cell exchange transfusion for babesiosis in Rhode Island

JOURNAL OF CLINICAL APHERESIS, Issue 3 2009
Joshua Spaete
Abstract We report four cases of clinically severe tick-borne babesiosis treated with chemotherapy and adjunctive red cell exchange (RCE) at two Rhode Island hospitals from 2004 to 2007. All RCE procedures were performed using a Cobe Spectra device and were well tolerated without complications. The volume of allogeneic red cells used in the exchange was determined using the algorithm in the apheresis device with the input variables of preprocedure hematocrit, weight, height, an assumed allogeneic red cell hematocrit of 55% and a desired postprocedure hematocrit of 27%. The preprocedure level of parasitemia varied between 2.4% and 24%, and the postprocedure level between 0.4% and 5.5%, with an average overall reduction in parasitemia of 74%. Retrospectively, application of a new formula to calculate red cell mass appeared to correlate better with the percent reduction in parasitemia. Previous reports of RCE in babesiosis were reviewed; the reported reductions in parasitemia varied from 50% to >90%. Although a preprocedure level of parasitemia of 10% is sometimes used as a threshold for RCE in clinically severe babesiosis, this threshold does not have a firm empirical basis. Neither a desired postprocedure level of parasitemia nor the mass of allogeneic red cells needed to achieve such a level is indicated. We conclude that current estimates of the dose of allogeneic red cells used in RCE are probably inaccurate, advocate a new formula to estimate this dose and suggest that a 90% reduction in parasitemia should be the minimally desired target of RCE in babesiosis. J. Clin. Apheresis, 2009. © 2009 Wiley-Liss, Inc. [source]
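For a sense of scale, the listed inputs can be combined with Nadler's blood-volume formula into a rough dose estimate under a simple assumption: the allogeneic volume needed is the patient's circulating red-cell volume at the target hematocrit. This sketch is neither the apheresis device's algorithm nor the new formula the authors advocate:

```python
def nadler_tbv_l(height_m, weight_kg, male=True):
    """Total blood volume (litres) by Nadler's formula."""
    if male:
        return 0.3669 * height_m ** 3 + 0.03219 * weight_kg + 0.6041
    return 0.3561 * height_m ** 3 + 0.03308 * weight_kg + 0.1833

def exchange_volume_l(height_m, weight_kg, hct_post=27.0, hct_unit=55.0, male=True):
    """Rough allogeneic red-cell volume (litres) needed to supply the patient's
    red-cell volume at the target hematocrit, given units at hct_unit.
    (The device algorithm additionally uses the preprocedure hematocrit.)"""
    tbv = nadler_tbv_l(height_m, weight_kg, male)
    return tbv * (hct_post / 100.0) / (hct_unit / 100.0)
```

For an 80 kg, 1.80 m male this gives roughly 2.6 L of allogeneic red cells at 55% hematocrit to reach a 27% postprocedure hematocrit.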