Model Construction

Selected Abstracts


Financial decision support using neural networks and support vector machines

EXPERT SYSTEMS, Issue 4 2008
Chih-Fong Tsai
Abstract: Bankruptcy prediction and credit scoring are two important problems in financial decision support. The multilayer perceptron (MLP) network has shown its applicability to these problems, and its performance is usually superior to that of traditional statistical models. Support vector machines (SVMs) are a core machine learning technique and have been compared against the MLP as a benchmark. However, the performance of SVMs is not fully understood in the literature because too few data sets have been considered and different kernel functions have been used to train the SVMs. In this paper, four public data sets are used. In particular, three different ratios of training to testing data in each of the four data sets are considered (i.e. 3:7, 1:1 and 7:3) in order to examine and fully understand the performance of SVMs. For SVM model construction, the linear, radial basis function and polynomial kernel functions are used. Using the MLP as the benchmark, the SVM classifier performs better in only one of the four data sets. Moreover, the prediction results of the MLP and SVM classifiers do not differ significantly across the three training/testing ratios. [source]
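The comparison protocol described above can be sketched as follows. This is an illustrative scikit-learn sketch on synthetic data, not the paper's four public data sets; the hidden-layer size and polynomial degree are assumptions.

```python
# Sketch of the MLP-vs-SVM comparison: three kernels, three train/test ratios.
# Synthetic data stands in for the bankruptcy/credit-scoring data sets.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, random_state=0)

results = {}
for train_frac in (0.3, 0.5, 0.7):          # the 3:7, 1:1 and 7:3 splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_frac, random_state=0, stratify=y)
    models = {
        "MLP": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                             random_state=0),
        "SVM-linear": SVC(kernel="linear"),
        "SVM-rbf": SVC(kernel="rbf"),
        "SVM-poly": SVC(kernel="poly", degree=3),
    }
    for name, clf in models.items():
        pipe = make_pipeline(StandardScaler(), clf)  # SVMs need scaled inputs
        pipe.fit(X_tr, y_tr)
        results[(train_frac, name)] = pipe.score(X_te, y_te)

for key, acc in sorted(results.items()):
    print(key, round(acc, 3))
```

Scaling inside a pipeline keeps the comparison fair: kernel machines are sensitive to feature magnitudes, while the accuracy table mirrors the paper's per-split, per-kernel layout.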


Forecasting migration of cereal aphids (Hemiptera: Aphididae) in autumn and spring

JOURNAL OF APPLIED ENTOMOLOGY, Issue 5 2009
A. M. Klueken
Abstract The migration of cereal aphids and the time of their arrival on winter cereal crops in autumn and spring are of particular importance for plant disease (e.g. barley yellow dwarf virus infection) and related yield losses. In order to identify days with migration potential in autumn and spring, suction trap data from 29 and 45 case studies (locations and years), respectively, were set against meteorological parameters, focusing on the early immigration periods in autumn (22 September to 1 November) and spring (1 May to 9 June). The number of cereal aphids caught in a suction trap increased with increasing temperature, global radiation and duration of sunshine, and decreased with increasing precipitation, relative humidity and wind speed. According to linear regression analyses, temperature, global radiation and wind speed were most frequently and significantly associated with migration, suggesting that they have a major impact on flight activity. For subsequent model development, suction trap catches from different case studies were pooled and binarily classified as days with or without migration, as defined by a threshold number of migrating cereal aphids. Linear discriminant analyses of several predictor variables (assessed during the daylight hours of a given day) were then performed on the binary response variables. Three models were used to predict days with suction trap catches of ≥1, ≥4 or ≥10 migrating cereal aphids in autumn. Owing to the predominance of Rhopalosiphum padi individuals (99.3% of the total cereal aphid catch), no distinction between species (R. padi and Sitobion avenae) was made in autumn. As suction trap catches were lower and species dominance changed in spring, three further models were developed for all cereal aphid species combined, for R. padi only, and for Metopolophium dirhodum and S. avenae combined in spring. 
The empirical, cross-classification and receiver operating characteristic analyses performed for model validation showed different levels of prediction accuracy. Additional datasets selected at random before model construction and parameterization showed that predictions by the six migration models were 33–81% correct. The models are useful for determining when to start field evaluations. Furthermore, they provide information on the size of the migrating aphid population and, thus, on the importance of immigration for early aphid population development in cereal crops in a given season. [source]
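The thresholded discriminant-analysis step can be sketched as below. This is a minimal illustration on synthetic weather data: the predictor names follow the abstract, but the coefficients, noise model and catch-generating process are invented, not the authors' fitted models.

```python
# Sketch of the migration-day classification: a binary response (catch above
# a threshold or not) modeled from daily weather predictors with LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 300
temperature = rng.normal(12, 4, n)        # deg C
radiation = rng.normal(300, 80, n)        # global radiation, W/m^2
wind_speed = rng.gamma(2.0, 1.5, n)       # m/s

# hypothetical latent flight activity: rises with temperature and radiation,
# falls with wind speed, as the regression analyses in the abstract suggest
activity = 0.3 * temperature + 0.01 * radiation - 0.8 * wind_speed
rate = np.exp((activity - activity.mean()) / activity.std())
aphid_catch = rng.poisson(rate)           # synthetic daily suction trap catch

X = np.column_stack([temperature, radiation, wind_speed])
aucs = {}
for threshold in (1, 4, 10):              # the >=1, >=4 and >=10 catch models
    y = (aphid_catch >= threshold).astype(int)
    if y.min() == y.max():
        continue                          # degenerate split, skip threshold
    lda = LinearDiscriminantAnalysis().fit(X, y)
    aucs[threshold] = roc_auc_score(y, lda.decision_function(X))
    print(f"catch >= {threshold}: in-sample AUC = {aucs[threshold]:.2f}")
```

One discriminant model per catch threshold mirrors the paper's three autumn models; a real analysis would of course validate on held-out case studies rather than in-sample.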


Who benefits from learning with 3D models? The case of spatial ability

JOURNAL OF COMPUTER ASSISTED LEARNING, Issue 6 2006
T. Huk
Abstract Empirical studies that focus on the impact of three-dimensional (3D) visualizations on learning are to date rare and inconsistent. According to the ability-as-enhancer hypothesis, high spatial ability learners should benefit particularly as they have enough cognitive capacity left for mental model construction. In contrast, the ability-as-compensator hypothesis proposes that low spatial ability learners should gain particular benefit from explicit graphical representations as they have difficulty mentally constructing their own visualizations. This study examines the impact that interactive 3D models implemented within a hypermedia-learning environment have on understanding of cell biology. Test scores in a subsequent knowledge acquisition test demonstrated a significant interaction term between students' spatial ability and presence/absence of 3D models. Only students with high spatial ability benefited from the presence of 3D models, while low spatial ability students got fewer points when learning this way. When using 3D models, high spatial ability students perceived their cognitive load to be low whereas the opposite was true for low spatial ability students. The data suggest that students with low spatial ability became cognitively overloaded by the presence of 3D models, while high spatial ability students benefited from them as their total cognitive load remained within working memory limits. [source]


A centroid-based sampling strategy for kriging global modeling and optimization

AICHE JOURNAL, Issue 1 2010
Eddie Davis
Abstract A new sampling strategy is presented for kriging-based global modeling. The strategy is used within a kriging/response surface methodology (RSM) algorithm for solving nonlinear programming (NLP) problems containing black-box models. Black-box models describe systems lacking the closed-form equations necessary for conventional gradient-based optimization. System optima can instead be found by building iteratively updated kriging models and then refining local solutions using RSM. The new sampling strategy yields accurate global models at lower sampling expense than a strategy using randomized and heuristic-based sampling for initial and subsequent model construction, respectively. The new strategy relies on constructing an initial kriging model from sampling data obtained at the vertices and centroid of the feasible region's convex polytope. Updated models are constructed using additional sampling information obtained at Delaunay triangulation centroids. The new sampling algorithm is applied within the kriging-RSM framework to several numerical examples and case studies as a proof of concept. © 2009 American Institute of Chemical Engineers AIChE J, 2010 [source]
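The geometric core of the strategy can be sketched in a few lines. This is an illustrative 2-D example on a unit-square feasible region, not the authors' implementation: initial samples at the polytope vertices and centroid, refinement points at Delaunay simplex centroids.

```python
# Sketch of centroid-based sampling for a box-constrained 2-D feasible region:
# initial sites at the convex polytope's vertices and centroid, then new
# sample points at the centroids of the Delaunay triangulation's simplices.
import numpy as np
from scipy.spatial import Delaunay

# feasible region: the unit square, given by its convex-polytope vertices
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
centroid = vertices.mean(axis=0)
initial_samples = np.vstack([vertices, centroid])   # 5 initial kriging sites

# refinement: triangulate the current sites and sample each simplex centroid
tri = Delaunay(initial_samples)
new_samples = initial_samples[tri.simplices].mean(axis=1)

print(len(initial_samples), "initial sites,", len(new_samples),
      "refinement points")
```

In the full algorithm these centroids would be evaluated on the black-box model, appended to the site set, and the kriging surrogate refit; the triangulation step then repeats on the enlarged set.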


Qualitative in situ analysis of multiple solid-state forms using spectroscopy and partial least squares discriminant modeling

JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 7 2007
Karin Kogermann
Abstract This study used in situ spectroscopy to reveal the multiple solid-state forms that appear during isothermal dehydration. Hydrate forms of piroxicam and carbamazepine (CBZ) were investigated on a hot stage at different temperatures using near-infrared (NIR) and Raman spectroscopy combined with multivariate modeling. Variable-temperature X-ray powder diffraction, differential scanning calorimetry, thermogravimetric analysis, and Karl Fischer titrimetry were used as reference methods. Partial least squares discriminant analysis (PLS-DA) was performed to qualitatively evaluate the phase transitions. The constructed PLS-DA models, in which spectral differences were directly correlated with solid-state modifications, enabled differentiation between the multiple forms. Qualitative analysis revealed that during dehydration, hydrates such as CBZ dihydrate may pass through several solid-state forms, which must be considered in quantitative model construction. This study demonstrates that in situ analysis can be used to monitor dehydration and reveal the associated solid-state forms prior to quantification. The utility of the complementary spectroscopic techniques, NIR and Raman, has been shown. © 2007 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 96: 1802–1820, 2007 [source]


pH measurement and a rational and practical pH control strategy for high-throughput cell culture systems

BIOTECHNOLOGY PROGRESS, Issue 3 2010
Haiying Zhou
Abstract The number of therapeutic proteins produced by cell culture in the pharmaceutical industry continues to increase. During the early stages of manufacturing process development, hundreds of clones and various cell culture conditions are evaluated to develop a robust process and to identify and select cell lines with high productivity. It is therefore highly desirable to establish a high-throughput system to accelerate process development and reduce cost. Multiwell plates and shake flasks are widely used in the industry as scale-down models for large-scale bioreactors. However, one limitation of these two systems is the inability to measure and control pH in a high-throughput manner. As pH is an important process parameter for cell culture, this could limit the applications of these scale-down vessels. An economical, rapid, and robust pH measurement method was developed at Eli Lilly and Company employing SNARF-4F 5-(and-6)-carboxylic acid. The method demonstrated the ability to measure the pH of cell culture samples in a high-throughput manner. Based on the chemical equilibrium of CO2, HCO3−, and the buffer system (i.e., HEPES), we established a mathematical model to regulate pH in multiwell plates and shake flasks. The model calculates the %CO2 required from the incubator and the amount of sodium bicarbonate to be added to adjust pH to a preset value. The model was validated by experimental data, and pH was accurately regulated by this method. The feasibility of studying pH effects on cell culture in 96-well plates and shake flasks was also demonstrated. This work sheds light on the construction of mini-bioreactor scale-down models and paves the way for cell culture process development to improve productivity or product quality using high-throughput systems. © 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2010 [source]
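The CO2/bicarbonate part of such a model reduces to the Henderson-Hasselbalch relation, pH = pKa + log10([HCO3−] / (s · pCO2)). A minimal sketch, ignoring the HEPES term the authors also model and using textbook constants (pKa ≈ 6.1, CO2 solubility s ≈ 0.0308 mmol/L/mmHg at 37 °C), rather than the paper's fitted parameters:

```python
# Henderson-Hasselbalch sketch of the bicarbonate/CO2 pH relation:
# given a target pH and incubator %CO2, solve for the required [HCO3-].
import math

PKA = 6.1        # apparent pKa of carbonic acid at 37 deg C (textbook value)
S_CO2 = 0.0308   # CO2 solubility, mmol/L per mmHg (textbook value)
P_ATM = 760.0    # atmospheric pressure, mmHg

def bicarbonate_for_ph(target_ph, pct_co2):
    """[HCO3-] (mmol/L) such that pH = pKa + log10([HCO3-] / (s * pCO2))."""
    p_co2 = pct_co2 / 100.0 * P_ATM
    return S_CO2 * p_co2 * 10.0 ** (target_ph - PKA)

def ph_from(hco3, pct_co2):
    """Invert the same relation to predict pH from [HCO3-] and %CO2."""
    p_co2 = pct_co2 / 100.0 * P_ATM
    return PKA + math.log10(hco3 / (S_CO2 * p_co2))

hco3 = bicarbonate_for_ph(7.0, 5.0)       # 5% CO2 incubator, target pH 7.0
print(f"required [HCO3-] = {hco3:.1f} mmol/L")
print(f"check: pH = {ph_from(hco3, 5.0):.2f}")
```

Solving the relation in either direction is exactly what lets the model prescribe both the incubator %CO2 and the sodium bicarbonate addition needed to hit a preset pH.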


A new traffic model for backbone networks and its application to performance analysis

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 6 2008
Ming Yu
Abstract In this paper, we present a new traffic model constructed from a random number of shifting level processes (SLPs) aggregated over time, in which the lengths of the SLPs' active periods follow a Pareto or truncated Pareto distribution. In both cases, the model is proved to be asymptotically second-order self-similar. However, based on extensive traffic data collected from a backbone network, we find that the active periods of the constituent SLPs are better approximated by a truncated Pareto distribution than by the plain Pareto distribution assumed in existing traffic model constructions. The queueing problem of a single server fed with traffic described by the model is equivalently converted to one with traffic described by Norros' model. For the tail probability of the queue length distribution, an approximate expression and an upper bound are derived in terms of large deviation estimates; these are mathematically more tractable than existing results. The effectiveness of the traffic model and the performance results are demonstrated by simulations and experimental studies on a backbone network. Copyright © 2007 John Wiley & Sons, Ltd. [source]
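Drawing truncated Pareto active periods, as the model's construction requires, is straightforward by inverse-CDF sampling. The sketch below uses illustrative parameters (shape α, bounds L and H) that are not fitted to any trace; the truncation is what keeps all moments finite, unlike a plain Pareto with α < 2.

```python
# Sketch of sampling SLP active-period lengths from a truncated Pareto
# distribution on [low, high] with shape alpha, via the inverse CDF:
#   F(x) = (1 - (low/x)**alpha) / (1 - (low/high)**alpha)
import numpy as np

def truncated_pareto(alpha, low, high, size, rng):
    """Inverse-CDF sampling from the truncated Pareto on [low, high]."""
    u = rng.uniform(size=size)
    c = 1.0 - (low / high) ** alpha
    return low / (1.0 - u * c) ** (1.0 / alpha)

rng = np.random.default_rng(42)
periods = truncated_pareto(alpha=1.2, low=1.0, high=1e4,
                           size=100_000, rng=rng)

# heavy-tailed but bounded: large active periods occur, yet never exceed high
print(f"mean {periods.mean():.1f}, max {periods.max():.1f} (bound 1e4)")
```

Aggregating many on/off sources whose active periods are drawn this way is the standard route to (asymptotically) self-similar traffic; the hard upper bound is what distinguishes the fitted model here from the plain-Pareto constructions it replaces.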


The Authentic Consent Model: contractarianism, Creditors' Bargain, and corporate liquidation

LEGAL STUDIES, Issue 3 2001
Rizwaan Jameel Mokal
The first part of this paper asks if the Creditors' Bargain Model, long employed by insolvency scholars as the starting point for many an analysis, can explain or justify even the most distinctive and fundamental feature of insolvency law. After examining the defining features of the model's construction, the role of self-interest and consent in it, and its ex ante position, it is concluded that the Bargain model can neither explain nor legitimate the coercive collective liquidation regime. The second part of the paper develops an alternative model to analyse and justify insolvency law. The starting premise is that all (but only) those affected by issues peculiarly governed by insolvency law are to be given a choice in selecting the principles which would determine their rights and obligations. Once these parties have been identified, they are to be given equal weight in the selection process, since their legal status (whether they are employees, secured or unsecured creditors, etc), wealth, cognitive abilities, and bargaining strength are all morally irrelevant in framing rules of justice. This part of the paper introduces the notion of a constructive attribute, characteristics this society accepts its citizens should have in their role as legislators. So all parties affected by insolvency issues are regarded as free, equal, and reasonable. The model sketched out in this part of the article requires all principles to be selected from its choice position. Here, all the parties are deprived of any knowledge of personal attributes, and must reason rationally. It is shown that parties in the choice position would in fact choose the principles laying down the automatic stay on unsecured claims. The paper concludes with the demonstration that because of the construction of the choice position and the constructive attributes of the parties bargaining in it, the principles chosen are fair and just, and chosen in exercise of the parties' autonomy. As it happens, they are also efficient. [source]