Selected Abstracts


While most economists expect some marginal conditions to result from basic expected value models involving government expenditures and homeland security investments, such models are not readily found in the literature. The article presents six basic models that all incorporate uncertainty; they also capture various problems involving technological limits, behavioral interactions, false negatives and false positives, and decision making with uncertainty and irreversibility. Recent reviews of homeland security programs by the U.S. Government Accountability Office are used to illustrate the relevance of the models. (JEL H100) [source]

Reduced Models of Impurity Seeded Edge Plasmas

D. Kh.
Abstract The reduced descriptions of the distribution of impurities over ionization states, radiation losses and plasma dynamics are reviewed. Approximations based on the two or three most important ionization states of light impurities, and continuous descriptions of heavy impurities, are discussed. Reduced descriptions of atomic processes such as ionization, photo- and dielectronic recombination rates, as well as of radiative abilities, are proposed. As is shown, thermal forces, finite relaxation times of impurity distributions over ionization states, charge-exchange and opacity effects must be taken into account in reduced models, especially for ITER problems. Linear and nonlinear stages of the radiation-condensation mode, as well as some aspects of disruptions and noble gas injection into tokamak plasmas, are analyzed with the reduced models. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

New Developments of Self-emitting Electrostatic Probes for use in High Temperature Plasmas

M. A. Fink
Abstract Emissive electrostatic probes for use in fusion experiments must be able to sustain significantly higher thermal loads than in low-temperature plasma experiments. Several types of probe design are discussed; the results from the use of such probes in the edge plasma of the Wendelstein 7-AS stellarator are presented and compared with the predictions of emissive and non-emissive probe models. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Adiabatic bond charge model for lattice dynamics of ternary chalcopyrite semiconductors

T. Gürel
Abstract The adiabatic bond charge model of Rustagi and Weber is extended to study the lattice dynamical properties of the ternary chalcopyrite semiconductors AgGaS2, AgGaSe2, CuInS2, CuInSe2, CuGaS2, CuGaSe2, CuAlS2 and CuAlSe2. The new model calculations agree well with the results of Raman/IR and neutron measurements of Brillouin zone center phonon frequencies for both low- and high-frequency modes, which has been difficult to achieve with other phenomenological lattice dynamical models. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Do changes in climate patterns in wintering areas affect the timing of the spring arrival of trans-Saharan migrant birds?

Oscar Gordo
Abstract The life cycles of plants and animals are changing around the world in line with predictions originating from hypotheses concerning the impact of global warming and climate change on biological systems. Commonly, the search for ecological mechanisms behind the observed changes in bird phenology has focused on the analysis of climatic patterns from the species' breeding grounds. However, the ecology of bird migration suggests that the spring arrival of long-distance migrants (such as trans-Saharan birds) is more likely to be influenced by climate conditions in wintering areas, given their direct impact on the onset of migration and its progression. We tested this hypothesis by analysing the first arrival dates (FADs) of six trans-Saharan migrants (cuckoo Cuculus canorus, swift Apus apus, hoopoe Upupa epops, swallow Hirundo rustica, house martin Delichon urbica and nightingale Luscinia megarhynchos) in a western Mediterranean area from 1952 to 2003. By means of multiple regression analyses, FADs were analysed in relation to the monthly temperature and precipitation patterns of five African climatic regions south of the Sahara where the species are thought to overwinter, and of the European site from which FADs were collected. We obtained significant models for five species, explaining 9–41% of the variation in FADs. The interpretation of the models suggests that: (1) The climate in wintering quarters, especially the precipitation, has a stronger influence on FADs than that in the species' potential European breeding grounds. (2) The cumulative effects of climate patterns prior to migration onset may be of considerable importance, since climate variables summarizing conditions up to 12 months before the onset of migration were selected by the final models.
(3) Temperature and precipitation in African regions are likely to affect departure decisions in the species studied through their indirect effects on food availability and the build-up of reserves for migration. Our results concerning the factors that affect the arrival times of trans-Saharan migrants indicate that the effects of climate change are more complex than previously suggested, and that these effects might have an interacting impact on species ecology, for example by reversing ecological pressures during species' life cycles. [source]

Head and neck squamous cell carcinoma cell lines: Established models and rationale for selection

Charles J. Lin BA
Abstract Background. Head and neck squamous cell carcinoma (HNSCC) cell lines are important preclinical models in the search for novel and targeted therapies to treat head and neck cancer. Unlike many other cancer types, a wide variety of primary and metastatic HNSCC cell lines are available. An easily accessible guide that organizes important characteristics of HNSCC cell lines would be valuable for the selection of appropriate HNSCC cell lines for in vitro or in vivo studies. Methods. A literature search was performed. Results. Cell growth and culture parameters from HNSCC cell lines were catalogued into tables or lists of selected characteristics. Methods for establishing cancer cell lines and basic cell culture maintenance techniques were reviewed. Conclusions. A compendium of HNSCC cell line characteristics is useful for organizing the accumulating information regarding cell line characteristics to assist investigators with the development of appropriate preclinical models. © 2006 Wiley Periodicals, Inc. Head Neck, 2006 [source]

Experimental measurements and kinetic modeling of CO/H2/O2/NOx conversion at high pressure

Christian Lund Rasmussen
This paper presents results from lean CO/H2/O2/NOx oxidation experiments conducted at 20–100 bar and 600–900 K. The experiments were carried out in a new high-pressure laminar flow reactor designed to conduct well-defined experimental investigations of homogeneous gas phase chemistry at pressures and temperatures up to 100 bar and 925 K. The results have been interpreted in terms of an updated detailed chemical kinetic model, designed to operate also at high pressures. The model, describing H2/O2, CO/CO2, and NOx chemistry, is developed from a critical review of data for individual elementary reactions, with supplementary rate constants determined from ab initio CBS-QB3 calculations. New or updated rate constants are proposed for important reactions, including OH + HO2 → H2O + O2, CO + OH → [HOCO] → CO2 + H, HOCO + OH → CO + H2O2, NO2 + H2 → HNO2 + H, NO2 + HO2 → HONO/HNO2 + O2, and HNO2(+M) → HONO(+M). Further validation of the model performance is obtained through comparisons with flow reactor experiments from the literature on the chemical systems H2/O2, H2/O2/NO2, and CO/H2O/O2 at 780–1100 K and 1–10 bar. Moreover, introduction of the reaction CO + H2O2 → HOCO + OH into the model yields an improved prediction of, but no final resolution to, the recently debated syngas ignition delay problem compared to previous kinetic models. © 2008 Wiley Periodicals, Inc. Int J Chem Kinet 40: 454–480, 2008 [source]
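Elementary rate constants in detailed kinetic models of this kind are conventionally written in modified Arrhenius form, k(T) = A·T^n·exp(−Ea/RT). A minimal sketch of evaluating such a rate constant over the experimental temperature range; the parameter values below are hypothetical, chosen for illustration only and not taken from the paper:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius(A, n, Ea, T):
    """Modified Arrhenius rate constant k(T) = A * T**n * exp(-Ea / (R*T))."""
    return A * T**n * math.exp(-Ea / (R * T))

# Hypothetical parameters for illustration only (not from the paper):
A, n, Ea = 1.0e8, 1.5, 40_000.0  # pre-exponential, T exponent, activation energy (J/mol)
k_600 = arrhenius(A, n, Ea, 600.0)  # low end of the experimental range
k_900 = arrhenius(A, n, Ea, 900.0)  # high end
print(k_600 < k_900)  # → True: for Ea > 0 the rate grows with temperature
```

In a full model such expressions are evaluated for every elementary reaction at each integration step, which is why careful review of the individual rate parameters matters so much.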

Model-based shape from shading for microelectronics applications

A. Nissenboim
Abstract Model-based shape from shading (SFS) is a promising paradigm introduced by Atick et al. [Neural Comput 8 (1996), 1321–1340] for solving inverse problems when we happen to have substantial prior information on the depth profiles to be recovered. In the present work we adopt this approach to address the problem of recovering wafer profiles from images taken with a scanning electron microscope (SEM). This problem arises naturally in the microelectronics inspection industry. A low-dimensional model, based on our prior knowledge of the types of depth profiles of wafer surfaces, has been developed; based on it, the SFS problem becomes one of optimal parameter estimation. Wavelet techniques were then employed to calculate a good initial guess to be used in a minimization process that yields the desired profile parametrization. A Levenberg–Marquardt (LM) optimization procedure has been adopted to address the ill-posedness of the SFS problem and to ensure stable numerical convergence. The proposed algorithm has been tested on synthetic images, using both Lambertian and SEM imaging models. © 2006 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 16, 65–76, 2006 [source]
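Levenberg–Marquardt interpolates between Gauss–Newton and gradient descent by adding a damping term to the normal equations, relaxing the damping when a step succeeds and tightening it when a step fails. A minimal single-parameter sketch (fitting y = exp(a·x) to synthetic data); this is a generic illustration of the technique, not the paper's wafer-profile estimator:

```python
import math

def lm_fit(xs, ys, a0=0.0, lam=1e-3, iters=50):
    """Minimal scalar Levenberg-Marquardt fit of the model y = exp(a*x)."""
    def cost(a):
        return sum((math.exp(a * x) - y) ** 2 for x, y in zip(xs, ys))
    a = a0
    for _ in range(iters):
        r = [math.exp(a * x) - y for x, y in zip(xs, ys)]  # residuals
        J = [x * math.exp(a * x) for x in xs]              # Jacobian d(model)/da
        g = sum(j * ri for j, ri in zip(J, r))             # gradient J^T r
        h = sum(j * j for j in J)                          # Gauss-Newton Hessian J^T J
        step = g / (h + lam)                               # damped normal-equation step
        if cost(a - step) < cost(a):
            a -= step
            lam *= 0.5    # success: behave more like Gauss-Newton
        else:
            lam *= 10.0   # failure: behave more like gradient descent
    return a

xs = [0.0, 0.5, 1.0, 1.5]
ys = [math.exp(0.7 * x) for x in xs]  # synthetic data with true a = 0.7
a_hat = lm_fit(xs, ys)
print(round(a_hat, 3))  # ≈ 0.7
```

The damping makes the iteration robust to a poor initial guess, which is exactly why a good wavelet-based starting point plus LM yields stable convergence in ill-posed settings.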

Toward better scoring metrics for pseudo-independent models

Y. Xiang
Learning belief networks from data is NP-hard in general. A common method used in heuristic learning is the single-link lookahead search. When the problem domain is pseudo-independent (PI), the method cannot discover the underlying probabilistic model. In learning these models, parameterization of PI models is necessary in order to explicitly trade off model accuracy against model complexity. Understanding of PI models also provides a new dimension of trade-off in learning even when the underlying model may not be PI. In this work, we adopt a hypercube perspective to analyze PI models and derive an improved result for computing the maximum number of parameters needed to specify a full PI model. We also present results on the parameterization of a subclass of partial PI models. © 2004 Wiley Periodicals, Inc. Int J Int Syst 19: 749–768, 2004. [source]

How practitioners can systematically use empirical evidence in treatment selection

Larry E. Beutler
Contemporary concerns with "empirically supported treatments" emphasize the differences in outcomes that are associated with reliably delivered treatments, representing different models and theories. This approach often fails to address the fact that there is no consensus among scientists about whether there are enough differences between and among treatments to make this effort productive. There is a considerable body of data that suggests that all treatments produce very similar effects. This article reviews these viewpoints and presents a third position, suggesting that identifying common and differential principles of change may be more productive than focusing on the relative value of different theoretical models. © 2002 Wiley Periodicals, Inc. J Clin Psychol 58: 1199–1212, 2002. [source]

Is EMDR an exposure therapy?

A review of trauma protocols
This article presents the well-established theoretical base and clinical practice of exposure therapy for trauma. Necessary requirements for positive treatment results and contraindicated procedures are reviewed. EMDR is contrasted with these requirements and procedures. Judged by the definitions and clinical practice of exposure therapy, the classification of EMDR as an exposure therapy poses some problems. As seen from the exposure therapy paradigm, its lack of physiological habituation and its use of spontaneous association should result in negligible or negative effects rather than the well-researched positive outcomes. Possible reasons for the effectiveness of EMDR are discussed, ranging from the fundamental nature of trauma reactions to the nonexposure mechanisms utilized in information processing models. © 2002 John Wiley & Sons, Inc. J Clin Psychol 58: 43–59, 2002. [source]

A new force field for simulating phosphatidylcholine bilayers

David Poger
Abstract A new force field for the simulation of dipalmitoylphosphatidylcholine (DPPC) in the liquid-crystalline, fluid phase at zero surface tension is presented. The structure of the bilayer with the area per lipid (0.629 nm2; experiment 0.629–0.64 nm2), the volume per lipid (1.226 nm3; experiment 1.229–1.232 nm3), and the ordering of the palmitoyl chains (order parameters) are all in very good agreement with experiment. Experimental electron density profiles are well reproduced, in particular with regard to the penetration of water into the bilayer. The force field was further validated by simulating the spontaneous assembly of DPPC into a bilayer in water. Notably, the timescale on which membrane sealing was observed using this model appears closer to the timescales for membrane resealing suggested by electroporation experiments than previous simulations using existing models. © 2009 Wiley Periodicals, Inc. J Comput Chem, 2010 [source]
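One of the quoted observables, the area per lipid, follows directly from the simulation box: the membrane cross-section divided by the number of lipids per leaflet. A sketch with a hypothetical 128-lipid box whose dimensions are chosen so the result matches the reported 0.629 nm² (the box size is an illustrative assumption, not a value from the paper):

```python
def area_per_lipid(lx_nm, ly_nm, n_lipids):
    """Area per lipid of a symmetric bilayer: box cross-section (nm^2)
    divided by the number of lipids per leaflet (n_lipids / 2)."""
    return (lx_nm * ly_nm) / (n_lipids / 2)

# Hypothetical 128-lipid DPPC box, dimensions chosen to reproduce 0.629 nm^2:
a = area_per_lipid(6.344, 6.344, 128)
print(round(a, 3))  # → 0.629
```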

Prediction of octanol–water partition coefficients of organic compounds by multiple linear regression, partial least squares, and artificial neural network

Hassan Golmohammadi
Abstract A quantitative structure–property relationship (QSPR) study was performed to develop models that relate the structures of 141 organic compounds to their octanol–water partition coefficients (log Po/w). A genetic algorithm was applied as a variable selection tool. Modeling of log Po/w of these compounds as a function of theoretically derived descriptors was established by multiple linear regression (MLR), partial least squares (PLS), and artificial neural network (ANN). The best selected descriptors that appear in the models are: atomic charge weighted partial positively charged surface area (PPSA-3), fractional atomic charge weighted partial positive surface area (FPSA-3), minimum atomic partial charge (Qmin), molecular volume (MV), total dipole moment of the molecule (μ), maximum antibonding contribution of a molecular orbital in the molecule (MAC), and maximum free valency of a C atom in the molecule (MFV). The results obtained showed the ability of the developed artificial neural network to predict the partition coefficients of organic compounds. The results also revealed the superiority of the ANN over the MLR and PLS models. © 2009 Wiley Periodicals, Inc. J Comput Chem, 2009 [source]
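The MLR step of a QSPR workflow amounts to solving the least-squares normal equations for the descriptor coefficients. A two-descriptor sketch using Cramer's rule on toy data; the descriptor values and targets below are invented for illustration (a real model would use the PPSA-3, FPSA-3, etc. descriptors named above):

```python
def mlr_fit(X, y):
    """Least-squares fit of y ≈ b1*x1 + b2*x2 by solving the 2x2
    normal equations (X^T X) b = X^T y with Cramer's rule."""
    s11 = sum(x[0] * x[0] for x in X)
    s12 = sum(x[0] * x[1] for x in X)
    s22 = sum(x[1] * x[1] for x in X)
    t1 = sum(x[0] * yi for x, yi in zip(X, y))
    t2 = sum(x[1] * yi for x, yi in zip(X, y))
    det = s11 * s22 - s12 * s12
    return (t1 * s22 - t2 * s12) / det, (s11 * t2 - s12 * t1) / det

# Toy descriptor matrix (hypothetical values) and log P-like targets:
X = [(1.0, 2.0), (2.0, 1.0), (3.0, 3.0), (4.0, 1.5)]
y = [0.5 * a + 0.25 * b for a, b in X]  # exactly linear, so the fit recovers the weights
b1, b2 = mlr_fit(X, y)
print(round(b1, 6), round(b2, 6))  # → 0.5 0.25
```

PLS and ANN replace this closed-form step with latent-variable projection and nonlinear function approximation, respectively, which is where the reported accuracy gains come from.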

Interesting properties of Thomas–Fermi kinetic and Parr electron–electron-repulsion DFT energy functional generated compact one-electron density approximation for ground-state electronic energy of molecular systems

Sandor Kristyan
Abstract The reduction of the electronic Schrödinger equation, or its calculating algorithm, from 4N dimensions to a (nonlinear, approximate) density functional of the three-spatial-dimension one-electron density for an N-electron system, which is tractable in practice, is a long-desired goal in electronic structure calculation. If the Thomas–Fermi kinetic energy (∫ρ^(5/3) dr1) and Parr electron–electron repulsion energy (∫ρ^(4/3) dr1) main-term functionals are accepted, and they should be, the compact one-electron density approximation described here for calculating ground-state electronic energy from the 2nd Hohenberg–Kohn theorem is also noteworthy, because it is a direct consequence of the two aforementioned basic functionals. Its two parameters have been fitted to neutral and ionic atoms, and are transferable to molecules when the approximation is used for estimating ground-state electronic energy. The computational cost grows in proportion to the number of nuclei (M), with low disk-space usage and modest numerical integration. Its properties are discussed and compared with known ab initio methods, and for energy differences (here, atomic ionization potentials) it is comparable to, and sometimes better than, those methods. It does not reach chemical accuracy for total electronic energy, but beside its amusing simplicity, it is interesting from a theoretical point of view and can serve as a generator function for more accurate one-electron density models. © 2008 Wiley Periodicals, Inc. J Comput Chem 2009 [source]
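The Thomas–Fermi main term can be evaluated directly by numerical integration of ρ^(5/3). A sketch for the hydrogen 1s density in atomic units (a standard textbook check, not a computation from the paper); the TF value of roughly 0.289 hartree against the exact kinetic energy of 0.5 illustrates why such main-term functionals need correction terms or fitted parameters:

```python
import math

C_F = 0.3 * (3 * math.pi ** 2) ** (2 / 3)  # Thomas-Fermi constant, ~2.871 a.u.

def tf_kinetic_energy(rho, r_max=20.0, n=20000):
    """Thomas-Fermi kinetic energy C_F * integral of rho^(5/3) d^3r for a
    spherically symmetric density, via the trapezoidal rule in r."""
    h = r_max / n
    total = 0.0
    for i in range(n + 1):
        r = i * h
        w = 0.5 if i in (0, n) else 1.0           # trapezoid endpoint weights
        total += w * 4 * math.pi * r * r * rho(r) ** (5 / 3)
    return C_F * total * h

rho_h = lambda r: math.exp(-2 * r) / math.pi     # hydrogen 1s density (atomic units)
t_tf = tf_kinetic_energy(rho_h)
print(round(t_tf, 4))  # ≈ 0.289, versus the exact value 0.5
```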

Quick scheme for evaluation of atomic charges in arbitrary aluminophosphate sieves on the basis of electron densities calculated with DFT methods

A. V. Larin
Abstract It is demonstrated that unique and simple analytical functions are justified for the atomic charge dependences q of the T (T = Al, P) and O atoms of aluminophosphates (AlPOs), using DFT calculations with several basis sets from STO-3G to 3-21G and 6-21G**. Three internal coordinates (bonds, angles, …) for the charge dependences of the T atoms, and four coordinates for the O atoms, are sufficient to reach a precision of 1.8% for the fitted q(Al), 1.0% for q(P), and 2.5% for q(O) relative to the values calculated at any basis set level. The proposed strategy consists of an iterative scheme starting from charge dependences based on the neighbors' positions only. Electrostatic potential values are computed to illustrate the differences between the calculated and fitted charges in the considered AlPO models. © 2007 Wiley Periodicals, Inc. J Comput Chem, 2007 [source]

Food trade balances and unit values: What can they reveal about price competition?

Mark J. Gehlhar
Price competition is a fundamental assumption in modeling trade. Empirical applications often use unit values as proxies for price. This is a problem if unit values cannot explain trade flows consistent with the price competition assumption. The paper determines whether this condition exists in food product trade. Trade balances by product are used to indicate successful competition in trade. Export and import unit values are used to determine if competition is dominated by price or nonprice competition. Trade flows are then categorized in four ways: successful price competition, unsuccessful price competition, successful nonprice competition, and unsuccessful nonprice competition. This categorization is applied to 372 food products using the Standard International Trade Classification. Nearly 40% of U.S. food exports could be characterized as dominated by nonprice competition. In those instances, we contend that unit values are not valid proxies for price, thereby limiting their usefulness in traditional import demand estimation and trade policy simulation models. © 2002 Wiley Periodicals, Inc. [source]
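The paper's four-way categorization can be sketched as two binary tests: the trade balance decides success, and the export/import unit-value comparison decides price versus nonprice competition. A toy illustration of that logic (the numbers and the exact decision rule here are illustrative assumptions, not the paper's operational definitions):

```python
def classify_trade(exports, imports, export_uv, import_uv):
    """Four-way categorization of a product's trade flow:
    balance (success) x unit-value comparison (price vs nonprice)."""
    successful = exports > imports  # positive trade balance = successful competition
    # Exporting at a higher unit value than competing imports suggests quality
    # (nonprice) competition; a lower unit value suggests price competition.
    kind = "nonprice" if export_uv > import_uv else "price"
    outcome = "successful" if successful else "unsuccessful"
    return f"{outcome} {kind} competition"

print(classify_trade(120.0, 80.0, 3.5, 2.0))  # → successful nonprice competition
print(classify_trade(40.0, 90.0, 1.2, 1.8))   # → unsuccessful price competition
```

Applied product by product, a rule of this shape yields the kind of tabulation behind the finding that nearly 40% of U.S. food exports fall in the nonprice categories.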

Forecasting interest rate swap spreads using domestic and international risk factors: evidence from linear and non-linear models

Ilias Lekkos
Abstract This paper explores the ability of factor models to predict the dynamics of US and UK interest rate swap spreads within a linear and a non-linear framework. We reject linearity for the US and UK swap spreads in favour of a regime-switching smooth transition vector autoregressive (STVAR) model, where the switching between regimes is controlled by the slope of the US term structure of interest rates. We compare the ability of the STVAR model to predict swap spreads with that of a non-linear nearest-neighbours model as well as that of linear AR and VAR models. We find some evidence that the non-linear models predict better than the linear ones. At short horizons, the nearest-neighbours (NN) model predicts US swap spreads better than the STVAR model in periods of increasing risk conditions, and UK swap spreads better in periods of decreasing risk conditions. At long horizons, the STVAR model increases its forecasting ability over the linear models, whereas the NN model does not outperform the rest of the models. Copyright © 2007 John Wiley & Sons, Ltd. [source]
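The smooth transition at the heart of an STVAR model is typically a logistic function of the transition variable (here, the term-structure slope), blending two regimes continuously rather than switching abruptly. A generic sketch, with hypothetical slope and regime values rather than anything estimated in the paper:

```python
import math

def logistic_transition(s, gamma, c):
    """Smooth transition weight G(s) = 1 / (1 + exp(-gamma*(s - c))), in [0, 1].
    gamma controls the sharpness of the switch, c is the threshold."""
    return 1.0 / (1.0 + math.exp(-gamma * (s - c)))

def stvar_blend(s, regime_low, regime_high, gamma=5.0, c=0.0):
    """Blend two regime forecasts by the transition weight driven by s
    (here, a stand-in for the slope of the US term structure)."""
    g = logistic_transition(s, gamma, c)
    return (1.0 - g) * regime_low + g * regime_high

print(logistic_transition(0.0, 5.0, 0.0))      # → 0.5 (at the threshold the regimes mix equally)
print(round(stvar_blend(2.0, 10.0, 30.0), 4))  # steep slope: forecast pulled toward regime_high
```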

Long-memory dynamic Tobit models

A. E. Brockwell
Abstract We introduce a long-memory dynamic Tobit model, defining it as a censored version of a fractionally integrated Gaussian ARMA model, which may include seasonal components and/or additional regression variables. Parameter estimation for such a model using standard techniques is typically infeasible, since the model is not Markovian, cannot be expressed in a finite-dimensional state-space form, and includes censored observations. Furthermore, the long-memory property renders a standard Gibbs sampling scheme impractical. Therefore we introduce a new Markov chain Monte Carlo sampling scheme, which is orders of magnitude more efficient than the standard Gibbs sampler. The method is inherently capable of handling missing observations. In case studies, the model is fit to two time series: one consisting of volumes of requests to a hard disk over time, and the other consisting of hourly rainfall measurements in Edinburgh over a 2-year period. The resulting posterior distributions for the fractional differencing parameter demonstrate, for these two time series, the importance of the long-memory structure in the models. Copyright © 2006 John Wiley & Sons, Ltd. [source]
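The long-memory (fractionally integrated) component expands the operator (1 − B)^d into slowly decaying coefficients, which is why no finite-dimensional state-space form exists. A sketch of the standard recursion for those weights (purely illustrative; the paper's censored MCMC machinery is far beyond this):

```python
def fracdiff_weights(d, n):
    """First n coefficients of the fractional differencing operator (1 - B)^d,
    via the recursion w_0 = 1, w_j = w_{j-1} * (j - 1 - d) / j."""
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (j - 1 - d) / j)
    return w

w = fracdiff_weights(0.4, 6)
print([round(x, 4) for x in w])
# For 0 < d < 0.5 the weights decay hyperbolically (long memory),
# unlike the geometric decay of the terms in a short-memory ARMA model.
```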

Assessing the forecasting accuracy of alternative nominal exchange rate models: the case of long memory

David Karemera
Abstract This paper presents an autoregressive fractionally integrated moving-average (ARFIMA) model of nominal exchange rates and compares its forecasting capability with the monetary structural models and the random walk model. Monthly observations are used for Canada, France, Germany, Italy, Japan and the United Kingdom for the period of April 1973 through December 1998. The estimation method is Sowell's (1992) exact maximum likelihood estimation. The forecasting accuracy of the long-memory model is formally compared to the random walk and the monetary models, using the recently developed Harvey, Leybourne and Newbold (1997) test statistics. The results show that the long-memory model is more efficient than the random walk model in steps-ahead forecasts beyond 1 month for most currencies and more efficient than the monetary models in multi-step-ahead forecasts. This new finding strongly suggests that the long-memory model of nominal exchange rates be studied as a viable alternative to the conventional models. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Planning models for parallel batch reactors with sequence-dependent changeovers

AICHE JOURNAL, Issue 9 2007
Muge Erdirik-Dogan
Abstract In this article we address the production planning of parallel multiproduct batch reactors with sequence-dependent changeovers, a challenging problem that has been motivated by a real-world application of a specialty chemicals business. We propose two production planning models that anticipate the impact of the changeovers in this batch processing problem. The first model is based on underestimating the effects of the changeovers, which leads to an MILP problem of moderate size. The second model incorporates sequencing constraints that yield very accurate predictions, but at the expense of a larger MILP problem. To solve problems that are large in terms of the number of products and reactors, or the length of the time horizon, we propose a decomposition technique based on a rolling horizon scheme and also a relaxation of the detailed planning model. Several examples are presented to illustrate the performance of the proposed models. © 2007 American Institute of Chemical Engineers AIChE J, 2007 [source]
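Why sequencing constraints matter can be seen from a toy cost evaluation: with sequence-dependent changeovers, the same set of batches incurs different changeover time depending on their order. A sketch with a hypothetical, asymmetric 3-product changeover matrix (invented numbers, not the paper's data):

```python
def total_changeover(sequence, changeover):
    """Total sequence-dependent changeover time of running `sequence` on one
    reactor, given a dict of (from_product, to_product) -> hours."""
    return sum(changeover[(a, b)] for a, b in zip(sequence, sequence[1:]))

# Hypothetical 3-product changeover matrix (hours); asymmetric, as in practice:
co = {("A", "B"): 2.0, ("B", "A"): 5.0,
      ("A", "C"): 1.0, ("C", "A"): 4.0,
      ("B", "C"): 3.0, ("C", "B"): 1.5}

print(total_changeover(["A", "C", "B"], co))  # → 2.5
print(total_changeover(["A", "B", "C"], co))  # → 5.0
```

A model that underestimates changeovers effectively prices every sequence at the cheaper figure; the sequencing-constrained MILP pays the true, order-dependent cost, which is why it is more accurate but larger.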

Induction of a neoarthrosis by precisely controlled motion in an experimental mid-femoral defect

Dennis M. Cullinane
Bone regeneration during fracture healing has been demonstrated repeatedly, yet the regeneration of articular cartilage and joints has not yet been achieved. It has been recognized, however, that the mechanical environment during fracture healing can be correlated to the contributions of either the endochondral or intramembranous processes of bone formation, and to the resultant tissue architecture. Using this information, the goal of this study was to test the hypothesis that induced motion can directly regulate osteogenic and chondrogenic tissue formation in a rat mid-femoral bone defect and thereby influence the anatomical result. Sixteen male Sprague Dawley rats (400 ± 20 g) underwent production of a mid-diaphyseal, non-critical-sized 3.0 mm segmental femoral defect with rigid external fixation using a custom-designed four-pin fixator. One group of eight animals represented the controls and underwent surgery and constant rigid fixation. In the treatment group the custom external fixator was used to introduce daily interfragmentary bending strain in the eight treatment animals (12° of angular excursion), with a hypothetical symmetrical bending load centered within the gap. The eight animals in the treatment group received motion at 1.0 Hz, for 10 min a day, with a 3-days-on, 1-day-off loading protocol for the first two weeks, and 2-days-on, 1-day-off for the remaining three weeks. Data collection included histological and immunohistological identification of tissue types, and mean collagen fiber angles and angular conformity between individual fibers in superficial, intermediate, and deep zones within the cartilage. These parameters were compared between the treatment group, rat knee articular cartilage, and the control group as a structural outcome assessment. After 35 days the control animals demonstrated varying degrees of osseous union of the defect, with some animals showing partial union.
In every individual within the mechanical treatment group the defect completely failed to unite. Bony arcades developed in the experimental group, capping the termini of the bone segments on both sides of the defect in four out of six animals completing the study. These new structures were typically covered with cartilage, as identified by specific histological staining for Type II collagen and proteoglycans. The distribution of collagen within analogous superficial, intermediate, and deep zones of the newly formed cartilage tissue demonstrated preferred fiber angles consistent with those seen in articular cartilage. Although not resulting in complete joint development, these neoarthroses show that the induced motion selectively controlled the formation of cartilage and bone during fracture repair, and that it can be specifically directed. They further demonstrate that the spatial organization of molecular components within the newly formed tissue, at both microanatomical and gross levels, are influenced by their local mechanical environment, confirming previous theoretical models. © 2002 Orthopaedic Research Society. Published by Elsevier Science Ltd. All rights reserved. [source]

Air-liquid interface (ALI) culture of human bronchial epithelial cell monolayers as an in vitro model for airway drug transport studies

Hongxia Lin
Abstract Serially passaged normal human bronchial epithelial (NHBE) cell monolayers were established on Transwell® inserts via an air-liquid interface (ALI) culture method. NHBE cells were seeded on polyester Transwell® inserts, followed by ALI culture from day 3, which resulted in a peak TEER value of 766 ± 154 Ω × cm2 on the 8th day. Morphological characteristics were observed by light microscopy and SEM, while the formation of tight junctions was visualized by actin staining; together these confirmed the successful formation of a tight monolayer. The transepithelial permeability (Papp) of model drugs significantly increased with increasing lipophilicity and showed a good linear relationship, which indicated that lipophilicity is an important factor in determining the Papp value. The expression of the P-gp transporter in NHBE cell monolayers was confirmed by the significantly higher basolateral-to-apical permeability of rhodamine 123 compared with the reverse direction, and by RT-PCR of MDR1 mRNA. However, the symmetric transport of fexofenadine·HCl observed in these NHBE cell monolayers seems to be due to the low expression of the P-gp transporter and/or to its saturation at the high concentration of fexofenadine·HCl. Thus, the development of tight junctions and the expression of P-gp in the NHBE cell monolayers in this study imply that they could be a suitable in vitro model for the evaluation of systemic drug absorption via airway delivery, and that they reflect in vivo conditions better than P-gp-overexpressing cell line models. © 2006 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 96: 341–350, 2007 [source]
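The two quantities underlying this kind of transport study, the apparent permeability and the efflux ratio used to flag P-gp activity, follow from simple formulas: Papp = (dQ/dt)/(A·C0) and ER = Papp(B→A)/Papp(A→B). A sketch with hypothetical transport rates (not the paper's measurements):

```python
def papp(dq_dt, area_cm2, c0):
    """Apparent permeability Papp = (dQ/dt) / (A * C0), in cm/s when dQ/dt is
    in mass/s, A in cm^2 and the donor concentration C0 in mass/cm^3."""
    return dq_dt / (area_cm2 * c0)

def efflux_ratio(papp_ba, papp_ab):
    """Papp(B->A) / Papp(A->B); values well above 1 suggest active efflux (e.g. P-gp)."""
    return papp_ba / papp_ab

# Hypothetical rhodamine 123 transport rates, for illustration only:
p_ab = papp(2.0e-9, 1.12, 1.0e-3)  # apical -> basolateral
p_ba = papp(8.0e-9, 1.12, 1.0e-3)  # basolateral -> apical
print(round(efflux_ratio(p_ba, p_ab), 1))  # → 4.0
```

An asymmetry like this is the functional evidence for efflux; a ratio near 1 (as reported here for fexofenadine·HCl) indicates symmetric, passive-dominated transport.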

Filling certain cuts in discrete weakly o-minimal structures

Stefano Leonesi
Abstract Discrete weakly o-minimal structures, although not so stimulating as their dense counterparts, do exhibit a certain wealth of examples and pathologies. For instance they lack prime models and monotonicity for definable functions, and are not preserved by elementary equivalence. First we exhibit these features. Then we consider a countable theory of weakly o-minimal structures with infinite definable discrete (convex) subsets and we study the Boolean algebra of definable sets of its countable models. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Structural properties of network revenue management models: An economic perspective

Alec Morton
Abstract Many revenue management problems have a network aspect. In this paper, we argue that a network can be thought of as a system of substitutable and complementary products, and that the value of a revenue management model should be supermodular or submodular in the availability of two resources depending on whether the resources are economic complements or substitutes. We demonstrate that this is true in the case of a two-resource dynamic stochastic revenue management model and show how this applies to multi-resource deterministic static revenue management models. © 2006 Wiley Periodicals, Inc. Naval Research Logistics, 2006 [source]
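Supermodularity (increasing differences) can be checked numerically on a grid: v(x+1, y+1) + v(x, y) ≥ v(x+1, y) + v(x, y+1) for all (x, y). A sketch with two toy value functions, one behaving like complements and one like substitutes (illustrative functions, not the paper's revenue model):

```python
from itertools import product

def is_supermodular(v, xs, ys):
    """Check increasing differences of v on a grid:
    v(x+1, y+1) + v(x, y) >= v(x+1, y) + v(x, y+1) for all (x, y).
    Holding when the two resources behave as economic complements."""
    return all(v(x + 1, y + 1) + v(x, y) >= v(x + 1, y) + v(x, y + 1)
               for x, y in product(xs, ys))

# Toy value functions:
complements = lambda x, y: min(x, y)         # capacity useful only in pairs
substitutes = lambda x, y: -((x + y) ** 2)   # concave in total capacity

grid = range(0, 5)
print(is_supermodular(complements, grid, grid))  # → True
print(is_supermodular(substitutes, grid, grid))  # → False (submodular instead)
```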

Relations and disproportions: The labor of scholarship in the knowledge economy

ABSTRACT In this article, I provide an ethnographic exploration of some of the terms for imagining knowledge in today's "knowledge society," and I attempt to situate the kind of "sociology of knowledge" behind this imagination. In particular, I am interested in the sociological imagination of knowledge in terms of a relational economy, in which knowledge flows uninterruptedly to create and shape what Yochai Benkler has dubbed "the wealth of networks." I pursue this interest through an ethnography of the production of research among humanities scholars at Spain's National Research Council (CSIC). For CSIC's human scientists, books (and other bookish analogues, such as libraries or manuscript collections) occupy a place of prominence in the institutional production of research. This economy of scholarship (between books, between people and books, and between what books do and what institutions and researchers imagine them to do) finds itself at a "disproportionate" distance from the "network economy of information" encountered in the literature on the knowledge economy and promoted in certain circles within CSIC. I contrast the epistemological economies of CSIC scientists' relational and disproportional views on research and, ultimately, attempt to provide an anthropological description of a contemporary sociology of knowledge, including its analytical categories and models. [knowledge, knowledge economy, relations, proportionality, labor, academia] [source]

Convergence acceleration by varying time-step size using Bi-CGSTAB method for turbulent flow computation

W. B. Tsai
Abstract A varying step-size approach, in both the time span and the spatial coordinate system, designed to achieve fast convergence is demonstrated in this study. The method is based on the concept of minimization of residuals by the Bi-CGSTAB algorithm, so that convergence can be enforced by varying the time-step size. The numerical results show that the time-step size determined by the proposed method improves the convergence rate for turbulent computations using advanced turbulence models in low-Reynolds-number form, and the degree of improvement increases with the complexity of the turbulence models. © 2001 John Wiley & Sons, Inc. Numer Methods Partial Differential Eq 17: 454–474, 2001. [source]
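The idea of tying the pseudo-time step to the residual can be sketched with a generic controller that grows the step as the residual falls, clamped to a safe range. This is a simplified stand-in for, not a reproduction of, the Bi-CGSTAB-based rule in the paper:

```python
def adapt_dt(dt, residual, target, p=0.5, dt_min=1e-6, dt_max=10.0):
    """Grow the pseudo-time step as the residual falls below `target`
    (a common switched-evolution-style heuristic); clamp to a safe range."""
    dt_new = dt * (target / residual) ** p
    return min(dt_max, max(dt_min, dt_new))

dt = 0.1
for res in [1e-1, 1e-2, 1e-3, 1e-4]:  # residual shrinking over the iterations
    dt = adapt_dt(dt, res, target=1e-1)
    print(round(dt, 4))  # step grows steadily, then hits the dt_max clamp
```

Larger pseudo-time steps late in the solve push the implicit iteration toward a pure Newton-like update, which is the mechanism behind the reported convergence acceleration.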

Approaches to learning on placement: the students' perspective

Clare Kell
Abstract Background and Purpose. Continuing Professional Development activity is a requirement of Allied Health Professional registration in the UK and is said to be most effectively supported by practitioners who adopt a deep approach to learning; a UK university has therefore been exploring how its pre-registration curriculum influences learner development. This paper investigates the possible influences of the clinical placement component of the curriculum, which is structured as four 4-week blocks during both Years 2 and 3 of the 3-year BSc (Hons) programme. A range of placement models is used within this structure, including the traditional 1:1 educator:student ratio and models with a higher ratio of student(s):educator(s). Methods. This phase of the larger project used a case study design framed around students from two academic year groups on one UK undergraduate, pre-registration physiotherapy programme. Three questionnaires, comprising a learning-approaches inventory, a demographic questionnaire and a placement self-assessment form, were posted to Year 2 and 3 students during one clinical placement. The students were invited to complete the questionnaires halfway through their placement, but in advance of the first formal placement education feedback meeting. The need for students' self-assessment prevented follow-up data collection. Results. Analysis of the data from the learning-approaches inventory against the demographic variables and placement assessment scores suggests that students' learning strategies depend upon the number of students, educators and assessors involved in their placement. The paper explores the possible links between placement experience, learning strategy and academic outcome. The authors question assumptions about the perceived benefits of some placement education models. Conclusion. Increasing the student:educator or educator:student ratio may have a detrimental effect on students' learning development when placements are of 4-week duration. If such placement models are adopted, then students and placement educators must be adequately prepared and supported so that students' learning development towards the deep-learning, autonomous professionals of tomorrow can continue through placement education. Copyright © 2008 John Wiley & Sons, Ltd. [source]

On rate independent models for crack propagation

Dorothee Knees
We model the evolution of a single crack as a rate-independent process based on the Griffith criterion. Three approaches are presented: a model based on global energy minimization, a model based on a local description involving the energy release rate, and a refined local model which is the limit problem of regularized viscous models. Finally, we present an example which sheds light on the different predictions of the models. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
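The Griffith criterion that underlies this abstract can be sketched with the classical center-crack formula: for a crack of half-length a in an infinite plate under remote stress σ, the plane-stress energy release rate is G = σ²πa/E, and the crack propagates once G reaches the material toughness G_c. The geometry and the numerical values below are illustrative assumptions, not taken from the paper.

```python
import math

def energy_release_rate(sigma, a, E):
    """G = sigma^2 * pi * a / E  (center crack, infinite plate, plane stress)."""
    return sigma**2 * math.pi * a / E

def critical_crack_length(sigma, E, Gc):
    """Half-length at which G(a) first reaches Gc: a_c = Gc * E / (pi * sigma^2)."""
    return Gc * E / (math.pi * sigma**2)

# Illustrative glass-like values (assumed)
E = 70e9        # Young's modulus [Pa]
Gc = 10.0       # toughness [J/m^2]
sigma = 50e6    # applied remote stress [Pa]
a_c = critical_crack_length(sigma, E, Gc)
# Griffith criterion: the crack is stable while G(a) < Gc and grows at G(a) >= Gc
```

The rate-independent models in the paper replace this pointwise check with global or local stability conditions along the whole loading history.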

Asymptotic equivalence and contiguity of some random graphs

Svante Janson
Abstract We show that asymptotic equivalence, in a strong form, holds between two random graph models with slightly differing edge probabilities under substantially weaker conditions than what might naively be expected. One application is a simple proof of a recent result by van den Esker, van der Hofstad, and Hooghiemstra on the equivalence between graph distances for some random graph models. © 2009 Wiley Periodicals, Inc. Random Struct. Alg., 2010 [source]
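The flavor of such comparisons can be sketched numerically: for two edge-independent random graph models with per-edge probabilities p_e and p'_e, a simple coupling argument bounds the total variation distance between the two graph distributions by Σ_e |p_e − p'_e|. This crude bound is not the sharper equivalence condition proved in the paper, and the probabilities below are assumed for illustration.

```python
import numpy as np

def tv_upper_bound(p, q):
    """Coupling bound: the total variation distance between two edge-independent
    random-graph distributions with edge probabilities p_e, q_e is at most
    sum_e |p_e - q_e| (couple each edge independently)."""
    return np.abs(np.asarray(p) - np.asarray(q)).sum()

n = 100
m = n * (n - 1) // 2                   # number of potential edges
p = np.full(m, 1.0 / n)                # sparse model with p_e = 1/n (assumed)
q = p * (1.0 + 1.0 / np.sqrt(m))       # slightly perturbed edge probabilities
bound = tv_upper_bound(p, q)           # < 1, so the models are not trivially far apart
```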

Dynamic hedging with futures: A copula-based GARCH model

Chih-Chiang Hsu
In a number of earlier studies it has been demonstrated that the traditional regression-based static approach is inappropriate for hedging with futures, with the result that a variety of alternative dynamic hedging strategies have emerged. In this study the authors propose a class of new copula-based GARCH models for the estimation of the optimal hedge ratio and compare their effectiveness with that of other hedging models, including the conventional static, the constant conditional correlation (CCC) GARCH, and the dynamic conditional correlation (DCC) GARCH models. With regard to the reduction of variance in the returns of hedged portfolios, the empirical results show that in both the in-sample and out-of-sample tests, with full flexibility in the distribution specifications, the copula-based GARCH models perform more effectively than other dynamic hedging models. © 2008 Wiley Periodicals, Inc. Jrl Fut Mark 28:1095,1116, 2008 [source]