General Form (general + form)

Selected Abstracts


Polynomial basis functions on pyramidal elements

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 12 2008
M. J. Bluck
Abstract Pyramidal elements are necessary to effect the transition from tetrahedral to hexahedral elements, a common requirement in practical finite element applications. However, existing pyramidal transition elements suffer from degeneracy or other numerical difficulties, requiring, at the least, warnings and care in their use. This paper presents a general technique for the construction of nodal basis functions on pyramidal finite elements. General forms for basis functions of arbitrary order are presented. The basis functions so derived are fully conformal and free of degeneracy. Copyright © 2007 John Wiley & Sons, Ltd. [source]
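The construction in the paper is general and arbitrary-order; as a small illustration of why pyramids need non-polynomial (rational) terms at all, the sketch below evaluates the classic first-order conforming vertex basis for a 5-node pyramid. This is not Bluck's construction, just the standard rational basis on a reference pyramid with base [-1,1]^2 at z = 0 and apex (0, 0, 1); the element geometry and node ordering are assumptions for illustration.

```python
# Classic rational vertex basis for a 5-node reference pyramid (base
# [-1,1]^2 at z=0, apex at (0,0,1)). Note the rational scaling x/(1-z):
# a purely polynomial basis cannot be conforming on the pyramid.

def pyramid_basis(x, y, z):
    """Return the five nodal basis values at (x, y, z)."""
    if z >= 1.0:                 # apex: the four base functions vanish in the limit
        return [0.0, 0.0, 0.0, 0.0, 1.0]
    s, t = x / (1.0 - z), y / (1.0 - z)   # scaled base-plane coordinates
    q = (1.0 - z) / 4.0
    return [q * (1 - s) * (1 - t),
            q * (1 + s) * (1 - t),
            q * (1 + s) * (1 + t),
            q * (1 - s) * (1 + t),
            z]

# Partition of unity at an interior point:
vals = pyramid_basis(0.3, -0.2, 0.4)
assert abs(sum(vals) - 1.0) < 1e-12

# Kronecker-delta property at the five vertices:
verts = [(-1, -1, 0), (1, -1, 0), (1, 1, 0), (-1, 1, 0), (0, 0, 1)]
for i, v in enumerate(verts):
    vals = pyramid_basis(*v)
    assert all(abs(val - (1.0 if j == i else 0.0)) < 1e-12
               for j, val in enumerate(vals))
```

The four base functions sum to 1 - z and the apex function contributes z, so the partition of unity holds everywhere despite the rational terms.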


The Many Facets of Identity Criteria

DIALECTICA, Issue 2 2004
Massimiliano Carrara
The aim of this note is to discuss the general form and role of identity criteria. We consider two readings, which express two different functions of identity criteria: the first expresses the epistemic function, whilst the second deals with the ontological function. We argue that there are several problems with the specification of both functions. As a consequence, we conclude that identity criteria are not necessary to provide ontological legitimacy. [source]


Regional analysis of bedrock stream long profiles: evaluation of Hack's SL form, and formulation and assessment of an alternative (the DS form)

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 5 2007
Geoff Goldrick
Abstract The equilibrium form of the fluvial long profile has been used to elucidate a wide range of aspects of landscape history, including tectonic activity in collision zones, continental margin and other intraplate settings, as well as other base-level changes such as those due to sea-level fluctuations. The Hack SL form of the long profile, which describes a straight line on a log-normal plot of elevation (normal) versus distance (logarithmic), is the equilibrium long profile form that has been most widely used in such studies; slope–area analysis has also been used in recent years. We show that the SL form is a special case of a more general form of the equilibrium long profile (here called the DS form) that can be derived from the power relationship between stream discharge and downstream distance, and the dependence of stream incision on stream power. The DS form provides a better fit than the SL form to river long profiles in an intraplate setting in southeastern Australia experiencing low rates of denudation and mild surface uplift. We conclude that, if an a priori form of the long profile is to be used for investigations of regional landscape history, the DS form is preferable. In particular, the DS form in principle enables equilibrium steepening due to an increase in channel substrate lithological resistance (a parallel shift in the DS plot) to be distinguished from disequilibrium steepening due to long profile rejuvenation (disordered outliers on the DS plot). Slope–area analysis and the slope–distance (DS) approach outlined here are complementary, reflecting the close relationship between downstream distance and downstream catchment area. Copyright © 2006 John Wiley & Sons, Ltd. [source]
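A hedged numerical sketch of the idea: assume the equilibrium slope–distance relation takes a power-law form S = k * D**(-lam), with the exponent lam free (the spirit of a DS-type form) and lam = 1 as the SL-like special case of constant S * D. The symbols k and lam are illustrative, not the paper's notation; fitting is ordinary least squares in log-log space.

```python
import math

# Fit S = k * D**(-lam) by linear regression on ln S versus ln D.
# A free exponent lam plays the role of the generalized (DS-type) form;
# lam = 1 recovers the SL-like special case of constant S * D.

def fit_power_law(D, S):
    """Least-squares fit of ln S = ln k - lam * ln D; returns (k, lam)."""
    lx = [math.log(d) for d in D]
    ly = [math.log(s) for s in S]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    return math.exp(my - slope * mx), -slope

# Synthetic equilibrium profile with k = 0.05, lam = 0.7:
D = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0]
S = [0.05 * d ** -0.7 for d in D]
k, lam = fit_power_law(D, S)
assert abs(k - 0.05) < 1e-9 and abs(lam - 0.7) < 1e-9
```

On real profiles the residual pattern around such a fit is what distinguishes a parallel shift (lithological steepening) from disordered outliers (rejuvenation).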


A mode II weight function for subsurface cracks in a two-dimensional half-space

FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 10 2002
A. MAZZÙ
ABSTRACT The general properties of a mode II weight function (WF) for a subsurface crack in a two-dimensional half-space are discussed. A general form for the WF is proposed, and its analytical expression is deduced from the asymptotic properties of the displacement field near the crack tips and from some reference cases obtained with finite element models. Although the WF has general validity, the main interest is in its application to the study of rolling contact fatigue: its properties are explored for a crack depth range within which the most common failure phenomena in rolling contact are experimentally observed, and for a crack length range within the field of short cracks. The accuracy is estimated by comparison with several results obtained by FEM models, and its validity in the explored crack depth range is shown. [source]


Contiguity Constraints for Single-Region Site Search Problems

GEOGRAPHICAL ANALYSIS, Issue 4 2000
Thomas J. Cova
This paper proposes an explicit set of constraints as a general approach to the contiguity problem in site search modeling. Site search models address the challenging problem of identifying the best area in a study region for a particular land use, given that there are no candidate sites. Criteria that commonly arise in a search include a site's area, suitability, cost, shape, and proximity to surrounding geographic features. An unsolved problem in this modeling arena is the identification of a general set of mathematical programming constraints that can guarantee a contiguous solution (site) for any 0–1 integer-programming site search formulation. The constraints proposed herein address this problem, and we evaluate their efficacy and efficiency in the context of a regular and irregular tessellation of geographic space. An especially efficient constraint form is derived from a more general form and similarly evaluated. The results demonstrate that the proposed constraints represent a viable, general approach to the contiguity problem. [source]
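The paper's contribution is enforcing contiguity inside the optimization model itself; as a lighter-weight companion, the sketch below merely checks contiguity of a candidate site after the fact, using 4-neighbour adjacency on a regular grid. The grid representation and (row, col) cell coding are assumptions for illustration, not the paper's formulation.

```python
from collections import deque

# Breadth-first search over the selected cells: the site is contiguous
# exactly when every selected cell is reachable from any one of them
# through 4-neighbour (rook) adjacency.

def is_contiguous(cells):
    """True if the selected cells form one 4-connected region."""
    cells = set(cells)
    if not cells:
        return True
    seen = set()
    queue = deque([next(iter(cells))])
    while queue:
        r, c = queue.popleft()
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nb in cells and nb not in seen:
                queue.append(nb)
    return seen == cells

assert is_contiguous({(0, 0), (0, 1), (1, 1)})   # L-shaped site: one region
assert not is_contiguous({(0, 0), (2, 2)})       # two disjoint fragments
```

A check like this is useful for validating solver output, but unlike the paper's constraints it cannot steer the optimizer toward contiguous solutions.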


A unified bounding surface plasticity model for unsaturated soils

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 3 2006
A.R. Russell
Abstract A unified constitutive model for unsaturated soils is presented in a critical state framework using the concepts of effective stress and bounding surface plasticity theory. Consideration is given to the effects of unsaturation and particle crushing in the definition of the critical state. A simple isotropic elastic rule is adopted. A loading surface and a bounding surface of the same shape are defined using simple and versatile functions. The bounding surface and elastic rules lead to the existence of a limiting isotropic compression line, towards which the stress trajectories of all isotropic compression load paths approach. A non-associated flow rule of the same general form is assumed for all soil types. Isotropic hardening/softening occurs due to changes in plastic volumetric strains as well as suction for some unsaturated soils, enabling the phenomenon of volumetric collapse upon wetting to be accounted for. The model is used to simulate the stress–strain behaviour observed in unsaturated speswhite kaolin subjected to three triaxial test load paths. The fit between simulation and experiment is improved compared to that of other constitutive models developed using conventional Cam-Clay-based plasticity theory and calibrated using the same set of data. Also, the model is used to simulate to a high degree of accuracy the stress–strain behaviour observed in unsaturated Kurnell sand subjected to two triaxial test load paths and the oedometric compression load path. For oedometric compression, theoretical simulations indicate that the suction was not sufficiently large to cause samples to separate from the confining ring. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Ground response curves for rock masses exhibiting strain-softening behaviour

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 13 2003
E. Alonso
Abstract A literature review has shown that there exist adequate techniques to obtain ground reaction curves for tunnels excavated in elastic-brittle and perfectly plastic materials. However, for strain-softening materials it seems that the problem has not been sufficiently analysed. In this paper, a one-dimensional numerical solution to obtain the ground reaction curve (GRC) for circular tunnels excavated in strain-softening materials is presented. The problem is formulated in a very general form and leads to a system of ordinary differential equations. By adequately defining a fictitious 'time' variable and re-scaling some variables, the problem is converted into an initial value problem, which can be solved numerically by a Runge–Kutta–Fehlberg method implemented in the MATLAB environment. The method has been developed for various common behaviour models, including the Tresca, Mohr–Coulomb and Hoek–Brown failure criteria, in all cases with non-associative flow rules and two-segment piecewise linear functions of a principal strain-dependent plastic parameter to model the transition between peak and residual failure criteria. Some particular examples for the different failure criteria have been run, and they agree well with closed-form solutions, where these exist, or with FDM-based code results. Parametric studies and specific charts are created to highlight the influence of different parameters. The proposed methodology is intended as a general numerical framework in which standard and new behaviour models for obtaining GRCs for tunnels excavated in strain-softening materials can be implemented. This way of solving such problems has proved to be more efficient and less time consuming than using FEM- or FDM-based numerical 2D codes. Copyright © 2003 John Wiley & Sons, Ltd. [source]
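The key numerical move above is recasting the boundary value problem as an initial value problem and integrating it with a Runge–Kutta scheme. As a minimal stand-in, the sketch below integrates a toy scalar IVP with the classic fixed-step RK4 method; the paper uses the adaptive Runge–Kutta–Fehlberg variant, which adds embedded error estimation and step-size control, and the toy equation y' = -y is purely illustrative.

```python
import math

# Classic fixed-step fourth-order Runge-Kutta integration of y' = f(t, y).
# An RKF45 scheme would pair this with a 5th-order estimate to adapt h.

def rk4(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Toy IVP y' = -y, y(0) = 1, whose exact solution is exp(-t):
y1 = rk4(lambda t, y: -y, 1.0, 0.0, 1.0, 100)
assert abs(y1 - math.exp(-1.0)) < 1e-8
```

In the paper the state is a vector (stresses, strains, radii) and the "time" is a fictitious rescaled variable, but the integration loop has the same shape.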


Solid–liquid–air coupling in multiphase porous media

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 3 2003
Lyesse Laloui
Abstract This paper addresses various issues concerning the modelling of solid–liquid–air coupling in multiphase porous media, with an application to unsaturated soils. General considerations based on thermodynamics permit the derivation and discussion of the general form of the field equations; two cases are considered: a three-phase porous material with solid, liquid and gas, and a two-phase porous material with solid, liquid and empty space. Emphasis is placed on the differences in the formulation and on the role of the gas phase. The finite element method is used for the discrete approximation of the partial differential equations governing the problem. The two formulations are then analysed with respect to a documented drainage experiment carried out by the authors. The merits and shortcomings of the two approaches are shown. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Solutions of pore pressure build up due to progressive waves

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 9 2001
L. Cheng
Abstract The analytical solution of soil pore pressure accumulation due to a progressive wave is examined in detail. First, the errors contained in a published analytical solution for wave-induced pore pressure accumulation are addressed, and the correct solution is presented in a more general form. The behaviour of the solution under different soil conditions is then investigated. It is found that the solution for deep soil conditions is sensitive to the soil shear stress in the top thin layer of the soil, whereas for shallow and finite-depth soil conditions the solution is significantly influenced by the shear stress in the thin layer of soil near the impermeable base. It is also found that a small error in the soil shear stress can lead to a large error in the accumulated pore pressure. An error analysis reveals the relationships between the accuracy of the pore pressure accumulation and the accuracy of the soil shear stress. A numerical solution to the simplified Biot consolidation equation is also developed. It is shown that the error analysis is of significant value for the numerical modelling of pore pressure buildup in marine soils. Both analytical and numerical examples are given to validate the error estimation method proposed in the present paper. Copyright © 2001 John Wiley & Sons, Ltd. [source]


A subdomain boundary element method for high-Reynolds laminar flow using stream function-vorticity formulation

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 8 2004
Matja
Abstract The paper presents a new formulation of the integral boundary element method (BEM) using a subdomain technique. A continuous approximation of the function and of its derivative in the direction normal to the boundary element (hereafter 'normal flux') is introduced for solving the general form of a parabolic diffusion–convection equation. Double nodes are used for the normal flux approximation. Gradient continuity is required at the interior subdomain corners, where compatibility and equilibrium interface conditions are prescribed. The resulting system matrix, with more equations than unknowns, is solved using a fast iterative linear least-squares solver. The robustness and stability of the developed formulation are demonstrated on a backward-facing step flow and a square driven-cavity flow up to a Reynolds number of 50 000. Copyright © 2004 John Wiley & Sons, Ltd. [source]


A block-implicit numerical procedure for simulation of buoyant swirling flows in a model furnace

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 3 2003
Marcelo J. S. de Lemos
Abstract This work reports numerical results for incompressible laminar heated swirling flow in a vertical cylindrical chamber. Computations are obtained with a point-wise block-implicit scheme. The governing flow equations are written in terms of the so-called primitive variables and are recast into a general form. The discretized momentum equations are applied at each cell face and then, together with the mass-continuity, tangential-velocity and energy equations, are solved directly at each computational node. The effects of the Rayleigh, Reynolds and swirl numbers on the temperature field are discussed. Flow patterns and scalar residual histories are reported. Further, it is expected that more advanced parallel computer architectures can benefit from the error-smoothing operator described here. Copyright © 2003 John Wiley & Sons, Ltd. [source]


About yes/no queries against possibilistic databases

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 7 2007
Patrick Bosc
This article is concerned with the handling of imprecise information in databases. The need for dealing with imprecise data is increasingly acknowledged as a means of coping with real data, even though commercial systems are mostly unable to manage them. Here, the possibilistic setting is adopted because it is less demanding than the probabilistic one: any imprecise piece of information is modeled as a possibility distribution that constrains the more or less acceptable values. Such a possibilistic database has a natural interpretation in terms of a set of regular databases, which provides the basic gateway for interpreting queries. However, although this approach is sound, it is not realistic, and it is necessary to consider restricted queries for which a calculus grounded on the possibilistic database, that is, where the operators work directly on possibilistic relations, is feasible. Extended yes/no queries are dealt with here; their general form is: "to what extent is it possible and certain that tuple t (given) belongs to the answer to Q," where Q is an algebraic relational query. A strategy for processing such queries efficiently is proposed under some assumptions on the operators appearing in Q. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 691–721, 2007. [source]
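A hedged sketch of the possibility/necessity pair behind such yes/no queries: an imprecise attribute value is a possibility distribution pi over candidate values, and for a selection condition the degree of possibility is the supremum of pi over values satisfying the condition, while the degree of certainty (necessity) is one minus the supremum over values violating it. The salary distribution below is made up for illustration and is not from the article.

```python
# Possibility (Pi) and necessity (N) that an imprecisely known attribute
# value satisfies a condition, given a possibility distribution pi:
#   Pi = sup { pi(v) : v satisfies the condition }
#   N  = 1 - sup { pi(v) : v violates the condition }

def pos_nec(pi, target):
    Pi = max((p for v, p in pi.items() if v in target), default=0.0)
    N = 1.0 - max((p for v, p in pi.items() if v not in target), default=0.0)
    return Pi, N

# John's salary is imprecisely known (illustrative distribution):
pi_salary = {1000: 0.4, 1200: 1.0, 1500: 0.7}

# "To what extent is it possible and certain that the salary is >= 1200?"
Pi, N = pos_nec(pi_salary, {v for v in pi_salary if v >= 1200})
assert Pi == 1.0               # fully possible: 1200 has possibility 1
assert abs(N - 0.6) < 1e-12    # certain to degree 1 - 0.4
```

For a full algebraic query Q, the article's point is precisely that these degrees can be computed compositionally on possibilistic relations, under restrictions on the operators, instead of enumerating all regular "worlds".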


Preference solutions of probability decision making with RIM quantifiers

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 12 2005
Xinwang Liu
This article extends the quantifier-guided aggregation method to include probabilistic information. A general framework for the preference solution of decision making under uncertainty is proposed, which includes decision making under ignorance and decision making under risk as special cases with specific preference parameters. Almost all the properties, especially monotonicity, are kept in this general form. With the generating-function representation of the Regular Increasing Monotone (RIM) quantifier, some properties of the RIM quantifier are discussed. A parameterized RIM quantifier representing the valuation preference for probabilistic decision making is proposed. The risk-attitude representation method is then integrated into this quantifier-guided probabilistic decision-making model to make it a general form of decision making under uncertainty. © 2005 Wiley Periodicals, Inc. Int J Int Syst 20: 1253–1271, 2005. [source]
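To make quantifier-guided aggregation concrete, here is the standard construction of OWA weights from a RIM quantifier Q (nondecreasing, Q(0) = 0, Q(1) = 1): w_i = Q(i/n) - Q((i-1)/n). The parametric family Q(x) = x**a used below is one common choice of the kind of preference parameter the article varies; it is an illustration, not the article's specific parameterization.

```python
# OWA weights generated by a RIM quantifier Q, and the resulting
# quantifier-guided aggregation of a set of values.

def owa_weights(Q, n):
    return [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]

def owa(Q, values):
    ordered = sorted(values, reverse=True)      # OWA reorders descending
    w = owa_weights(Q, len(values))
    return sum(wi * vi for wi, vi in zip(w, ordered))

Q = lambda x: x ** 2    # a = 2: weights the worse ordered values more
w = owa_weights(Q, 4)
assert all(abs(a - b) < 1e-12 for a, b in zip(w, [1/16, 3/16, 5/16, 7/16]))
assert abs(sum(w) - 1.0) < 1e-12

# With a > 1 a single good outcome among bad ones scores low (pessimism):
assert abs(owa(Q, [1.0, 0.0, 0.0, 0.0]) - 1/16) < 1e-12
```

Varying the exponent a sweeps the attitude from optimistic (a < 1, max-like) through neutral (a = 1, arithmetic mean) to pessimistic (a > 1, min-like).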


Determining the importance weights for the design requirements in the house of quality using the fuzzy analytic network approach

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 5 2004
Gülçin Büyüközkan
Quality function deployment (QFD) has been used to translate customer needs (CNs) and wants into technical design requirements (DRs) in order to increase customer satisfaction. QFD uses the house of quality (HOQ), which is a matrix providing a conceptual map for the design process, as a construct for understanding CNs and establishing priorities of DRs to satisfy them. This article uses the analytic network process (ANP), the general form of the analytic hierarchy process (AHP), to prioritize DRs by taking into account the degree of the interdependence between the CNs and DRs and the inner dependence among them. In addition, because human judgment on the importance of requirements is always imprecise and vague, this work concentrates on a fuzzy ANP approach in which triangular fuzzy numbers are used to improve the quality of the responsiveness to CNs and DRs. A numerical example is presented to show the proposed methodology. © 2004 Wiley Periodicals, Inc. [source]
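A minimal sketch of the triangular-fuzzy-number (TFN) machinery such fuzzy ANP approaches build on: a TFN is a triple (l, m, u) with l <= m <= u, addition is componentwise, and the centroid (l + m + u) / 3 is one standard defuzzification. The priorities below are made up for illustration; this is not the article's full supermatrix procedure.

```python
# Triangular fuzzy number helpers of the kind used in fuzzy pairwise
# comparisons: exact addition, centroid defuzzification, normalization.

def tfn_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def centroid(a):
    return sum(a) / 3.0

def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

assert tfn_add((1, 2, 3), (2, 3, 4)) == (3, 5, 7)
assert centroid((1, 2, 3)) == 2.0

# Defuzzify fuzzy priorities for three design requirements, then normalize
# into crisp importance weights (illustrative numbers):
fuzzy_priorities = [(0.2, 0.3, 0.4), (0.1, 0.2, 0.3), (0.3, 0.5, 0.7)]
crisp = normalize([centroid(t) for t in fuzzy_priorities])
assert abs(sum(crisp) - 1.0) < 1e-12
```

The ANP step itself then feeds such weights through interdependence matrices; the TFNs only soften the pairwise judgments that enter those matrices.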


Tikhonov regularization in standardized and general form for multivariate calibration with application towards removing unwanted spectral artifacts

JOURNAL OF CHEMOMETRICS, Issue 1-2 2006
Forrest Stout
Abstract Tikhonov regularization (TR) is an approach to forming a multivariate calibration model for y = Xb. It includes a regularization operator matrix L that is usually set to the identity matrix I; in this situation, TR is said to operate in standard form and is the same as ridge regression (RR). Alternatively, TR can function in general form with L ≠ I, where L is used to remove unwanted spectral artifacts. To simplify the computations for TR in general form, a standardization process can be used on X and y to transform the problem into TR in standard form, and an RR algorithm can then be used. The calculated regression vector in standardized space must be back-transformed to the general form, which can then be applied to spectra that have not been standardized. The calibration model building methods of principal component regression (PCR), partial least squares (PLS) and others can also be implemented with the standardized X and y. Regardless of the calibration method, armed with y, X and L, a regression vector is sought that can correct for irrelevant spectral variation in predicting y. In this study, L is set to various derivative operators to obtain smoothed TR, PCR and PLS regression vectors in order to generate models robust to noise and/or temperature effects. Results of this smoothing process are examined for spectral data without excessive noise or other artifacts, spectral data with additional noise added, and spectral data exhibiting temperature-induced peak shifts. When the noise level is small, derivative operator smoothing was found to slightly degrade the root mean square error of validation (RMSEV) as well as the prediction variance indicator represented by the regression vector 2-norm, thereby deteriorating the model harmony (bias/variance trade-off). The effective rank (ER) (parsimony) was found to decrease with smoothing, and in doing so a harmony/parsimony trade-off is formed. For the temperature-affected data and some of the noisy data, derivative operator smoothing decreases the RMSEV, but at a cost of greater values of the regression vector 2-norm; the ER was found to increase and hence the parsimony degraded. A simulated data set from a previous study that used TR in general form was reexamined. In the present study, the standardization process is used with L set to the spectral noise structure to eliminate undesirable spectral regions (wavelength selection), and TR, PCR and PLS are evaluated. There was a significant decrease in bias at a sacrifice to variance with wavelength selection, while the parsimony essentially remains the same. This paper includes discussion on the utility of using TR to remove other undesired spectral patterns resulting from chemical, environmental and/or instrumental influences. The discussion also incorporates using TR as a method for calibration transfer. Copyright © 2006 John Wiley & Sons, Ltd. [source]
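A hedged numerical sketch of TR in general form: minimize ||Xb - y||^2 + h^2 ||Lb||^2 by solving the normal equations (X'X + h^2 L'L) b = X'y. With L = I this reduces to ridge regression (standard form); here L is a second-difference operator, which penalizes rough regression vectors, in the spirit of the derivative operators above. The data matrix is made up for illustration, and a hand-rolled dense solver keeps the sketch self-contained.

```python
# Tikhonov regularization in general form via the normal equations.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def tikhonov(X, y, L, h):
    """Solve (X'X + h^2 L'L) b = X'y; L = I gives ridge regression."""
    XtX = matmul(transpose(X), X)
    LtL = matmul(transpose(L), L)
    A = [[xij + h * h * lij for xij, lij in zip(xr, lr)]
         for xr, lr in zip(XtX, LtL)]
    Xty = [sum(xi * yi for xi, yi in zip(col, y)) for col in zip(*X)]
    return solve(A, Xty)

# Illustrative 5-sample, 3-channel calibration data:
X = [[1.0, 0.5, 0.2], [0.4, 1.0, 0.5], [0.2, 0.4, 1.0],
     [0.9, 0.6, 0.3], [0.3, 0.8, 0.6]]
y = [1.0, 1.2, 0.8, 1.1, 1.0]
L = [[1.0, -2.0, 1.0]]          # second-difference "roughness" operator
b = tikhonov(X, y, L, h=0.5)
assert len(b) == 3
```

The standardization trick in the abstract amounts to absorbing L into X and y so that an off-the-shelf ridge (or PCR/PLS) routine can be reused.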


Ligand effects upon deuterium exchange in arenes mediated by [Ir(PR3)2(cod)]+·BF4−

JOURNAL OF LABELLED COMPOUNDS AND RADIOPHARMACEUTICALS, Issue 1 2004
George J. Ellames
Abstract A series of complexes of the general form [Ir(PR3)2(cod)]+ has been prepared and used, without isolation, to mediate deuteration of a range of model substrates. The data suggest that, with many substrates, the basicity of the phosphine ligands bound to iridium is an important factor influencing substrate selectivity and the efficiency of deuteration. In addition, the spectrum of activity of iridium complexes bearing pure donor ligands differs in many cases from that of complexes where the ligands are known to be π-acids. Copyright © 2003 John Wiley & Sons, Ltd. [source]


A simulation-optimization framework for research and development pipeline management

AICHE JOURNAL, Issue 10 2001
Dharmashankar Subramanian
The Research and Development Pipeline management problem has far-reaching economic implications for new-product-development-driven industries, such as the pharmaceutical, biotechnology and agrochemical industries. Effective decision-making is required with respect to portfolio selection and project task scheduling in the face of significant uncertainty and an ever-constrained resource pool. The here-and-now stochastic optimization problem inherent to the management of an R&D pipeline is described in its most general form, as is a computing architecture, Sim-Opt, that combines mathematical programming and discrete-event system simulation to assess the uncertainty and control the risk present in the pipeline. The R&D pipeline management problem is viewed in Sim-Opt as the control problem of a performance-oriented, resource-constrained, stochastic, discrete-event, dynamic system. The concept of time lines is used to study multiple unique realizations of the controlled evolution of the discrete-event pipeline system. Four approaches of varying rigor were investigated for the optimization module in Sim-Opt, and their relative performance is explored through an industrially motivated case study. Methods are presented to efficiently integrate information across the time lines in this framework; this integration of information was used in a case study to infer a creative operational policy for the corresponding here-and-now stochastic optimization problem. [source]


Robust Automatic Bandwidth for Long Memory

JOURNAL OF TIME SERIES ANALYSIS, Issue 3 2001
Marc Henry
The choice of bandwidth, or number of harmonic frequencies, is crucial to semiparametric estimation of long memory in a covariance stationary time series, as it determines the rate of convergence of the estimate, and a suitable choice can ensure robustness to some non-standard error specifications, such as (possibly long-memory) conditional heteroscedasticity. This paper considers mean squared error minimizing bandwidths proposed in the literature for the local Whittle, the averaged periodogram and the log-periodogram estimates of long memory. Robustness of these optimal bandwidth formulae to conditional heteroscedasticity of general form in the errors is considered. Feasible approximations to the optimal bandwidths are assessed in an extensive Monte Carlo study that provides a good basis for comparison of the above-mentioned estimates with automatic bandwidth selection. [source]


Elliptic operators with general Wentzell boundary conditions, analytic semigroups and the angle concavity theorem

MATHEMATISCHE NACHRICHTEN, Issue 4 2010
Angelo Favini
Abstract We prove a very general form of the Angle Concavity Theorem, which says that if (T(t)) defines a one-parameter semigroup acting over various Lp spaces (over a fixed measure space), which is analytic in a sector of opening angle θp, then the maximal choice for θp is a concave function of 1 − 1/p. This and related results are applied to give improved estimates on the optimal Lp angle of ellipticity for a parabolic equation of the form ∂u/∂t = Au, where A is a uniformly elliptic second-order partial differential operator with Wentzell or dynamic boundary conditions. Similar results are obtained for the higher-order equation ∂u/∂t = (−1)^(m+1) A^m u, for all positive integers m. [source]


Gâteaux derivatives and their applications to approximation in Lorentz spaces Γp,w

MATHEMATISCHE NACHRICHTEN, Issue 9 2009
Maciej Ciesielski
Abstract We establish the formulas of the left- and right-hand Gâteaux derivatives in the Lorentz spaces Γp,w = {f : ∫0^γ (f**)^p w < ∞}, where 1 ≤ p < ∞, w is a nonnegative locally integrable weight function and f** is the maximal function of the decreasing rearrangement f* of a measurable function f on (0, γ), 0 < γ ≤ ∞. We also find a general form of any supporting functional for each function from Γp,w, and the necessary and sufficient conditions under which a spherical element of Γp,w is a smooth point of the unit ball in Γp,w. We show that strict convexity of the Lorentz spaces Γp,w is equivalent to 1 < p < ∞ and to the condition ∫0^γ w = ∞. Finally, we apply the obtained characterizations to study the best approximation elements for each function f ∈ Γp,w from any convex set K ⊂ Γp,w. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
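A discrete sketch of the two objects the space Γp,w is built from: for a step function on (0, 1] sampled on n equal cells, the decreasing rearrangement f* is just the absolute values sorted in decreasing order, and the maximal function is the running average f**(t) = (1/t) ∫0^t f*. The discretization on a uniform grid is an assumption for illustration.

```python
# Discrete decreasing rearrangement f* and maximal function f** for a
# step function on (0, 1] with n equal cells. Two standard facts are
# checked: f** >= f* pointwise, and f** is nonincreasing.

def rearrangement_and_maximal(values):
    n = len(values)
    fstar = sorted((abs(v) for v in values), reverse=True)
    fstarstar, acc = [], 0.0
    for k, v in enumerate(fstar, start=1):
        acc += v / n                      # integral of f* over (0, k/n]
        fstarstar.append(acc / (k / n))   # divide by t = k/n
    return fstar, fstarstar

f = [0.5, -2.0, 1.0, 0.25]
fs, fss = rearrangement_and_maximal(f)
assert fs == [2.0, 1.0, 0.5, 0.25]
assert all(a >= b - 1e-12 for a, b in zip(fss, fs))        # f** >= f*
assert all(x >= y - 1e-12 for x, y in zip(fss, fss[1:]))   # f** nonincreasing
```

The norm-like quantity defining Γp,w is then the weighted p-th power integral of f**, which is what makes f** (rather than f*) the central object of the paper.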


On a linear differential equation with a proportional delay

MATHEMATISCHE NACHRICHTEN, Issue 5-6 2007
Čermák
Abstract This paper deals with a linear delay differential equation with a proportional delay. We impose growth conditions on the coefficient c, under which we are able to give a precise description of the asymptotic properties of all solutions of this equation. Although we naturally have to distinguish the cases of c eventually positive and c eventually negative, we show a certain resemblance between the asymptotic formulae corresponding to the two cases. Moreover, using the transformation approach, we generalize these results to the equation with a general form of delay. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


On some peculiar aspects of the constructive theory of point-free spaces

MLQ- MATHEMATICAL LOGIC QUARTERLY, Issue 4 2010
Giovanni Curi
Abstract This paper presents several independence results concerning the topos-valid and the intuitionistic (generalised) predicative theory of locales. In particular, certain consequences of the consistency of a general form of Troelstra's uniformity principle with constructive set theory and type theory are examined. (© 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Toward anisotropic mesh construction and error estimation in the finite element method

NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 5 2002
Gerd Kunert
Abstract Directional, anisotropic features such as layers in the solution of partial differential equations can be resolved favorably by using anisotropic finite element meshes. An adaptive algorithm for such meshes includes two ingredients: error estimation and information extraction/mesh refinement. Related articles on a posteriori error estimation on anisotropic meshes revealed that reliable error estimation requires an anisotropic mesh that is aligned with the anisotropic solution. To obtain anisotropic meshes the so-called Hessian strategy is used, which provides information such as the stretching direction and stretching ratio of the anisotropic elements. This article combines the analysis of anisotropic information extraction/mesh refinement with error estimation (for several estimators). It shows that the Hessian strategy leads to well-aligned anisotropic meshes and, consequently, reliable error estimation. The underlying heuristic assumptions are given in a stringent yet general form. Numerical examples strengthen the exposition. Hence the analysis provides further insight into a particular aspect of anisotropic error estimation. © 2002 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 18: 625–648, 2002; DOI 10.1002/num.10023 [source]
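A minimal sketch of the Hessian strategy, under the common convention (an assumption here, not necessarily the article's exact definition) that the element is stretched along the eigenvector of the smaller-magnitude eigenvalue of the solution's Hessian (the direction of least curvature), with stretching ratio sqrt(|lambda_max| / |lambda_min|). For a 2x2 symmetric Hessian the eigen-decomposition is available in closed form.

```python
import math

# Stretching direction and ratio from a 2x2 symmetric Hessian
# [[a, b], [b, c]] of the solution.

def hessian_stretching(a, b, c):
    mean = (a + c) / 2.0
    rad = math.hypot((a - c) / 2.0, b)
    lam1, lam2 = mean + rad, mean - rad           # eigenvalues
    small, big = sorted((lam1, lam2), key=abs)
    ratio = math.sqrt(abs(big) / abs(small))      # stretching ratio
    # Eigenvector for the small-magnitude eigenvalue:
    if b != 0:
        vx, vy = b, small - a
    else:
        vx, vy = (1.0, 0.0) if abs(a) <= abs(c) else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm), ratio

# u(x, y) = x^2 + 100 y^2 has Hessian diag(2, 200): a layer-like solution
# that wants elements stretched along x with ratio sqrt(100) = 10.
direction, ratio = hessian_stretching(2.0, 0.0, 200.0)
assert abs(ratio - 10.0) < 1e-12
assert abs(abs(direction[0]) - 1.0) < 1e-12 and abs(direction[1]) < 1e-12
```

In practice the Hessian is itself recovered from the discrete solution, so the strategy remains heuristic, which is exactly the gap between heuristics and stringent assumptions the article addresses.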


Model of a superconducting singular Fermi liquid with a first-order phase transition

PHYSICA STATUS SOLIDI (B) BASIC SOLID STATE PHYSICS, Issue 2 2004
Ryszard Gonczarek
Abstract A model of s-wave and d-wave superconductivity in a singular Fermi liquid, with a divergent scattering amplitude for particles with the same quasi-momenta and opposite spins, is formulated for a narrow, nearly half-filled conduction band. The ground state and other eigenstates for the superconducting phase are found. Thermodynamic functions are obtained using the Bogolubov method. The gap equation, along with the equation for the chemical potential, is derived in a general form and solved in a self-consistent manner for s-wave pairing. Above a certain temperature there are two solutions of the gap equation; however, only for the greater one does the superconducting phase remain stable. It is shown that the system undergoes a first-order phase transition between the superconducting and the normal phase. The critical temperature and the heat of the transition are found. The temperature dependence of the entropy and the specific heat of the system is also presented. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Modelling of reserve carbohydrate dynamics, regrowth and nodulation in a N2-fixing tree managed by periodic prunings

PLANT CELL & ENVIRONMENT, Issue 10 2000
F. Berninger
ABSTRACT We used a modified transport-resistance approach to model legume tree growth, nodulation and the dynamics of reserve carbohydrates after pruning. The model distributes growth between roots and shoots using the transport-resistance approach; within shoots, growth is divided among leaves, branches and stems following the pipe model theory. The model also accounts mechanistically for the metabolic differences between the principal N sources: nitrate, ammonium and atmospheric dinitrogen. We compared the simulation results with measured biomass dynamics of Gliricidia sepium (Jacq.) Walp. (Papilionaceae: Robinieae) under humid and subhumid tropical conditions. The comparison showed that the biomass production predicted by the model is close to measured values, and total N2 fixation is also similar to measured values. Qualitatively, the model increases the proportion of N2 fixation when roots acquire less mineral N. The general form of the model is discussed and compared with similar models. The results encourage the use of this approach for studying the biomass dynamics of legume trees under periodic pruning, and show that process-based models have potential for simulating trees disturbed by pruning, herbivory or similar factors. [source]


A comparative analysis of a modified picture frame test for characterization of woven fabrics

POLYMER COMPOSITES, Issue 4 2010
A.S. Milani
An experimental/finite-element analysis framework is utilized to estimate the deformation state in a modified version of the picture frame test. In the analysis, the effects of fiber misalignment and deformation heterogeneity in the tested fabric, a 2 × 2 PP/E-Glass twill, are accounted for, and a force prediction model is presented. Using an equivalent stress–strain normalization scheme, the modified test is also compared with the conventional (original) picture frame and bias-extension tests; the results reveal similarities and differences that should receive attention when identifying constitutive models of woven fabrics from these basic tests. Ideally, the trellising behavior should not change from one test to another, but the results show that in the presence of fiber misalignment the modified picture frame test yields a behavior closer to that of the bias-extension test, while the general form of the test's repeatability, measured by a signal-to-noise metric, remains similar to that of the original picture frame test. POLYM. COMPOS., 2010. © 2009 Society of Plastics Engineers [source]


Unusual binding interactions in PDZ domain crystal structures help explain binding mechanisms

PROTEIN SCIENCE, Issue 4 2010
Jonathan M. Elkins
Abstract PDZ domains most commonly bind the C-terminus of their protein targets. Typically, the C-terminal four residues of the target are considered the binding motif, particularly the C-terminal residue (P0) and the third-last residue (P-2), which form the major contacts with the PDZ domain's "binding groove". We solved crystal structures of seven human PDZ domains, including five of the seven PDLIM family members. The structures of GRASP, PDLIM2, PDLIM5, and PDLIM7 show a binding mode with only the C-terminal P0 residue bound in the binding groove. Importantly, in some cases the P-2 residue formed interactions outside the binding groove, providing insight into the influence of residues remote from the binding groove on selectivity. In the GRASP structure, we observed both canonical and noncanonical binding in the two molecules present in the asymmetric unit, making a direct comparison of these binding modes possible. In addition, the structures of the PDZ domains from PDLIM1 and PDLIM4, also presented here, allow comparison with canonical binding across the PDLIM PDZ domain family. Although influenced by crystal packing arrangements, the structures nevertheless show that changes in the positions of PDZ domain side chains and the αB helix allow noncanonical binding interactions. These interactions may be indicative of intermediate states between the unbound and fully bound PDZ domain and target protein. The noncanonical "perpendicular" binding observed potentially represents the general form of a kinetic intermediate. Comparison with canonical binding suggests that the rearrangement during binding involves both the PDZ domain and its ligand. [source]


A General Empirical Law of Public Budgets: A Comparative Analysis

AMERICAN JOURNAL OF POLITICAL SCIENCE, Issue 4 2009
Bryan D. Jones
We examine regularities and differences in public budgeting in comparative perspective. Budgets quantify collective political decisions made in response to incoming information, the preferences of decision makers, and the institutions that structure how decisions are made. We first establish that the distribution of budget changes in many Western democracies follows a non-Gaussian distribution, the power function. This implies that budgets are highly incremental yet occasionally punctuated by large changes. This pattern holds regardless of the type of political system (parliamentary or presidential) and across levels of government. By studying the power function's exponents, we find systematic differences between budgetary increases and decreases (the former are more punctuated) in most systems, and between levels of government (local governments are less punctuated). Finally, we show that differences among countries in the coefficients of the general budget law correspond to differences in formal institutional structures. While the general form of the law is probably dictated by the fundamental operations of human and organizational information processing, differences in the magnitudes of the law's basic parameters are country- and institution-specific. [source]
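The heavy-tailed pattern the abstract describes (mostly small, incremental changes punctuated by occasional large ones) can be illustrated with a minimal simulation. The sketch below is not the authors' model or data: it simply draws symmetric Pareto-tailed "budget changes" (the tail exponent and sample size are arbitrary assumptions) and shows that their excess kurtosis far exceeds the near-zero value of a Gaussian, which is the statistical signature of punctuated incrementalism.

```python
import random
import statistics

def simulate_budget_changes(n, alpha=2.5, seed=42):
    """Draw n symmetric heavy-tailed (Pareto-type, power-function)
    budget changes; alpha is the assumed tail exponent."""
    rng = random.Random(seed)
    changes = []
    for _ in range(n):
        # Inverse-CDF sample from a Pareto tail, shifted to start at 0.
        magnitude = (1.0 - rng.random()) ** (-1.0 / alpha) - 1.0
        sign = 1.0 if rng.random() < 0.5 else -1.0
        changes.append(sign * magnitude)
    return changes

def excess_kurtosis(xs):
    """Sample excess kurtosis: ~0 for a Gaussian, large for fat tails."""
    m = statistics.fmean(xs)
    var = statistics.fmean([(x - m) ** 2 for x in xs])
    m4 = statistics.fmean([(x - m) ** 4 for x in xs])
    return m4 / var ** 2 - 3.0

power_tailed = simulate_budget_changes(50_000)
rng = random.Random(7)
gaussian = [rng.gauss(0.0, 1.0) for _ in range(50_000)]
```

With these assumed parameters the power-tailed sample's kurtosis is far above the Gaussian baseline, mirroring the leptokurtic budget-change histograms the article reports.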


A data assimilation method for log-normally distributed observational errors

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 621 2006
S. J. Fletcher
Abstract In this paper we change the standard assumption made in the Bayesian framework of variational data assimilation to allow for observational errors that are log-normally distributed. We address the question of which statistic best describes the distribution, in both the univariate and multivariate cases, to justify our choice of the mode. From this choice we derive the associated cost function, Jacobian and Hessian with a normal background. We also find the solution for which the Jacobian vanishes, in both model and observational space. Given the derived Hessian, we define a preconditioner to aid the minimization of the cost function, and extend this to a general form of the preconditioner for a certain class of cost functions. Copyright © 2006 Royal Meteorological Society [source]
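As background to the choice of statistic discussed in the abstract, the three standard location statistics of a univariate log-normal variable all differ; the formulas below are textbook material, not reproduced from the paper. For $X \sim \mathrm{Lognormal}(\mu, \sigma^2)$:

```latex
% Density and location statistics of X ~ Lognormal(mu, sigma^2)
f(x) = \frac{1}{x\sigma\sqrt{2\pi}}
       \exp\!\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right), \quad x > 0,
\qquad
\operatorname{mode}(X) = e^{\mu - \sigma^2}, \quad
\operatorname{median}(X) = e^{\mu}, \quad
\mathbb{E}[X] = e^{\mu + \sigma^2/2}.
```

Only the mode maximizes the density, so adopting it keeps the variational analysis a maximum a posteriori estimate: the minimizer of the negative log-density coincides with the mode, not with the mean or median, once $\sigma^2 > 0$.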


Diagnostic and prognostic equations for the depth of the stably stratified Ekman boundary layer

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 579 2002
Sergej Zilitinkevich
Abstract Refined diagnostic and prognostic equations for the depth of the stably stratified barotropic Ekman boundary layer (SBL) are derived employing a recently developed non-local formulation for the eddy viscosity. In the well-studied cases of the thoroughly neutral SBL, the nocturnal atmospheric SBL, and the oceanic SBL dominantly affected by the static stability in the thermocline, the proposed diagnostic equation reduces to the Rossby-Montgomery, Zilitinkevich, and Pollard-Rhines-Thompson equations, respectively. In its general form it is applicable to a range of regimes, including long-lived atmospheric SBLs affected by both the near-surface buoyancy flux and the static stability in the free atmosphere. Both the diagnostic and prognostic SBL depth equations are validated against recent data from atmospheric measurements. Copyright © 2002 Royal Meteorological Society. [source]
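For reference, the three classical limiting depth scales named in the abstract take the following well-known forms. The coefficients $C$ and the notation are standard background, not reproduced from the paper: $u_*$ is the friction velocity, $f$ the Coriolis parameter, $L$ the Obukhov length, and $N$ the free-flow Brunt-Väisälä frequency.

```latex
% Classical SBL depth scales (dimensional limiting forms)
h_{\mathrm{RM}}  = C_{\mathrm{R}}\,\frac{u_*}{|f|}
  \quad \text{(neutral: Rossby-Montgomery)},
\qquad
h_{\mathrm{Z}}   = C_{\mathrm{S}} \left( \frac{u_* L}{|f|} \right)^{1/2}
  \quad \text{(nocturnal: Zilitinkevich)},
\qquad
h_{\mathrm{PRT}} = C_{\mathrm{i}}\,\frac{u_*}{(N\,|f|)^{1/2}}
  \quad \text{(free-flow stratification: Pollard-Rhines-Thompson)}.
```

A general SBL depth equation of the kind the abstract describes interpolates between these limits as the surface buoyancy flux (via $L$) and the free-atmosphere stability (via $N$) vary.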