Accurate Representation (accurate + representation)

Selected Abstracts


An attenuation model for distant earthquakes

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 2 2004
Adrian Chandler
Abstract Large magnitude earthquakes generated at source–site distances exceeding 100 km are typified by low-frequency (long-period) seismic waves. Such induced ground shaking can be disproportionately destructive due to its high displacement, and possibly high velocity, shaking characteristics. Distant earthquakes represent a potentially significant safety hazard in certain low and moderate seismic regions where seismic activity is governed by major distant sources as opposed to nearby (regional) background sources. Examples are parts of the Indian sub-continent, Eastern China and Indo-China. The majority of ground motion attenuation relationships currently available for applications in active seismic regions may not be suitable for handling long-distance attenuation, since the significance of distant earthquakes is mainly confined to certain low to moderate seismicity regions. Thus, the effects of distant earthquakes are often not accurately represented by conventional empirical models, which were typically developed by curve-fitting earthquake strong-motion data from active seismic regions. Numerous well-known existing attenuation relationships are evaluated in this paper, to highlight their limitations in long-distance applications. In contrast, basic seismological parameters such as the Quality factor (Q-factor) could provide a far more accurate representation of the distant attenuation behaviour of a region, but such information is seldom used by engineers in any direct manner. The aim of this paper is to develop a set of relationships that provide a convenient link between the seismological Q-factor (amongst other factors) and response spectrum attenuation. The use of Q as an input parameter to the proposed model enables valuable local seismological information to be incorporated directly into response spectrum predictions. The application of this new modelling approach is demonstrated by examples based on the Chi-Chi earthquake (Taiwan and South China), Gujarat earthquake (Northwest India), Nisqually earthquake (region surrounding Seattle) and Sumatran-fault earthquake (recorded in Singapore). Field recordings have been obtained from these events for comparison with the proposed model. The accuracy of the stochastic simulations and the regression analysis has been confirmed by comparisons between the model calculations and the actual field observations. It is emphasized that obtaining representative estimates of Q for input into the model is equally important. Thus, this paper forms part of the long-term objective of the authors to develop more effective communications across the engineering and seismological disciplines. Copyright © 2003 John Wiley & Sons, Ltd. [source]
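
A minimal sketch of the whole-path anelastic attenuation term that such Q-based response-spectrum models build on, assuming the common seismological form Q(f) = Q0·f^eta; the parameter values are illustrative, not the paper's calibrated relationships:

```python
import numpy as np

def anelastic_attenuation(f, R_km, Q0=200.0, eta=0.5, beta_km_s=3.5):
    """Whole-path anelastic decay exp(-pi f R / (Q(f) beta)) for a
    frequency-dependent quality factor Q(f) = Q0 * f**eta."""
    Q = Q0 * f**eta
    return np.exp(-np.pi * f * R_km / (Q * beta_km_s))

def fourier_amplitude(f, R_km, **q_params):
    """Toy point-source spectrum: 1/R geometric spreading times the
    anelastic decay; source and site terms are omitted for brevity."""
    return (1.0 / R_km) * anelastic_attenuation(f, R_km, **q_params)

f = np.array([0.2, 1.0, 5.0])      # Hz: long- to short-period waves
for R in (50.0, 200.0, 500.0):     # source-site distance in km
    print(f"R = {R:5.0f} km:", fourier_amplitude(f, R))
```

At 500 km the 5 Hz amplitude is attenuated far more strongly than the 0.2 Hz amplitude, which is why distant large events arrive dominated by long-period motion.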


From Representation to Emergence: Complexity's challenge to the epistemology of schooling

EDUCATIONAL PHILOSOPHY AND THEORY, Issue 1 2008
Deborah Osberg
Abstract In modern, Western societies the purpose of schooling is to ensure that school-goers acquire knowledge of pre-existing practices, events, entities and so on. The knowledge that is learned is then tested to see if the learner has acquired a correct or adequate understanding of it. For this reason, it can be argued that schooling is organised around a representational epistemology: one which holds that knowledge is an accurate representation of something that is separate from knowledge itself. Since the object of knowledge is assumed to exist separately from the knowledge itself, this epistemology can also be considered 'spatial'. In this paper we show how ideas from complexity have challenged the 'spatial epistemology' of representation and we explore possibilities for an alternative 'temporal' understanding of knowledge in its relationship to reality. In addition to complexity, our alternative takes its inspiration from Deweyan 'transactional realism' and deconstruction. We suggest that 'knowledge' and 'reality' should not be understood as separate systems which somehow have to be brought into alignment with each other, but that they are part of the same emerging complex system which is never fully 'present' in any (discrete) moment in time. This not only introduces the notion of time into our understanding of the relationship between knowledge and reality, but also points to the importance of acknowledging the role of the 'unrepresentable' or 'incalculable'. With this understanding knowledge reaches us not as something we receive but as a response, which brings forth new worlds because it necessarily adds something (which was not present anywhere before it appeared) to what came before. This understanding of knowledge suggests that the acquisition of curricular content should not be considered an end in itself. Rather, curricular content should be used to bring forth that which is incalculable from the perspective of the present. The epistemology of emergence therefore calls for a switch in focus for curricular thinking, away from questions about presentation and representation and towards questions about engagement and response. [source]


Geobiological analysis using whole genome-based tree building applied to the Bacteria, Archaea, and Eukarya

GEOBIOLOGY, Issue 1 2003
Christopher H. House
ABSTRACT We constructed genomic trees based on the presence and absence of families of protein-encoding genes observed in 55 prokaryotic and five eukaryotic genomes. There are features of the genomic trees that are not congruent with typical rRNA phylogenetic trees. In the bacteria, for example, Deinococcus radiodurans associates with the Gram-positive bacteria, a result that is also seen in some other phylogenetic studies using whole genome data. In the Archaea, the methanogens plus Archaeoglobus form a united clade and the Euryarchaeota are divided, with the two Thermoplasma genomes and Halobacterium sp. falling below the Crenarchaeota. While the former appears to be an accurate representation of methanogen-relatedness, the misplacement of Halobacterium may be an artefact of parsimony. These results imply the last common ancestor of the Archaea was not a methanogen, leaving sulphur reduction as the most geochemically plausible metabolism for the base of the archaeal crown group. It also suggests that methanogens were not a component of the Earth's earliest biosphere and that their origin occurred sometime during the Archean. In the Eukarya, the parsimony analysis of five Eukaryotes using the Crenarchaeota as an outgroup seems to counter the Ecdysozoa hypothesis, placing Caenorhabditis elegans (Nematoda) below the common ancestor of Drosophila melanogaster (Arthropoda) and Homo sapiens (Chordata) even when efforts are made to counter the possible effects of a faster rate of sequence evolution for the C. elegans genome. Further analysis, however, suggests that the loss of 'animal' genes is highest in C. elegans and is obscuring the relationships of these organisms. [source]
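
The genomic-tree idea can be illustrated in a few lines of SciPy: encode each genome as a presence/absence vector over gene families and cluster on a distance that ignores shared absences. This is a distance-based stand-in for the parsimony analysis the paper actually uses, and the matrix below is invented:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

# Rows: genomes; columns: gene families (1 = present, 0 = absent).
# Real studies use thousands of families, not eight.
taxa = ["Deinococcus", "Bacillus", "E_coli", "Methanococcus", "Archaeoglobus"]
X = np.array([
    [1, 1, 0, 0, 1, 0, 1, 1],
    [1, 1, 0, 0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 0, 1, 1],
    [0, 0, 1, 1, 1, 0, 1, 1],
])

# Jaccard distance counts only families present in at least one genome,
# so the many shared absences in sparse matrices do not inflate similarity.
d = pdist(X, metric="jaccard")
tree = linkage(d, method="average")
print(dendrogram(tree, labels=taxa, no_plot=True)["ivl"])  # leaf order
```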


Basis functions for the consistent and accurate representation of surface mass loading

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2007
Peter J. Clarke
SUMMARY Inversion of geodetic site displacement data to infer surface mass loads has previously been demonstrated using a spherical harmonic representation of the load. This method suffers from the continent-rich, ocean-poor distribution of geodetic data, coupled with the predominance of the continental load (water storage and atmospheric pressure) compared with the ocean bottom pressure (including the inverse barometer response). Finer-scale inversion becomes unstable due to the rapidly increasing number of parameters which are poorly constrained by the data geometry. Several approaches have previously been tried to mitigate this, including the adoption of constraints over the oceanic domain derived from ocean circulation models, the use of smoothness constraints for the oceanic load, and the incorporation of GRACE gravity field data. However, these methods do not provide appropriate treatment of mass conservation and of the ocean's equilibrium-tide response to the total gravitational field. Instead, we propose a modified set of basis functions as an alternative to standard spherical harmonics. Our basis functions allow variability of the load over continental regions, but impose global mass conservation and equilibrium tidal behaviour of the oceans. We test our basis functions first for the efficiency of fitting to realistic modelled surface loads, and then for accuracy of the estimates of the inferred load compared with the known model load, using synthetic geodetic displacements with real GPS network geometry. Compared to standard spherical harmonics, our basis functions yield a better fit to the model loads over the period 1997–2005, for an equivalent number of parameters, and provide a more accurate and stable fit using the synthetic geodetic displacements. In particular, recovery of the low-degree coefficients is greatly improved. Using a nine-parameter fit we are able to model 58 per cent of the variance in the synthetic degree-1 zonal coefficient time-series, 38–41 per cent of the degree-1 non-zonal coefficients, and 80 per cent of the degree-2 zonal coefficient. An equivalent spherical harmonic estimate truncated at degree 2 is able to model the degree-1 zonal coefficient similarly (56 per cent of variance), but only models 59 per cent of the degree-2 zonal coefficient variance and is unable to model the degree-1 non-zonal coefficients. [source]
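
The core numerical device the abstract describes, fitting load coefficients to displacements while forcing global constraints such as mass conservation, reduces to equality-constrained least squares. A sketch using the KKT system, with placeholder matrices standing in for the real basis functions and geodetic data:

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Minimize ||A x - b||^2 subject to C x = d by solving the KKT
    system [[2 A^T A, C^T], [C, 0]] [x; lam] = [2 A^T b; d]."""
    n, m = A.shape[1], C.shape[0]
    K = np.zeros((n + m, n + m))
    K[:n, :n] = 2.0 * A.T @ A
    K[:n, n:] = C.T
    K[n:, :n] = C
    rhs = np.concatenate([2.0 * A.T @ b, d])
    return np.linalg.solve(K, rhs)[:n]

rng = np.random.default_rng(0)
A = rng.normal(size=(120, 6))  # basis functions at GPS sites (placeholder)
b = rng.normal(size=120)       # observed displacements (placeholder)
C = np.ones((1, 6))            # toy "total mass change = 0" constraint
x = constrained_lstsq(A, b, C, np.zeros(1))
print(x.round(3), float(C @ x))  # constraint holds to machine precision
```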


Experimental evidence for the attenuating effect of SOM protection on temperature sensitivity of SOM decomposition

GLOBAL CHANGE BIOLOGY, Issue 10 2010
JEROEN GILLABEL
Abstract The ability to predict C cycle responses to temperature changes depends on the accurate representation of temperature sensitivity (Q10) of soil organic matter (SOM) decomposition in C models for different C pools and soil depths. Theoretically, Q10 of SOM decomposition is determined by SOM quality and availability (referred to here as SOM protection). Here, we focus on the role of SOM protection in attenuating the intrinsic, SOM quality dependent Q10. To assess the separate effects of SOM quality and protection, we incubated topsoil and subsoil samples characterized by differences in SOM protection under optimum moisture conditions at 25 °C and 35 °C. Although lower SOM quality in the subsoil should lead to a higher Q10 according to kinetic theory, we observed a much lower overall temperature response in subsoil compared with the topsoil. Q10 values determined for respired SOM fractions of decreasing lability within the topsoil increased from 1.9 for the most labile to 3.8 for the least labile respired SOM, whereas corresponding Q10 values for the subsoil did not show this trend (Q10 between 1.4 and 0.9). These results indicate the existence of a limiting factor that attenuates the intrinsic effect of SOM quality on Q10 in the subsoil. A parallel incubation experiment of 13C-labeled plant material added to top- and subsoil showed that decomposition of an unprotected C substrate of equal quality responds similarly to temperature changes in top- and subsoil. This further confirms that the attenuating effect on Q10 in the subsoil originates from SOM protection rather than from microbial properties or other nutrient limitations. In conclusion, we found experimental evidence that SOM protection can attenuate the intrinsic Q10 of SOM decomposition. [source]
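
For reference, the Q10 values quoted above follow from the standard two-temperature definition; with incubations at 25 °C and 35 °C the exponent is exactly 1, so Q10 is simply the ratio of the two respiration rates:

```python
def q10(rate_t1, rate_t2, t1=25.0, t2=35.0):
    """Temperature sensitivity Q10 = (R2 / R1) ** (10 / (T2 - T1))."""
    return (rate_t2 / rate_t1) ** (10.0 / (t2 - t1))

# Illustrative respiration rates (units cancel in the ratio):
print(q10(1.0, 1.9))  # most labile topsoil fraction  -> Q10 = 1.9
print(q10(1.0, 3.8))  # least labile topsoil fraction -> Q10 = 3.8
print(q10(1.0, 1.1))  # protected subsoil             -> Q10 near 1
```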


Stream network modelling using lidar and photogrammetric digital elevation models: a comparison and field verification

HYDROLOGICAL PROCESSES, Issue 12 2008
Paul N. C. Murphy
Abstract A conventional, photogrammetrically derived digital elevation model (DEM; 10 m resolution) and a light detection and ranging (lidar)-derived DEM (1 m resolution) were used to model the stream network of a 193 ha watershed in the Swan Hills of Alberta, Canada. Stream networks, modelled using both hydrologically corrected and uncorrected versions of the DEMs and derived from aerial photographs, were compared. The actual network, mapped in the field, was used as verification. The lidar DEM-derived network was the most accurate representation of the field-mapped network, being more accurate even than the photo-derived network. This was likely due to the greater initial point density, accuracy and resolution of the lidar DEM compared with the conventional DEM. Lidar DEMs have great potential for application in land-use planning and management and hydrologic modelling. The network derived from the hydrologically corrected conventional DEM was more accurate than that derived from the uncorrected one, but this was not the case with the lidar DEM. Copyright © 2007 John Wiley & Sons, Ltd. [source]
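
Stream networks are typically extracted from a DEM by D8 flow routing: each cell drains to its steepest downslope neighbour, and cells whose accumulated area exceeds a threshold are flagged as channels. A minimal sketch follows; real workflows also fill pits first, which is the "hydrological correction" referred to above:

```python
import numpy as np

def d8_flow_accumulation(dem):
    """Route each cell to its steepest downslope neighbour, then
    accumulate contributing area by visiting cells from high to low."""
    ny, nx = dem.shape
    nbrs = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]
    target = -np.ones((ny, nx, 2), dtype=int)   # -1 = no downslope neighbour
    for i in range(ny):
        for j in range(nx):
            best = 0.0
            for di, dj in nbrs:
                ii, jj = i + di, j + dj
                if 0 <= ii < ny and 0 <= jj < nx:
                    drop = (dem[i, j] - dem[ii, jj]) / np.hypot(di, dj)
                    if drop > best:
                        best, target[i, j] = drop, (ii, jj)
    acc = np.ones((ny, nx))                     # each cell contributes itself
    for k in np.argsort(dem, axis=None)[::-1]:  # high ground first
        i, j = divmod(k, nx)
        ii, jj = target[i, j]
        if ii >= 0:
            acc[ii, jj] += acc[i, j]
    return acc

dem = np.array([[5., 4., 3.],
                [4., 3., 2.],
                [3., 2., 1.]])
print(d8_flow_accumulation(dem))  # accumulation peaks at the outlet corner
```

On a 1 m lidar DEM the same routine (suitably vectorized) resolves channel heads that a 10 m photogrammetric DEM smooths away, which is consistent with the field verification reported above.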


The moment-of-fluid method in action

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 10 2009
Hyung Taek Ahn
Abstract The moment-of-fluid (MOF) method is a new volume-tracking method that accurately treats evolving material interfaces. The MOF method uses moment data, namely the material volume fraction, as well as the centroid, for a more accurate representation of the material configuration, interfaces and concomitant volume advection. In contrast, the volume-of-fluid method uses only volume fraction data for interface reconstruction and advection. Based on the moment data for each material, the material interfaces are reconstructed with second-order spatial accuracy in a strictly conservative manner. The MOF method is coupled with a stabilized finite element incompressible Navier–Stokes solver for two materials. The effectiveness of the MOF method is demonstrated with a free-surface dam-break and a two-material Rayleigh–Taylor problem. Copyright © 2008 John Wiley & Sons, Ltd. [source]
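
The essential difference between VOF and MOF reconstruction can be shown with a brute-force toy: among all linear interfaces that exactly reproduce a cell's volume fraction, MOF selects the one whose material centroid best matches the stored centroid. Production MOF codes do this with analytic geometry and a fast optimizer; this sketch just samples points in a unit cell:

```python
import numpy as np

def mof_reconstruct(frac_ref, centroid_ref, n_angles=180, n_samp=64):
    """For each candidate interface normal, pick the line offset that
    matches the volume fraction, then keep the normal that minimizes
    the centroid mismatch (the defining MOF criterion)."""
    xs = (np.arange(n_samp) + 0.5) / n_samp
    X, Y = np.meshgrid(xs, xs)
    pts = np.column_stack([X.ravel(), Y.ravel()])
    best = (None, None, np.inf)
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        n = np.array([np.cos(theta), np.sin(theta)])
        s = pts @ n
        d = np.quantile(s, frac_ref)  # {x : n.x <= d} holds frac_ref of the cell
        c = pts[s <= d].mean(axis=0)  # material centroid for this candidate
        err = np.linalg.norm(c - centroid_ref)
        if err < best[2]:
            best = (n, d, err)
    return best

n, d, err = mof_reconstruct(0.5, np.array([0.25, 0.5]))
print(n.round(3), round(d, 3), round(err, 4))  # ~(1, 0): left half of cell
```

A VOF method given only the fraction 0.5 cannot distinguish the left half of the cell from the bottom half; the centroid removes that ambiguity without looking at neighbouring cells.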


On the computation of steady-state compressible flows using a discontinuous Galerkin method

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 5 2008
Hong Luo
Abstract Computation of compressible steady-state flows using a high-order discontinuous Galerkin finite element method is presented in this paper. An accurate representation of the boundary normals based on the definition of the geometries is used for imposing solid wall boundary conditions for curved geometries. Particular attention is given to the impact and importance of slope limiters on the solution accuracy for flows with strong discontinuities. A physics-based shock detector is introduced to effectively make a distinction between a smooth extremum and a shock wave. A recently developed, fast, low-storage p-multigrid method is used for solving the governing compressible Euler equations to obtain steady-state solutions. The method is applied to compute a variety of compressible flow problems on unstructured grids. Numerical experiments for a wide range of flow conditions in both 2D and 3D configurations are presented to demonstrate the accuracy of the developed discontinuous Galerkin method for computing compressible steady-state flows. Copyright © 2007 John Wiley & Sons, Ltd. [source]
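
For context, the kind of limiter at issue is sketched below: a classic minmod slope limiter for piecewise-linear data. It suppresses oscillations at a jump but also flattens smooth extrema, which is precisely the behaviour a physics-based shock detector is meant to avoid (this is a generic limiter, not the paper's exact formulation):

```python
import numpy as np

def minmod(a, b, c):
    """Smallest-magnitude argument if all three share a sign, else 0."""
    s = np.sign(a)
    if s == np.sign(b) == np.sign(c):
        return s * min(abs(a), abs(b), abs(c))
    return 0.0

def limit_slopes(u, h):
    """Limit the cell slopes of a 1-D piecewise-linear reconstruction."""
    slopes = np.zeros_like(u)
    for i in range(1, len(u) - 1):
        central = (u[i + 1] - u[i - 1]) / (2 * h)
        slopes[i] = minmod(central, (u[i] - u[i - 1]) / h,
                           (u[i + 1] - u[i]) / h)
    return slopes

u_jump = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
print(limit_slopes(u_jump, 1.0))          # slopes clipped around the shock
u_smooth = np.sin(np.linspace(0, np.pi, 7))
print(limit_slopes(u_smooth, np.pi / 6))  # the peak is flattened too: the
                                          # accuracy loss a detector prevents
```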


Re-estimating the prevalence of psychiatric disorders in a nationally representative sample of persons receiving care for HIV: results from the HIV cost and services utilization study

INTERNATIONAL JOURNAL OF METHODS IN PSYCHIATRIC RESEARCH, Issue 2 2002
Maria Orlando PhD
Abstract The objective of this study was to obtain accurate estimates of the prevalence of psychiatric disorder in the population represented by the HIV Costs and Services Utilization Study (HCSUS) cohort. We constructed logistic regression models to predict DSM-IV diagnoses of depression, generalized anxiety disorder, panic, and dysthymia among a subsample of the HCSUS cohort who in separate interviews completed the CIDI-SF and the full CIDI diagnostic interview. Diagnoses were predicted using responses to the CIDI-SF as well as other variables contained in the baseline and first follow-up interviews. The resulting regression equations were applied to the entire baseline and first follow-up samples to obtain new estimates of the prevalence of disorder. Compared to estimates based on the CIDI-SF alone, estimates obtained from this procedure provide a more accurate representation of the prevalence of any one of these four psychiatric disorders in this population, yielding more correct classifications and a lower false-positive rate. Prevalence rates reported in this study are as much as 16% lower than rates estimated using the CIDI-SF alone, but are still considerably higher than estimates for the general community population. Copyright © 2002 Whurr Publishers Ltd. [source]
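
The two-step logic, calibrating a logistic model on the subsample with gold-standard diagnoses and then averaging predicted probabilities over the full cohort, can be sketched as follows (all data synthetic; variable names invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Calibration subsample: screener responses plus covariates (X) and the
# full diagnostic interview outcome (y) -- all simulated here.
X_cal = rng.normal(size=(400, 3))
y_cal = rng.binomial(1, 1 / (1 + np.exp(-(X_cal @ [1.5, 0.5, -0.3] - 1.0))))
model = LogisticRegression().fit(X_cal, y_cal)

# Full cohort: only the predictors are available.
X_full = rng.normal(size=(2800, 3))
prevalence = model.predict_proba(X_full)[:, 1].mean()
screener_rate = (X_full[:, 0] > 0.5).mean()   # crude screener cut-off

print(f"model-based prevalence estimate: {prevalence:.3f}")
print(f"raw screener positive rate:      {screener_rate:.3f}")
```

Averaging predicted probabilities rather than counting screener positives is what pulls the estimate down when the screener has a high false-positive rate, mirroring the reduction of up to 16% reported above.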


A new approach for numerical simulation of quantum transport in double-gate SOI

INTERNATIONAL JOURNAL OF NUMERICAL MODELLING: ELECTRONIC NETWORKS, DEVICES AND FIELDS, Issue 6 2007
Tarek M. Abdolkader
Abstract Numerical simulation of nanoscale double-gate SOI (Silicon-on-Insulator) devices depends greatly on the accurate representation of quantum mechanical effects. These effects include, mainly, the quantum confinement of carriers by gate oxides in the direction normal to the interfaces, and the quantum transport of carriers along the channel. In a previous work, the use of the transfer matrix method (TMM) was proposed for the simulation of the first effect. In this work, TMM is proposed for the solution of the Schrödinger equation with open boundary conditions to simulate the second quantum-mechanical effect. Transport properties such as transmission probability, carrier concentration, and I–V characteristics resulting from quantum transport simulation using TMM are compared with those using the traditional tight-binding model (TBM). Comparison showed that, when the same mesh size is used in both methods, TMM gives more accurate results than TBM. Copyright © 2007 John Wiley & Sons, Ltd. [source]
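
A textbook transfer-matrix calculation for a 1-D piecewise-constant potential with open (plane-wave) boundary conditions conveys the method's flavour; the barrier values below are illustrative, not a calibrated double-gate SOI model:

```python
import numpy as np

HBAR2_2M = 3.81  # hbar^2 / (2 m0) in eV * Angstrom^2 (free-electron mass)

def interface(k1, k2, x):
    """Match psi and psi' at position x between regions with wave
    numbers k1 (left) and k2 (right); maps (A, B) left -> right."""
    kp, km = k1 + k2, k1 - k2
    r = k1 / k2
    return 0.5 * np.array(
        [[(1 + r) * np.exp(1j * km * x), (1 - r) * np.exp(-1j * kp * x)],
         [(1 - r) * np.exp(1j * kp * x), (1 + r) * np.exp(-1j * km * x)]])

def transmission(E, widths, potentials):
    """Transmission through piecewise-constant potentials; leads at V = 0."""
    k = lambda V: np.sqrt((E - V) / HBAR2_2M + 0j)  # imaginary if E < V
    M = np.eye(2, dtype=complex)
    k_prev, x = k(0.0), 0.0
    for w, V in zip(widths, potentials):
        M = interface(k_prev, k(V), x) @ M
        k_prev, x = k(V), x + w
    M = interface(k_prev, k(0.0), x) @ M   # back into the right lead
    return abs(1.0 / M[1, 1]) ** 2         # det(M) = 1 for equal leads

# A 3 eV, 10 Angstrom barrier: tunnelling below, near-unity transmission above.
for E in (0.5, 1.5, 2.5, 4.0):
    print(E, "eV ->", round(transmission(E, [10.0], [3.0]), 4))
```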


Convergence radii of the polarization expansion of intermolecular potentials

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 15 2009
William H. Adams
Abstract A new method is presented to evaluate convergence radii of the polarization expansion of interaction energies for pairs of atoms or molecules. The method is based on an analysis of the variation of the perturbed state vector as a function of the coupling constant λ and does not require a calculation of perturbation corrections to high order. The convergence radii at infinite interatomic/intermolecular distances R, as well as a remarkably accurate representation of the R dependence of the convergence radii, are obtained from simple calculations involving only monomer wave functions. For the interaction of the lithium and hydrogen atoms, the obtained convergence radii agree well with those obtained previously from the large-order calculations of Patkowski et al. (Patkowski et al., J Chem Phys, 2002, 117, 5124), but are expected to be considerably more accurate. Rigorous upper bounds and reasonable approximations to the convergence radii at R = ∞ are obtained for the pairs of lithium, beryllium, boron, neon, and sodium atoms, as well as for the dimer consisting of two LiH molecules. For all the systems studied, the convergence radii are significantly smaller than unity and decrease rapidly with increasing nuclear charge. It is hoped that the results of this investigation will help to analyze and eventually to compute the convergence radii of the symmetry-adapted perturbation theories which utilize the same partitioning of the Hamiltonian as the polarization expansion. © 2009 Wiley Periodicals, Inc. Int J Quantum Chem, 2009 [source]


Effects of long-term implanted data loggers on macaroni penguins Eudyptes chrysolophus

JOURNAL OF AVIAN BIOLOGY, Issue 4 2004
Jonathan A. Green
We tested the hypothesis that implanted data loggers have no effect on the survival, breeding success and behaviour of macaroni penguins Eudyptes chrysolophus. Seventy penguins were implanted with heart rate data loggers (DLs) for periods of up to 15 months. When compared to control groups, implanted penguins showed no significant difference in over-wintering survival rates, arrival date and mass at the beginning of the breeding season. Later in the breeding season, implanted penguins showed no significant difference in the duration of their incubation foraging trip, breeding success, fledging mass of their chicks, date of arrival to moult and mass at the beginning of the moult fast. We conclude that implanted devices had no effects on the behaviour, breeding success and survival of this species. We contrast these results to those from studies using externally attached devices, which commonly affect the behaviour of penguins. We suggest that implanted devices should be considered as an alternative to externally attached devices in order to obtain the most accurate representation of the free-ranging behaviour, ecology and physiology of penguins. [source]


Advancing beyond charge analysis using the electronic localization function: Chemically intuitive distribution of electrostatic moments

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 9 2008
Julien Pilmé
Abstract We propose here an evaluation of chemically intuitive distributed electrostatic moments using the topological analysis of the electron localization function (ELF). As this partition of the total charge density provides an accurate representation of the molecular dipole, the distributed electrostatic moments based on the ELF partition (DEMEP) allow the computation of local moments located at non-atomic centers such as lone pairs, σ bonds and π systems. As the local dipole contribution can be decomposed into polarization and charge transfer components, our results indicate that local dipolar polarization of the lone pairs and chemical reactivity are closely related, whereas the charge transfer contribution is the key factor driving the local bond dipole. Results on relevant molecules show that local dipole contributions can be used to rationalize inductive polarization effects in alcohol derivatives and typical hydrogen bond interactions. Moreover, bond quadrupole polarization moments, being related to a π character, enable discussion of bond multiplicities and the sorting of families of molecules according to their bond order. In that way, the nature of the CO bond has been revisited for several typical systems by means of the DEMEP analysis, which also appears helpful for discussing aromaticity. Special attention has been given to the carbon monoxide molecule, to the CuCO complex and to a weak intramolecular N···CO interaction involved in several biological systems. In this latter case, it is confirmed that the bond formation is mainly linked to the CO bond polarization. Transferability tests show that the approach is suitable for the design of advanced force fields. © 2008 Wiley Periodicals, Inc. J Comput Chem 2008 [source]


Effects of mesoscale environmental heterogeneity and dispersal limitation on floristic variation in rain forest ferns

JOURNAL OF ECOLOGY, Issue 1 2006
MIRKKA M. JONES
Summary 1. Field studies to evaluate the roles of environmental variation and random dispersal in explaining variation in the floristic composition of rain forest plants at landscape to regional scales have yet to reach a consensus. Moreover, only one study has focused on scales below 10 km², where the effects of dispersal limitation are expected to be easiest to observe. 2. In the present study, we estimate the importance of differences in some key environmental variables (describing canopy openness, soils and topography) relative to the geographical distances between sample plots as determinants of differences in pteridophyte (ferns and fern allies) species composition between plots within a c. 5.7 km² lowland rain forest site in Costa Rica. 3. To assess the relative importance of environmental vs. geographical distances in relation to the length of environmental gradient covered, we compared the results obtained over the full range of soil types, including swamps, with those for upland soils alone. 4. Environmental variability was found to be a far stronger predictor of changes in floristic differences than the geographical distance between sample plots. In particular, differences in soil nutrient content, drainage and canopy openness correlated with floristic differences. 5. The decline in mean floristic similarity with increasing geographical distance was stronger than proposed by the random dispersal model over short distances (up to c. 100 m), which is probably attributable to both dispersal limitation and environmental changes. The scatter around the mean was large at all distances. 6. Our initial expectation was that the effects of dispersal limitation (represented by geographical distance) on observed patterns of floristic similarity would be stronger, and those of environmental differences weaker, than at broader spatial scales. Instead, these results suggest that the niche assembly view is a more accurate representation of pteridophyte communities at local to mesoscales than the dispersal assembly view. [source]


Refined electrolyte-NRTL model: Activity coefficient expressions for application to multi-electrolyte systems

AICHE JOURNAL, Issue 6 2008
G. M. Bollas
Abstract The influence of simplifying assumptions of the electrolyte-nonrandom two-liquid (NRTL) model in the derivation of activity coefficient expressions as applied to multi-electrolyte systems is critically examined. A rigorous and thermodynamically consistent formulation for the activity coefficients is developed, in which the simplifying assumption of holding ionic-charge fraction quantities constant in the derivation of activity coefficient expressions is removed. The refined activity coefficient formulation possesses stronger theoretical properties and practical superiority, which is demonstrated through a case study representing the thermodynamic properties and speciation of dilute to concentrated aqueous sulfuric acid solutions at ambient conditions. In this case study, phenomena such as hydration, ion pairing, and partial dissociation are all taken into account. The overall result of this study is a consistent, analytically derived, short-range interaction contribution formulation for the electrolyte-NRTL activity coefficients and a very accurate representation of aqueous sulfuric acid solutions at ambient conditions at concentrations up to 50 molal. © 2008 American Institute of Chemical Engineers AIChE J, 2008 [source]
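
The short-range, local-composition core that the electrolyte-NRTL model refines is the standard NRTL expression; for a binary mixture it reads as below. This sketch shows only that molecular core with illustrative parameters; the electrolyte model adds long-range (Pitzer-Debye-Hückel) and Born contributions plus the ionic-charge-fraction bookkeeping discussed above:

```python
import numpy as np

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    """Activity coefficients from the standard binary NRTL model."""
    x2 = 1.0 - x1
    G12, G21 = np.exp(-alpha * tau12), np.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return np.exp(ln_g1), np.exp(ln_g2)

# Illustrative binary interaction parameters:
for x1 in (0.1, 0.5, 0.9):
    g1, g2 = nrtl_binary(x1, tau12=0.8, tau21=1.2)
    print(f"x1 = {x1:.1f}: gamma1 = {g1:.3f}, gamma2 = {g2:.3f}")
```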


PSRK method for gas hydrate equilibria: I. Simple and mixed hydrates

AICHE JOURNAL, Issue 1 2004
Ji-Ho Yoon
Abstract A thermodynamic model using the predictive Soave-Redlich-Kwong (PSRK) group contribution method to calculate the fugacities of all components in the vapor and liquid phases in equilibrium with the coexisting hydrate phase is proposed. Since the PSRK method together with the UNIFAC model takes the gas–gas interaction in the vapor and liquid phases into account, the phase equilibria of mixed gas hydrates can be successfully reproduced. This approach greatly improves upon the accuracy of the modified Huron-Vidal second-order (MHV2) model, especially for three-guest hydrate systems. Based on experimentally determined X-ray data, an accurate representation for the molar volume of the structure I (sI) hydrate is provided and used for predicting the equilibrium dissociation of methane hydrate at high pressures. Using this correlation, it is possible to reduce noticeable errors in dissociation predictions of high-pressure hydrate formers. Complete phase behavior, including a new quadruple point, which is predicted to be 272.6 K and 7.55 MPa, for cyclopropane hydrate is presented by the proposed model calculation. © 2004 American Institute of Chemical Engineers AIChE J, 50: 203–214, 2004 [source]
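
The EOS side of such a model ultimately reduces to fugacity coefficients from the SRK cubic. A sketch for a pure gas (the PSRK mixing rules and UNIFAC activity model sit on top of this and are not reproduced here):

```python
import numpy as np

R = 8.314462  # J / (mol K)

def srk_fugacity_coeff(T, P, Tc, Pc, omega):
    """Fugacity coefficient of a pure gas from the Soave-Redlich-Kwong
    EOS, taking the largest real root of the cubic (vapor phase)."""
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    alpha = (1.0 + m * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.42748 * (R * Tc)**2 / Pc * alpha
    b = 0.08664 * R * Tc / Pc
    A, B = a * P / (R * T)**2, b * P / (R * T)
    # SRK in compressibility form: Z^3 - Z^2 + (A - B - B^2) Z - A B = 0
    roots = np.roots([1.0, -1.0, A - B - B**2, -A * B])
    Z = max(r.real for r in roots if abs(r.imag) < 1e-10)
    ln_phi = Z - 1.0 - np.log(Z - B) - (A / B) * np.log(1.0 + B / Z)
    return np.exp(ln_phi)

# Methane near hydrate-forming conditions (Tc = 190.6 K, Pc = 4.599 MPa,
# acentric factor 0.011); conditions chosen only to exercise the function.
print(srk_fugacity_coeff(T=277.0, P=5.0e6, Tc=190.6, Pc=4.599e6, omega=0.011))
```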


Chip-mass spectrometry for glycomic studies

MASS SPECTROMETRY REVIEWS, Issue 2 2009
Laura Bindila
Abstract The introduction of micro- and nanochip front-end technologies for electrospray mass spectrometry addressed a major challenge in carbohydrate analysis: high-sensitivity structural determination and heterogeneity assessment in high dynamic range mixtures of biological origin. Chip-enhanced electrospray ionization was demonstrated to provide reproducible performance irrespective of the type of carbohydrate, while the amenability of chip systems to coupling with different mass spectrometers greatly advances the chip/MS technique as a versatile key tool in glycomic studies. A more accurate representation of the glycan repertoire, including novel biologically relevant information, was achieved in different biological sources, asserting this technique as a valuable tool in glycan biomarker discovery and monitoring. Additionally, the integration of various analytical functions onto chip devices and direct hyphenation to MS proved its potential for glycan analysis during recent years, whereby a new analytical tool is on the verge of maturation: lab-on-chip MS glycomics. The achievements up to the beginning of 2007 on the implementation of chip- and functionally integrated chip/MS in systems glycobiology studies are reviewed here. © 2009 Wiley Periodicals, Inc., Mass Spec Rev 28: 223–253, 2009 [source]


Mesoscale simulations of atmospheric flow and tracer transport in Phoenix, Arizona

METEOROLOGICAL APPLICATIONS, Issue 3 2006
Ge Wang
Abstract Large urban centres located within confining rugged or complex terrain can frequently experience episodes of high concentrations of lower atmospheric pollution. Metropolitan Phoenix, Arizona (United States), is a good example, as the general population is occasionally subjected to high levels of lower atmospheric ozone, carbon monoxide and suspended particulate matter. As a result of the dramatic but continuous increase in population, the accompanying environmental stresses and the local atmospheric circulation that dominates the background flow, an accurate simulation of the mesoscale pollutant transport across Phoenix and similar urban areas is becoming increasingly important. This is particularly the case in an airshed, such as that of Phoenix, where the local atmospheric circulation is complicated by the complex terrain of the area. Within the study presented here, a three-dimensional time-dependent mesoscale meteorological model (HOTMAC) is employed for simulation of lower-atmospheric flow in Phoenix, for both winter and summer case-study periods in 1998. The specific purpose of the work is to test the model's ability to replicate the atmospheric flow based on the actual observations of the lower-atmospheric wind profile and known physical principles. While a reasonable general agreement is found between the model-produced flow and the observed one, the simulation of near-surface wind direction produces a much less accurate representation of actual conditions, as does the simulation of wind speed over 1,000 metres above the surface. Using the wind and turbulence output from the mesoscale model, likely particle plume trajectories are simulated for the case-study periods using a puff dispersion model (RAPTAD). Overall, the results provide encouragement for the efforts towards accurately simulating the mesoscale transport of lower-atmospheric pollutants in environments of complex terrain. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Characterization of monolithic spiral inductor using neural networks

MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 4 2002
A. Ouchar
Abstract The characterization of a monolithic spiral inductor (MSI) using a multilayer neural network approach is presented in this Letter. The inductance and the physical and geometrical parameters are extracted in order to perform a full characterization of the MSI. A three-layer neural network was used for accurate representation. The results obtained using neural networks are compared with measured S parameters of a typical MSI. © 2002 Wiley Periodicals, Inc. Microwave Opt Technol Lett 34: 299–302, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.10442 [source]
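
A small feed-forward network mapping spiral geometry to inductance can be sketched with scikit-learn. Lacking the Letter's measured S-parameter data, the sketch trains on samples from a modified-Wheeler approximation, used here purely to manufacture data; the network topology is likewise illustrative rather than the authors' three-layer design:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

def wheeler_L(n, d_out, d_in):
    """Modified Wheeler estimate for a square spiral inductor (henries)."""
    d_avg = 0.5 * (d_out + d_in)
    rho = (d_out - d_in) / (d_out + d_in)     # fill ratio
    return 2.34 * (4e-7 * np.pi) * n**2 * d_avg / (1.0 + 2.75 * rho)

# Synthetic geometries: turns, outer and inner diameters (metres).
n = rng.uniform(1.5, 10.0, 2000)
d_out = rng.uniform(100e-6, 400e-6, 2000)
d_in = d_out * rng.uniform(0.2, 0.8, 2000)
X = np.column_stack([n, d_out, d_in])
y = wheeler_L(n, d_out, d_in) * 1e9           # nH

scaler = StandardScaler().fit(X)
mlp = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000,
                   random_state=0).fit(scaler.transform(X), y)

test = np.array([[5.0, 250e-6, 120e-6]])
print("MLP prediction:", mlp.predict(scaler.transform(test))[0], "nH")
print("formula value: ", wheeler_L(5.0, 250e-6, 120e-6) * 1e9, "nH")
```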


Direct optimization of dynamic systems described by differential-algebraic equations

OPTIMAL CONTROL APPLICATIONS AND METHODS, Issue 6 2008
Brian C. Fabien
Abstract This paper presents a method for the optimization of dynamic systems described by index-1 differential-algebraic equations (DAE). The class of problems addressed includes optimal control problems and parameter identification problems. Here, the controls are parameterized using piecewise constant inputs on a grid in the time interval of interest. In addition, the DAE are approximated using a Rosenbrock–Wanner (ROW) method. In this way the infinite-dimensional optimal control problem is transformed into a finite-dimensional nonlinear programming problem (NLP). The NLP is solved using a sequential quadratic programming (SQP) technique that minimizes the L∞ exact penalty function, using only strictly convex QP subproblems. This paper shows that the ROW method discretization of the DAE leads to (i) a relatively small NLP problem and (ii) an efficient technique for evaluating the function, constraints and gradients associated with the NLP problem. This paper also investigates a state mesh refinement technique that ensures a sufficiently accurate representation of the optimal state trajectory. Two nontrivial examples are used to illustrate the effectiveness of the proposed method. Copyright © 2008 John Wiley & Sons, Ltd. [source]
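
The transcription idea, piecewise-constant controls turning an infinite-dimensional problem into an NLP, can be shown on a toy ODE (standing in for the paper's DAE and ROW integrator) solved with SciPy's SLSQP rather than the authors' exact-penalty SQP:

```python
import numpy as np
from scipy.optimize import minimize

# Steer x' = -x + u from x(0) = 1 toward 0 while penalizing effort,
# with u piecewise constant on N intervals of the horizon [0, T].
N, T, SUB = 20, 2.0, 10
dt = T / N

def simulate(u):
    x, xs = 1.0, []
    for k in range(N):
        for _ in range(SUB):               # explicit Euler sub-steps
            x += (dt / SUB) * (-x + u[k])  # (a ROW method in the paper)
        xs.append(x)
    return np.array(xs)

def cost(u):
    xs = simulate(u)
    return dt * np.sum(xs**2 + 0.1 * np.asarray(u)**2)

res = minimize(cost, x0=np.zeros(N), method="SLSQP",
               bounds=[(-2.0, 2.0)] * N)
print(round(res.fun, 4), res.x[:4].round(3))  # cost and first controls
```

Each NLP variable is one control level; state constraints or terminal conditions would enter as additional constraint functions of u evaluated through the same simulation.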


Common Fluorescent Sunlamps are an Inappropriate Substitute for Sunlight

PHOTOCHEMISTRY & PHOTOBIOLOGY, Issue 3 2000
Douglas B. Brown
ABSTRACT Fluorescent sunlamps are commonly employed as convenient sources in photobiology experiments. The ability of Kodacel to filter photobiologically irrelevant UVC wavelengths has been described. Yet there still remains a major unaddressed issue: the over-representation of UVB in the output. The shortest terrestrial solar wavelengths reaching the surface are ~295 nm, with the 295–320 nm range comprising ~4% of the solar UV irradiance. In Kodacel-filtered sunlamps, 47% of the UV output falls in this range. Consequently, in studies designed to understand skin photobiology after solar exposure, the use of these unfiltered sunlamps may result in misleading, or even incorrect, conclusions. To demonstrate the importance of using an accurate representation of the UV portion of sunlight, the ability of different ultraviolet radiation (UVR) sources to induce the expression of a reporter gene was assayed. Unfiltered fluorescent sunlamps (FS lamps) induce optimal chloramphenicol acetyltransferase (CAT) activity at apparently low doses (10–20 mJ/cm²). Filtering the FS lamps with Kodacel raised the delivered dose for optimal CAT activity to 50–60 mJ/cm². With the more solar-like UVA-340 lamps somewhat lower levels of CAT activity were induced even though the apparent delivered doses were significantly greater than for either the FS or the Kodacel-filtered sunlamps (KFS lamps). When DNA from parallel-treated cells was analyzed for photoproduct formation by a radioimmunoassay, it was shown that the induction of CAT activity correlated with the level of induced photoproduct formation regardless of the source employed. [source]


Dynamic dam–reservoir interaction – treatment of radiation damping by the "Mixed-Variables-Technique"

PROCEEDINGS IN APPLIED MATHEMATICS & MECHANICS, Issue 1 2003
C. Trinks Dipl.-Ing.
In this paper, a method for the consistent description of a fluid channel of infinite extent in a fully coupled time-domain dam–reservoir interaction analysis is proposed. The method is based on an analytical solution with respect to the unbounded dimensions of the reservoir. Thus, an accurate representation of radiation damping is achieved. [source]


Evaluation of a large-eddy model simulation of a mixed-phase altocumulus cloud using microwave radiometer, lidar and Doppler radar data

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 618 2006
J. H. Marsham
Abstract Using the Met Office large-eddy model (LEM) we simulate a mixed-phase altocumulus cloud that was observed from Chilbolton in southern England by a 94 GHz Doppler radar, a 905 nm lidar, a dual-wavelength microwave radiometer and also by four radiosondes. It is important to test and evaluate such simulations with observations, since there are significant differences between results from different cloud-resolving models for ice clouds. Simulating the Doppler radar and lidar data within the LEM allows us to compare observed and modelled quantities directly, and allows us to explore the relationships between observed and unobserved variables. For general-circulation models, which currently tend to give poor representations of mixed-phase clouds, the case shows the importance of using: (i) separate prognostic ice and liquid water, (ii) a vertical resolution that captures the thin layers of liquid water, and (iii) an accurate representation of the subgrid vertical velocities that allow liquid water to form. It is shown that large-scale ascents and descents are significant for this case, and so the horizontally averaged LEM profiles are relaxed towards observed profiles to account for these. The LEM simulation then gives a reasonable cloud, with an ice-water path approximately two thirds of that observed, with liquid water at the cloud top, as observed. However, the liquid-water cells that form in the updraughts at cloud top in the LEM have liquid-water paths (LWPs) up to half those observed, and there are too few cells, giving a mean LWP five to ten times smaller than observed. In reality, ice nucleation and fallout may deplete ice-nuclei concentrations at the cloud top, allowing more liquid water to form there, but this process is not represented in the model. Decreasing the heterogeneous nucleation rate in the LEM increased the LWP, which supports this hypothesis. The LEM captures the increase in the standard deviation in Doppler velocities (and so vertical winds) with height, but values are 1.5 to 4 times smaller than observed (although values are larger in an unforced model run, this only increases the modelled LWP by a factor of approximately two). The LEM data show that, for values larger than approximately 12 cm s⁻¹, the standard deviation in Doppler velocities provides an almost unbiased estimate of the standard deviation in vertical winds, but provides an overestimate for smaller values. Time-smoothing the observed Doppler velocities and modelled mass-squared-weighted fallspeeds shows that observed fallspeeds are approximately two-thirds of the modelled values. Decreasing the modelled fallspeeds to those observed increases the modelled IWC, giving an IWP 1.6 times that observed. Copyright © 2006 Royal Meteorological Society [source]


On the use of the super compact scheme for spatial differencing in numerical models of the atmosphere

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 609 2005
V. Esfahanian
Abstract The 'Super Compact Finite-Difference Method' (SCFDM) is applied to spatial differencing of some prototype linear and nonlinear geophysical fluid dynamics problems. An alternative form of the SCFDM relations for spatial derivatives is derived. The sixth-order SCFDM is compared in detail with the conventional fourth-order compact and the second-order centred differencing. For the frequency of linear inertia-gravity waves on different numerical grids (Arakawa's A–E and Randall's Z) related to the Rossby adjustment process, the sixth-order SCFDM shows a substantial improvement on the conventional methods. For the Jacobians involved in vorticity advection by non-divergent flow and in the Bolin–Charney balance equation, a general framework, valid for every finite-difference method, is derived to present the discrete forms of the Jacobians. It is found that the sixth-order SCFDM provides a noticeably more accurate representation of the wave-number distribution of the Jacobians, when compared with the conventional methods. The problem of reconstructing the stream-function field from the vorticity field on a sphere is also considered. For the Rossby–Haurwitz wave, the computation of a normalized global error at different horizontal resolutions in longitude and latitude directions shows that the sixth-order SCFDM can markedly improve on the fourth-order compact. The sixth-order SCFDM is thus proposed as a viable method to improve the accuracy of finite-difference models of the atmosphere. Copyright © 2005 Royal Meteorological Society. [source]
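
The flavour of compact differencing is easy to demonstrate: the conventional fourth-order compact (Padé) first derivative that the SCFDM is benchmarked against couples neighbouring derivative values through a tridiagonal system, and already beats centred differencing by orders of magnitude on a smooth periodic function:

```python
import numpy as np

def centred_2(f, h):
    """Second-order centred first derivative on a periodic grid."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)

def compact_4(f, h):
    """Fourth-order Pade scheme, periodic:
    (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = (3/2)(f_{i+1} - f_{i-1})/(2h)."""
    n = len(f)
    A = np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
    A[0, -1] = A[-1, 0] = 0.25                  # periodic wrap-around
    rhs = 1.5 * (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)
    return np.linalg.solve(A, rhs)

n = 32
x = 2 * np.pi * np.arange(n) / n
h = x[1] - x[0]
f, exact = np.sin(3 * x), 3 * np.cos(3 * x)
print("centred-2 max error:", np.abs(centred_2(f, h) - exact).max())
print("compact-4 max error:", np.abs(compact_4(f, h) - exact).max())
```

The sixth-order SCFDM sharpens the same idea further, which is what improves the resolved wave-number range of the Jacobians discussed above.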


On the representation of gravity waves in numerical models of the shallow-water equations

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 563 2000
A. R. Mohebalhojeh
Abstract Gravity waves, or imbalanced motions, that develop during the evolution of vortical flows in numerical models of the shallow water (SW) equations are examined in detail. The focus here is on nearly-balanced flows, with small but non-zero gravity-wave activity. For properly initialized flows, it is reasonable to expect small GW activity when Froude numbers Fr < 1 and Rossby numbers Ro ~ 1. The guiding principle in the present study is that an accurate representation of potential vorticity (PV) is the pre-requisite to a fair assessment of the generation of gravity waves. The contour-advective semi-Lagrangian (CASL) algorithm for the SW equations is applied, as it shows a remarkable improvement in the simulation of PV. However, it is shown that the standard CASL algorithm for SW leads to a noticeable numerical generation of gravity waves. The false generation of gravity waves can equivalently be thought of as the false, or numerical, breakdown of balance. In order to understand the maintenance of balance in the SW equations, a hierarchy of CASL algorithms is introduced. The main idea behind the new hierarchy is to implement PV inversion (i.e. partial balancing) directly within the SW algorithm, while still permitting imbalanced motions. The results of the first three members of the hierarchy, CA0 (standard CASL), CA1, and CA2, are described and are compared with the results of two other SW algorithms, a pseudo-spectral and a semi-Lagrangian one. The main body of results is obtained for a highly ageostrophic regime of flow, with |Ro|max ~ 1 and Frmax ~ 0.5, where sub-index 'max' denotes maximum over the domain. Other flow regimes in the relevant parts of the Ro–Fr parameter space are also explored. It is found that, for a given resolution and Froude number, there is an optimal CASL algorithm, i.e. one which gives rise to the least numerical generation of gravity waves. [source]


What can we learn by computing 13Cα chemical shifts for X-ray protein models?

ACTA CRYSTALLOGRAPHICA SECTION D, Issue 7 2009
Yelena A. Arnautova
The room-temperature X-ray structures of ubiquitin (PDB code 1ubq) and of the RNA-binding domain of nonstructural protein 1 of influenza A virus (PDB code 1ail), solved at 1.8 and 1.9 Å resolution, respectively, were used to investigate whether a set of conformations rather than a single X-ray structure provides better agreement with both the X-ray data and the observed 13Cα chemical shifts in solution. For this purpose, a set of new conformations for each of these proteins was generated by fitting them to the experimental X-ray data deposited in the PDB. For each of the generated structures, which show R and Rfree factors similar to those of the deposited X-ray structure, the 13Cα chemical shifts of all residues in the sequence were computed at the DFT level of theory. The sets of conformations were then evaluated by their ability to reproduce the observed 13Cα chemical shifts by using the conformational average root-mean-square deviation (ca-r.m.s.d.). For ubiquitin, the computed set of conformations is a better representation of the observed 13Cα chemical shifts in terms of the ca-r.m.s.d. than a single X-ray-derived structure. However, for the RNA-binding domain of nonstructural protein 1 of influenza A virus, consideration of an ensemble of conformations does not improve the agreement with the observed 13Cα chemical shifts. Whether an ensemble of conformations rather than any single structure is a more accurate representation of a protein structure in the crystal as well as of the observed 13Cα chemical shifts is determined by the dispersion of coordinates, in terms of the all-atom r.m.s.d. among the generated models; these generated models satisfy the experimental X-ray data with accuracy as good as the PDB structure. Therefore, generation of an ensemble is a necessary step to determine whether or not a single structure is sufficient for an accurate representation of both experimental X-ray data and observed 13Cα chemical shifts in solution. [source]


Fauna habitat modelling and mapping: A review and case study in the Lower Hunter Central Coast region of NSW

AUSTRAL ECOLOGY, Issue 7 2005
BRENDAN A. WINTLE
Abstract Habitat models are now broadly used in conservation planning on public lands. If implemented correctly, habitat modelling is a transparent and repeatable technique for describing and mapping biodiversity values, and its application in peri-urban and agricultural landscape planning is likely to expand rapidly. Conservation planning in such landscapes must be robust to the scrutiny that arises when biodiversity constraints are placed on developers and private landholders. A standardized modelling and model evaluation method based on widely accepted techniques will improve the robustness of conservation plans. We review current habitat modelling and model evaluation methods and provide a habitat modelling case study in the New South Wales central coast region that we hope will serve as a methodological template for conservation planners. We make recommendations on modelling methods that are appropriate when presence-absence and presence-only survey data are available and provide methodological details and a website with data and training material for modellers. Our aim is to provide practical guidelines that preserve methodological rigour and result in defendable habitat models and maps. The case study was undertaken in a rapidly developing area with substantial biodiversity values under urbanization pressure. Habitat maps for seven priority fauna species were developed using logistic regression models of species-habitat relationships and a bootstrapping methodology was used to evaluate model predictions. The modelled species were the koala, tiger quoll, squirrel glider, yellow-bellied glider, masked owl, powerful owl and sooty owl. Models ranked sites adequately in terms of habitat suitability and provided predictions of sufficient reliability for the purpose of identifying preliminary conservation priority areas. However, they are subject to multiple uncertainties and should not be viewed as a completely accurate representation of the distribution of species habitat. We recommend the use of model prediction in an adaptive framework whereby models are iteratively updated and refined as new data become available. [source]
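
A compact sketch of the recommended presence-absence workflow, a logistic habitat model whose discrimination is then bootstrapped to attach uncertainty, using synthetic survey data in place of the case-study records:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Synthetic survey: environmental covariates and presence/absence.
X = rng.normal(size=(300, 4))       # e.g. elevation, NDVI, distance to water
logit = X @ [1.2, -0.8, 0.4, 0.0] - 0.5
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression().fit(X, y)

aucs = []
for _ in range(500):                # bootstrap the evaluation metric
    idx = rng.integers(0, len(y), len(y))
    if y[idx].min() == y[idx].max():
        continue                    # resample drew a single class; skip
    aucs.append(roc_auc_score(y[idx], model.predict_proba(X[idx])[:, 1]))

print("AUC (2.5%, 50%, 97.5%):", np.percentile(aucs, [2.5, 50, 97.5]).round(3))
```

A fuller scheme, closer to the one used in the case study, refits the model on each bootstrap sample and scores it on the out-of-bag sites, so the interval also reflects model-fitting variability.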


Investigation of air gaps entrapped in protective clothing systems

FIRE AND MATERIALS, Issue 3 2002
Il Young Kim
Air gaps entrapped in protective clothing are known as one of the major factors affecting heat transfer through multiple layers of flexible clothing fabrics. The identification and quantification of the air gaps are two aspects of a multidisciplinary research effort directed toward improving the flame/thermal protective performance of the clothing. Today's three-dimensional (3-D) whole body digitizers, which provide accurate representations of the surface of the human body, can be a novel means for visualizing and quantifying the air gaps between the wearer and his clothing. In this paper we discuss how images from a 3-D whole body digitizer are used to determine local and global distributions of air gaps and the quantification of air gap sizes in single and multilayer clothing systems dressed on a thermal manikin. Examples are given that show concordance between air gap distributions and burn patterns obtained from full-scale manikin fire tests. We finish with a discussion of the application of air gap information to bench-scale testing to improve the protective performance of current flame/thermal protective clothing. Copyright © 2002 John Wiley & Sons, Ltd. [source]


Visualizing polysemy using LSA and the predication algorithm

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 8 2010
Guillermo Jorge-Botana
Context is a determining factor in language and plays a decisive role in polysemic words. Several psycholinguistically motivated algorithms have been proposed to emulate human management of context, under the assumption that the value of a word is evanescent and takes on meaning only in interaction with other structures. The predication algorithm (Kintsch, 2001), for example, uses a vector representation of the words produced by LSA (Latent Semantic Analysis) to dynamically simulate the comprehension of predications and even of predicative metaphors. The objective of this study was to predict some unwanted effects that could be present in vector-space models when extracting different meanings of a polysemic word (predominant meaning inundation, lack of precision, and low-level definition), and propose ideas based on the predication algorithm for avoiding them. Our first step was to visualize such unwanted phenomena and also the effect of solutions. We use different methods to extract the meanings for a polysemic word (without context, vector sum, and predication algorithm). Our second step was to conduct an analysis of variance to compare such methods and measure the impact of potential solutions. Results support the idea that a human-based computational algorithm like the predication algorithm can take into account features that ensure more accurate representations of the structures we seek to extract. Theoretical assumptions and their repercussions are discussed. [source]
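
A toy version of the contrast between plain vector addition and predication-style composition, using a hand-made four-dimensional "semantic space" rather than a real LSA space (all vectors and vocabulary invented for illustration):

```python
import numpy as np

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Axes (informally): finance, geography, water, motion.
vocab = {
    "bank":  np.array([0.90, 0.40, 0.20, 0.00]),  # finance sense dominates
    "money": np.array([1.00, 0.00, 0.00, 0.00]),
    "river": np.array([0.00, 0.60, 0.90, 0.10]),
    "shore": np.array([0.00, 0.70, 0.80, 0.00]),
    "loan":  np.array([0.95, 0.00, 0.05, 0.00]),
}

def vector_sum(word, context):
    return vocab[word] + vocab[context]

def predication(word, context, k=1):
    """Simplified Kintsch-style composition: also add the k neighbours
    most relevant to the context, steering the result to the right sense."""
    others = [w for w in vocab if w not in (word, context)]
    relevant = sorted(others, key=lambda w: cos(vocab[w], vocab[context]),
                      reverse=True)[:k]
    return vocab[word] + vocab[context] + sum(vocab[w] for w in relevant)

for compose in (vector_sum, predication):
    v = compose("bank", "river")
    print(compose.__name__, {w: round(cos(v, vocab[w]), 2)
                             for w in ("money", "shore")})
```

The plain sum stays relatively close to "money" (predominant meaning inundation), while the predication-style composite moves decisively toward the riverbank sense, which is the effect the visualization study measures.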