Various Scenarios (various + scenario)

Selected Abstracts


An approach to combined rock physics and seismic modelling of fluid substitution effects

GEOPHYSICAL PROSPECTING, Issue 2 2002
Tor Arne Johansen
ABSTRACT The aim of seismic reservoir monitoring is to map the spatial and temporal distributions and contact interfaces of various hydrocarbon fluids and water within a reservoir rock. During the production of hydrocarbons, the fluids produced are generally displaced by an injection fluid. We discuss possible seismic effects which may occur when the pore volume contains two or more fluids. In particular, we investigate the effect of immiscible pore fluids, i.e. when the pore fluids occupy different parts of the pore volume. The modelling of seismic velocities is performed using a differential effective-medium theory in which the various pore fluids are allowed to occupy the pore space in different ways. The P-wave velocity is seen to depend strongly on the bulk modulus of the pore fluids in the most compliant (low aspect ratio) pores. Various scenarios of the microscopic fluid distribution across a gas–oil contact (GOC) zone have been designed, and the corresponding seismic properties modelled. Such GOC transition zones generally give diffuse reflection regions instead of the typical distinct GOC interface, and should therefore be modelled by finite-difference or finite-element techniques. We have combined rock physics modelling and seismic modelling to simulate the seismic responses of some gas–oil zones, applying various fluid-distribution models. The seismic responses may vary in reflection time, amplitude and phase characteristics. Our results indicate that when performing a reservoir monitoring experiment, erroneous conclusions about a GOC movement may be drawn if the microscopic fluid-distribution effects are neglected.
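The core dependence described here, of P-wave velocity on the pore-fluid bulk modulus, can be illustrated with the standard Gassmann fluid-substitution step. This is a simplified stand-in for the differential effective-medium modelling used in the paper, and all rock and fluid properties below are illustrative assumptions:

```python
import numpy as np

def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Saturated-rock bulk modulus via Gassmann's equation.

    Shown only as the standard, simpler fluid-substitution step;
    the paper itself uses a differential effective-medium theory.
    """
    b = 1.0 - k_dry / k_min
    return k_dry + b**2 / (phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min**2)

def vp(k_sat, mu, rho):
    """P-wave velocity from bulk modulus, shear modulus and density."""
    return np.sqrt((k_sat + 4.0 * mu / 3.0) / rho)

# Illustrative values (moduli in GPa, densities in g/cm^3 -> Vp in km/s):
# quartz matrix, 20% porosity, gas versus oil as the pore fill.
k_min, k_dry, mu, phi = 37.0, 12.0, 9.0, 0.20
for name, k_fl, rho_fl in [("gas", 0.04, 0.2), ("oil", 1.0, 0.8)]:
    rho = 2.65 * (1 - phi) + rho_fl * phi
    print(name, round(vp(gassmann_ksat(k_dry, k_min, k_fl, phi), mu, rho), 2), "km/s")
```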


A Dynamic Integrated Analysis of Truck Tires in Western Europe

JOURNAL OF INDUSTRIAL ECOLOGY, Issue 2 2000
Pieter J. H. van Beukering
Summary By evaluating tires from a perspective of industrial metabolism, potential novel and practical ways to reduce their environmental impact can be found. This may be achieved by focusing on technological issues such as choosing materials, designing products, and recovering materials, or by looking at institutional and social barriers and incentives such as opening waste markets or changing consumer behavior. A model is presented for the life cycle of truck tires in Western Europe that is dynamic in nature and evaluates both environmental and economic consequences. Various scenarios are simulated, including longer tire lifetimes, better maintenance of tire pressure, increased use of less-expensive Asian tires, and increased use of fuel-efficiency-enhancing tires ("eco-tires"). Tentative results indicate that, among other things, more than 95% of the overall environmental impact during the life of a tire occurs during its use, due to the effect of tires on automotive fuel efficiency. Better maintenance of tire pressure and use of eco-tires produce greater environmental and economic benefits than more-durable and/or less-expensive (Asian) tires. These results imply that the emphasis in environmental policies related to tires should shift from the production and waste stages to the consumption stage. They also suggest that the focus on materials throughput and the associated Factor 4 or Factor 10 reductions in mass is less important than the quality of the tires and their management.
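A back-of-envelope calculation shows why the use phase can dominate so heavily: the fuel a tire is responsible for through rolling resistance dwarfs its embodied production energy. All numbers below are illustrative assumptions, not figures from the model:

```python
# Toy back-of-envelope check (all numbers are illustrative assumptions,
# not the paper's): compare the energy a truck tire "consumes" through
# rolling resistance over its service life with its production energy.
tire_mass_kg = 55.0
production_energy_mj = tire_mass_kg * 90.0   # assumed ~90 MJ/kg embodied energy
lifetime_km = 150_000.0
fuel_mj_per_km = 13.0                        # assumed heavy-truck fuel use
rolling_share = 0.30                         # assumed rolling-resistance share of fuel
n_tires = 10                                 # resistance split across the axle tires

use_energy_mj = lifetime_km * fuel_mj_per_km * rolling_share / n_tires
total = use_energy_mj + production_energy_mj
print(f"use-phase share: {use_energy_mj / total:.1%}")  # ~92% with these inputs
```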


Unanticipated impacts of spatial variance of biodiversity on plant productivity

ECOLOGY LETTERS, Issue 8 2005
Lisandro Benedetti-Cecchi
Abstract Experiments on biodiversity have shown that productivity is often a decelerating monotonic function of biodiversity. A property of nonlinear functions, known as Jensen's inequality, predicts negative effects of the variance of predictor variables on the mean of response variables. One implication of this relationship is that an increase in spatial variability of biodiversity can cause dramatic decreases in the mean productivity of the system. Here I quantify these effects by conducting a meta-analysis of experimental data on biodiversity–productivity relationships in grasslands and using the empirically derived parameter estimates to simulate various scenarios of spatial variance and mean levels of biodiversity. Jensen's inequality was estimated independently using Monte Carlo simulations and quadratic approximations. The median values of Jensen's inequality estimated with the first method ranged from 3.2 to 26.7%, whilst values obtained with the second method ranged from 5.0 to 45.0%. Meta-analyses conducted separately for each combination of simulated values of mean and spatial variance of biodiversity indicated that effect sizes were significantly larger than zero in all cases. Because patterns of biodiversity are becoming increasingly variable under intense anthropogenic pressure, the impact of loss of biodiversity on productivity may be larger than current estimates indicate.
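Both estimation routes are easy to reproduce in miniature. The sketch below draws spatially variable richness values from a gamma distribution, pushes them through a concave power-law productivity curve (the curve parameters and the richness mean/SD are illustrative assumptions, not the meta-analysis estimates), and compares the Monte Carlo estimate of Jensen's inequality with the quadratic (second-order Taylor) approximation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Decelerating biodiversity-productivity curve; a and b are illustrative.
a, b = 100.0, 0.26
f = lambda s: a * s**b

mean_s, sd_s = 8.0, 6.0        # mean and spatial SD of species richness
# Gamma draws keep richness positive while matching the mean and SD.
s = rng.gamma((mean_s / sd_s)**2, sd_s**2 / mean_s, size=100_000)

# Jensen's inequality: for a concave f, E[f(S)] < f(E[S]).
monte_carlo = f(s).mean()
at_the_mean = f(mean_s)
print(f"Monte Carlo drop:      {(at_the_mean - monte_carlo) / at_the_mean:.1%}")

# Quadratic approximation: E[f(S)] ~ f(E[S]) + 0.5 * f''(E[S]) * Var[S]
f2 = a * b * (b - 1) * mean_s**(b - 2)
print(f"quadratic approx drop: {-0.5 * f2 * sd_s**2 / at_the_mean:.1%}")
```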


Method for using complete and incomplete trios to identify genes related to a quantitative trait

GENETIC EPIDEMIOLOGY, Issue 1 2004
Emily O. Kistner
Abstract A number of tests for linkage and association with qualitative traits have been developed, the best known being the transmission/disequilibrium test (TDT). For quantitative traits, various extensions of the TDT have been suggested. The quantitative trait approach we propose is based on extending the log-linear model for case-parent trio data (Weinberg et al. [1998] Am. J. Hum. Genet. 62:969–978). Like the log-linear approach for qualitative traits, our proposed polytomous logistic approach for quantitative traits allows for population admixture by conditioning on parental genotypes. Compared to other methods, simulations demonstrate good power and robustness of the proposed test under various scenarios of the genotype effect, distribution of the quantitative trait, and population stratification. In addition, missing parental genotype data can be accommodated through an expectation-maximization (EM) algorithm, which recovers most of the power lost due to incomplete trios. Published 2004 Wiley-Liss, Inc.
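The EM idea, filling in expected counts for unobserved genotype data and re-estimating until convergence, can be shown in miniature. The sketch below is not the authors' trio-specific algorithm; it is a generic EM for a deliberately simple problem (allele frequency under Hardy-Weinberg equilibrium when some genotypes are only partially observed), chosen to make the E- and M-steps explicit:

```python
# Minimal EM sketch (generic illustration, not the authors' trio method):
# estimate allele frequency p under Hardy-Weinberg when some individuals
# are only known to carry allele A (AA or Aa, indistinguishable).
n_aa, n_ab, n_bb = 40, 80, 60   # fully observed genotype counts (illustrative)
n_carrier = 50                  # incomplete: AA or Aa, we don't know which

p = 0.5                         # initial guess
for _ in range(100):
    # E-step: expected split of the incomplete "carrier" count
    q = 1.0 - p
    w_aa = p**2 / (p**2 + 2 * p * q)   # P(AA | carries A)
    e_aa = n_aa + n_carrier * w_aa
    e_ab = n_ab + n_carrier * (1.0 - w_aa)
    # M-step: allele-counting estimate of p from the expected complete data
    n_total = e_aa + e_ab + n_bb
    p = (2 * e_aa + e_ab) / (2 * n_total)

print(f"EM estimate of p: {p:.4f}")
```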


Delineating the rupture planes of an earthquake doublet using Source-Scanning Algorithm: application to the 2005 March 3 Ilan Doublet, northeast Taiwan

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2010
Chih-Wen Kan
SUMMARY Correct identification of the fault plane(s) associated with an earthquake doublet is a very challenging problem, because the pair of events often occurs close together in space and time with almost the same magnitude. Most long-period waveforms of an earthquake doublet are severely tangled and thus unsuitable for conventional waveform inversion methods. In this study, we try to resolve this issue by utilizing the recently developed Source-Scanning Algorithm (SSA). The SSA systematically searches the model space for seismic sources whose times and locations are most compatible with the observed arrivals of large amplitudes on seismograms. The identification of a seismic source is based on the brightness function, defined as the summation of the normalized waveform amplitudes at the predicted arrival times at all stations. By illuminating the spatiotemporal distribution of asperities during an earthquake's source process, we are able to constrain the orientation of the rupture propagation, which in turn leads to the identification of the fault plane. A series of synthetic experiments are performed to test the SSA's resolution under various scenarios, including different directions of rupture propagation, imperfect station coverage and a short origin-time difference between the two events of a doublet. Because only short-period records are needed in the analysis, the proposed method is best suited to an earthquake doublet with a short time gap between the two events. Using the 2005 Ilan doublet (origin-time difference of only 70 s) that occurred in northeast Taiwan as an example, we show that the trace of the brightest spots moves towards the west and infer the E–W-striking plane to be the actual fault plane.
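The brightness stack at the heart of the SSA is compact enough to sketch directly. In the toy version below, the constant-velocity straight-ray travel times, the station layout and all function names are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

# Minimal SSA brightness sketch (illustrative assumptions throughout).
def brightness(env, t_axis, stations, v, src, t0):
    """Stack of per-station normalized envelope amplitudes sampled at the
    predicted arrival times for trial source `src` and origin time t0."""
    br = 0.0
    for n, stn in enumerate(stations):
        tt = np.linalg.norm(src - stn) / v      # straight-ray travel time
        k = np.searchsorted(t_axis, t0 + tt)    # nearest time sample
        if k < env.shape[1]:
            br += env[n, k] / env[n].max()      # per-station normalization
    return br / len(stations)

# Tiny synthetic test: one source, four stations, impulsive envelopes.
v, dt = 5.0, 0.01
t_axis = np.arange(0.0, 30.0, dt)
true_src, true_t0 = np.array([10.0, 10.0, 5.0]), 2.0
stations = np.array([[0, 0, 0], [20, 0, 0], [0, 20, 0], [20, 20, 0]], float)
env = np.full((4, t_axis.size), 0.05)
for n, stn in enumerate(stations):
    k = round((true_t0 + np.linalg.norm(true_src - stn) / v) / dt)
    env[n, k - 5:k + 5] = 1.0                   # "large amplitude" arrival

# The true (x, t0) out-shines a wrong trial point; scanning a grid of
# such trial points through time traces the rupture propagation.
print(brightness(env, t_axis, stations, v, true_src, true_t0))        # ~1.0
print(brightness(env, t_axis, stations, v, true_src + 5.0, true_t0))  # much lower
```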


Analyzing GPS signals to investigate path diversity effects of non-geostationary orbit satellite communication systems

INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 6 2002
Hsin-Piao Lin
Abstract The concept behind path diversity is that a user who can access several satellites simultaneously will be able to communicate more effectively than a user who can access only one. The success of this method depends on the environment, the satellite constellation, and the diversity-combining technology. This paper explores the path diversity effects of non-geostationary orbit (NGO) satellite personal communication services, for different degrees of user mobility, under various scenarios, using the constellation of the global positioning system (GPS). Measurements are taken near downtown Taipei. Three types of mobility (fixed-point, pedestrian, and vehicular) are examined, and switch diversity and the maximum ratio combining method are applied to determine the path diversity gain and calculate bit error probability. The error probability performance of applying diversity schemes to coherent binary phase shift keying (BPSK) and non-coherent differential phase shift keying (DPSK) modulations over Rician fading channels is also analysed and evaluated using the characteristic function method. The results show that fading can be significantly reduced and diversity greatly increased. A significant diversity gain and improvement in bit error rate (BER) can be expected in all cases by simply applying the switch diversity scheme. Moreover, for the maximum ratio combining method, the results imply that summing two satellite signals suffices to increase diversity and improve the bit error rate performance. Copyright © 2002 John Wiley & Sons, Ltd.
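The two-branch maximum-ratio-combining result is easy to reproduce by Monte Carlo rather than the characteristic-function analysis used in the paper. In the sketch below, the Rice factor, Eb/N0 and sample count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Monte Carlo sketch: coherent BPSK over Rician fading, comparing a
# single satellite link with two-branch maximum ratio combining (MRC).
def rician(n, k_factor):
    """Unit-power Rician fading gains with Rice factor K (linear)."""
    los = np.sqrt(k_factor / (k_factor + 1))
    scat = np.sqrt(1 / (2 * (k_factor + 1)))
    return los + scat * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

n, k, ebn0_db = 1_000_000, 5.0, 8.0            # illustrative parameters
ebn0 = 10 ** (ebn0_db / 10)
bits = rng.integers(0, 2, n)
s = 2.0 * bits - 1.0                           # BPSK symbols, unit energy
noise_sd = np.sqrt(1 / (2 * ebn0))             # per-dimension noise std

h1, h2 = rician(n, k), rician(n, k)
r1 = h1 * s + noise_sd * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
r2 = h2 * s + noise_sd * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

one_branch = (np.conj(h1) * r1).real > 0                 # single satellite
mrc = (np.conj(h1) * r1 + np.conj(h2) * r2).real > 0     # MRC of two satellites

print("BER, one branch:  ", np.mean(one_branch != (bits == 1)))
print("BER, 2-branch MRC:", np.mean(mrc != (bits == 1)))
```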


An optimal water allocation for an irrigation district in Pingtung County, Taiwan

IRRIGATION AND DRAINAGE, Issue 3 2009
Yun Cheng
optimal water allocation; conjunctive use; linear programming Abstract This paper presents a linear programming model to study the conjunctive use of surface water and groundwater for optimal water allocation in Taiwan. Increasing demand for water underlines the need for effective planning and development of irrigation water resources. A groundwater simulation model was built to represent the hydrogeological structure of the Pingtung Plain in southwestern Taiwan, and the optimal withdrawal for three irrigation areas in the plain was analysed. The optimal ratios for allocating water among three canals are also analysed in this research. The optimal distribution rate of each canal depends on the season, the irrigation methods and the crops, which here are two paddy rice crops and one upland crop. After simulation of various scenarios, the results show that the minimal required groundwater withdrawal, together with the excess water available in the area, can satisfy current agricultural practices. Copyright © 2008 John Wiley & Sons, Ltd.
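A linear program of this shape is small enough to sketch with scipy. In the toy version below, the canal demands, the surface-water capacity and the sustainable pumping limits are illustrative assumptions, not the paper's calibrated values; the objective minimizes total groundwater withdrawal while meeting each canal's demand:

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables x = [s1, s2, s3, g1, g2, g3]: surface water s_i and
# groundwater g_i delivered to each of three canal commands (10^6 m3).
demand = np.array([30.0, 45.0, 25.0])    # seasonal demand per canal (assumed)
surface_cap = 70.0                        # total divertible surface water (assumed)
gw_cap = np.array([15.0, 20.0, 12.0])     # sustainable pumping limit per canal

c = np.r_[np.zeros(3), np.ones(3)]        # minimize total groundwater use

# Meet each canal's demand: s_i + g_i >= d_i  ->  -(s_i + g_i) <= -d_i,
# plus one row capping total surface-water diversion.
a_ub = np.vstack([-np.hstack([np.eye(3), np.eye(3)]),
                  np.r_[np.ones(3), np.zeros(3)][None, :]])
b_ub = np.r_[-demand, surface_cap]

bounds = [(0, None)] * 3 + [(0, cap) for cap in gw_cap]
res = linprog(c, A_ub=a_ub, b_ub=b_ub, bounds=bounds)
print("surface:    ", res.x[:3].round(1))
print("groundwater:", res.x[3:].round(1))
```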


A Cooperative Game Theory of Noncontiguous Allies

JOURNAL OF PUBLIC ECONOMIC THEORY, Issue 4 2001
Daniel G. Arce M.
This paper develops a cooperative game-theoretic representation of alliances with noncontiguous members that is based on cost savings from reducing overlapping responsibilities and sequestering borders. For various scenarios, three solutions (the Shapley value, the nucleolus, and the core's centroid) are found and compared. Even though their underlying ethical norms vary, the solutions are often identical for cases involving contiguous allies and for rectangular arrays of noncontiguous allies. When transaction costs and/or alternative spatial configurations are investigated, the solutions may differ. In all cases the cooperative approach leads to a distribution of alliance costs that need not coincide with the traditional emphasis on gross domestic product size as a proxy for deterrence value (the exploitation hypothesis). Instead, burdens can be defined based upon a country's spatial and strategic location within the alliance.
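Of the three solutions, the Shapley value is the most direct to compute for a small alliance: average each ally's marginal contribution over all orders in which the coalition could form. The characteristic function in the sketch below is an illustrative cost-savings game, not one of the paper's scenarios:

```python
from itertools import permutations

# Toy Shapley-value computation for a three-ally cost-savings game.
# v(S) = savings coalition S can secure by pooling responsibilities
# (illustrative numbers, not taken from the paper).
v = {(): 0.0,
     ('A',): 0.0, ('B',): 0.0, ('C',): 0.0,
     ('A', 'B'): 4.0, ('A', 'C'): 3.0, ('B', 'C'): 2.0,
     ('A', 'B', 'C'): 9.0}

players = ('A', 'B', 'C')

def worth(coalition):
    return v[tuple(sorted(coalition))]

# Shapley value: average marginal contribution over all join orders.
shapley = dict.fromkeys(players, 0.0)
orders = list(permutations(players))
for order in orders:
    seen = []
    for p in order:
        shapley[p] += (worth(seen + [p]) - worth(seen)) / len(orders)
        seen.append(p)

print(shapley)  # each ally's share of the alliance-wide cost savings
```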


Calculating power for the comparison of dependent κ-coefficients

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 4 2003
Hung-Mo Lin
Summary. In the psychosocial and medical sciences, some studies are designed to assess the agreement between different raters and/or different instruments. Often the same sample will be used to compare the agreement between two or more assessment methods, for simplicity and to take advantage of the positive correlation of the ratings. Although sample size calculations have become an important element in the design of research projects, such methods for agreement studies are scarce. We adapt the generalized estimating equations approach for modelling dependent κ-statistics to estimate the sample size that is required for dependent agreement studies. We calculate the power based on a Wald test for the equality of two dependent κ-statistics. The Wald test statistic has a non-central χ²-distribution with a non-centrality parameter that can be estimated with minimal assumptions. The method proposed is useful for agreement studies with two raters and two instruments, and is easily extendable to multiple raters and multiple instruments. Furthermore, the method proposed allows for rater bias. Power calculations for binary ratings under various scenarios are presented. Analyses of two biomedical studies are used for illustration.
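Once the non-centrality parameter has been estimated from the GEE variance expressions, the power computation itself is a one-liner against the non-central χ² distribution. In the sketch below the non-centrality value is an illustrative assumption (7.85 is the classic value giving roughly 80% power for a 1-df test at α = 0.05):

```python
from scipy.stats import chi2, ncx2

# Power sketch for a 1-df Wald test of H0: kappa_1 = kappa_2. The
# non-centrality parameter would come from the GEE variance estimates;
# the value below is illustrative.
alpha, df, ncp = 0.05, 1, 7.85

crit = chi2.ppf(1 - alpha, df)     # rejection threshold under H0
power = ncx2.sf(crit, df, ncp)     # P(reject | H1), non-central chi^2
print(f"power = {power:.3f}")       # ~0.80 for ncp ~ 7.85
```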


Industrial Specialization, Catching-up and Labour Market Dynamics

METROECONOMICA, Issue 1 2000
Michael A. Landesmann
This paper presents a dynamic model as a heuristic tool to discuss some issues of changing industrial specialization which arise in the context of catching-up processes of (technologically) less advanced economies, and the impact which various scenarios of such catching-up processes might have on labour market dynamics in both the advanced and the catching-up economies. In analysing the evolution of international specialization, we demonstrate the twin pressures exerted upon the industrial structures of "northern" economies: competition from "type-A southern" economies, which maintain a comparative competitive strength in labour-intensive and less skill-intensive branches, and competition from "type-B catching-up" economies, whose catching-up increasingly focuses upon branches in which the initial productivity gaps, and hence the scope for catching-up, are the highest. The contrast between these two catching-up scenarios allows the explicit analysis of the implications of "comparative advantage switchovers" between northern and southern (type B) economies for labour market dynamics.


Comparing the accuracy and precision of three techniques used for estimating missing landmarks when reconstructing fossil hominin crania

AMERICAN JOURNAL OF PHYSICAL ANTHROPOLOGY, Issue 1 2009
Rudolph Neeser
Abstract Various methodological approaches have been used for reconstructing fossil hominin remains in order to increase sample sizes and to better understand morphological variation. Among these, morphometric quantitative techniques for reconstruction are increasingly common. Here we compare the accuracy of three approaches (mean substitution, thin plate splines, and multiple linear regression) for estimating missing landmarks of damaged fossil specimens. Comparisons are made varying the number of missing landmarks, sample sizes, and the species of the reference population used to perform the estimation. The testing is performed on landmark data from individuals of Homo sapiens, Pan troglodytes and Gorilla gorilla, and nine hominin fossil specimens. Results suggest that when a small, same-species fossil reference sample is available to guide reconstructions, thin plate spline approaches perform best. However, if no such sample is available, or if the species of the damaged individual is uncertain, estimates of missing morphology based on a single individual (or even a small sample) of close taxonomic affinity are less accurate than those based on a large sample of individuals drawn from more distantly related extant populations, using a technique (such as a regression method) able to leverage the information (e.g., variation/covariation patterning) contained in this large sample. Thin plate splines also show an unexpectedly large amount of error in estimating landmarks, especially over large areas. Recommendations are made for estimating missing landmarks under various scenarios. Am J Phys Anthropol 2009. © 2009 Wiley-Liss, Inc.
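The regression approach can be sketched in a few lines: regress the coordinates of the to-be-estimated landmarks on the observed ones across the reference sample, then apply the fitted model to the damaged specimen. The data below are synthetic stand-ins (real use would operate on Procrustes-superimposed coordinates), and the mean-substitution baseline is included for contrast:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic reference sample: n specimens, flattened landmark coordinates
# split into an observed block and a "missing" block to be estimated.
n, p_known, p_missing = 60, 18, 6
known = rng.normal(size=(n, p_known))
b_true = rng.normal(size=(p_known, p_missing))
missing = known @ b_true + 0.1 * rng.normal(size=(n, p_missing))

# Fit: least-squares regression of missing coordinates on observed ones
# (with an intercept), then apply the model to the damaged specimen.
x = np.column_stack([np.ones(n), known])
beta, *_ = np.linalg.lstsq(x, missing, rcond=None)

damaged_known = rng.normal(size=p_known)        # the specimen's observed landmarks
estimate = np.r_[1.0, damaged_known] @ beta     # regression estimate of the rest

# Mean-substitution baseline: just use the reference-sample average.
mean_sub = missing.mean(axis=0)
print("regression estimate:", estimate.round(2))
print("mean substitution:  ", mean_sub.round(2))
```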


The Weighted Generalized Estimating Equations Approach for the Evaluation of Medical Diagnostic Test at Subunit Level

BIOMETRICAL JOURNAL, Issue 5 2006
Carol Y. Lin
Abstract Sensitivity and specificity are common measures used to evaluate the performance of a diagnostic test. A diagnostic test is often administered at a subunit level, e.g. at the level of a vessel, ear or eye of a patient, so that treatment can be targeted at the specific subunit. It is therefore essential to evaluate the diagnostic test at the subunit level. Often patients with more negative subunit test results are less likely to receive the gold standard test than patients with more positive subunit test results. To account for this type of missing data and for the correlation between subunit test results, we propose a weighted generalized estimating equations (WGEE) approach to evaluate subunit sensitivities and specificities. A simulation study was conducted to compare the performance of the WGEE estimators with that of the weighted least squares (WLS) estimators (Barnhart and Kosinski, 2003) under a missing at random assumption. The results suggest that the WGEE estimator is consistent under various scenarios of percentage of missing data and sample size, while the WLS approach can yield biased estimators when the missing data mechanism is misspecified. We illustrate the methodology with a cardiology example. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
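The weighting idea can be demonstrated without the full WGEE machinery: if verification by the gold standard depends on the test result, a naive estimate computed on verified subunits only is biased, and inverse-probability weights remove the bias. All prevalences and probabilities in the sketch below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy illustration of verification bias and inverse-probability weighting
# (not the full WGEE estimator): positive subunits are verified by the
# gold standard more often than negative ones.
n = 200_000
disease = rng.random(n) < 0.30
test = np.where(disease, rng.random(n) < 0.85,   # true sensitivity 0.85
                         rng.random(n) < 0.10)   # true specificity 0.90

p_verify = np.where(test, 0.9, 0.2)    # verification depends on the test result
verified = rng.random(n) < p_verify
w = 1.0 / p_verify                     # inverse-probability weights

v, d, t = verified, disease, test
naive_se = (v & d & t).sum() / (v & d).sum()          # biased upward
ipw_se = (w * (v & d & t)).sum() / (w * (v & d)).sum()
print(f"naive Se: {naive_se:.3f}  IPW Se: {ipw_se:.3f}  (truth 0.850)")
```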