Considerable Gains

Selected Abstracts


A two-scale domain decomposition method for computing the flow through a porous layer limited by a perforated plate

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 6 2003
J. Dufrêche
Abstract A two-scale domain decomposition method is developed in order to study situations where the macroscopic description of a given transport process in porous media does not represent a sufficiently good approximation near singularities (holes, wells, etc.). The method is based on a domain decomposition technique with overlap. The governing equations at the scale of the microstructure are solved in the vicinity of the singularities, whereas the volume-averaged transport equations are solved at some distance from the singularities. The transfer of information from one domain to the other is performed using results of the method of volume averaging. The method is illustrated through the computation of the overall permeability of a porous layer limited by a perforated plate. As shown in the example treated, the method allows one to estimate the useful size of the microscopic region near the singularities. As illustrated in the paper, the method can lead to a considerable gain in memory requirements compared to a full direct simulation. Copyright © 2003 John Wiley & Sons, Ltd.
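As an illustration of the overlapping-decomposition idea (though not of the paper's two-scale coupling with volume averaging), the following minimal Python sketch applies an alternating Schwarz iteration to a 1-D Poisson problem on two overlapping subdomains; all function names and parameters are our own.

```python
import numpy as np

def solve_poisson_dirichlet(f, a, b, ua, ub, n):
    """Finite-difference solve of -u'' = f on [a, b] with u(a)=ua, u(b)=ub."""
    h = (b - a) / (n + 1)
    x = np.linspace(a, b, n + 2)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    rhs = np.asarray(f(x[1:-1]), dtype=float)
    rhs[0] += ua / h**2       # fold boundary values into the right-hand side
    rhs[-1] += ub / h**2
    u = np.empty(n + 2)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

def schwarz_two_domain(f, n=79, left_end=0.6, right_start=0.4, iters=30):
    """Alternating Schwarz iteration on [0, 1] with overlapping subdomains
    [0, left_end] and [right_start, 1]; boundary data for each subdomain
    solve is taken from the current global iterate."""
    xg = np.linspace(0.0, 1.0, n + 2)
    u = np.zeros_like(xg)                      # u(0) = u(1) = 0
    for _ in range(iters):
        m1 = xg <= left_end + 1e-12            # grid points of left subdomain
        x1, u1 = solve_poisson_dirichlet(
            f, 0.0, left_end, 0.0, np.interp(left_end, xg, u), m1.sum() - 2)
        u[m1] = np.interp(xg[m1], x1, u1)
        m2 = xg >= right_start - 1e-12         # right subdomain, updated data
        x2, u2 = solve_poisson_dirichlet(
            f, right_start, 1.0, np.interp(right_start, xg, u), 0.0, m2.sum() - 2)
        u[m2] = np.interp(xg[m2], x2, u2)
    return xg, u

# -u'' = 1 with homogeneous Dirichlet data has exact solution x(1 - x)/2
xg, u = schwarz_two_domain(lambda x: np.ones_like(x))
max_err = float(np.max(np.abs(u - xg * (1.0 - xg) / 2.0)))
```

With the subdomains chosen grid-aligned as above, the iterate converges to the discrete solution; the point of the paper's method is that, in higher dimensions, the expensive microscale model need only occupy the small subdomain near the singularity.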


A markup model for forecasting inflation for the euro area

JOURNAL OF FORECASTING, Issue 7 2006
Bill Russell
Abstract We develop a small model for forecasting inflation for the euro area using quarterly data over the period June 1973 to March 1999. The model is used to provide inflation forecasts from June 1999 to March 2002. We compare the forecasts from our model with those derived from six competing forecasting models, including autoregressions, vector autoregressions and Phillips-curve-based models. A considerable gain in forecasting performance is demonstrated using a relative root mean squared error criterion and the Diebold–Mariano test to make forecast comparisons. Copyright © 2006 John Wiley & Sons, Ltd.
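The forecast-comparison machinery the abstract mentions can be sketched in a few lines of Python; this is a generic illustration (the simple i.i.d.-variance form of the Diebold–Mariano statistic, with no autocorrelation correction), not the authors' code.

```python
import numpy as np

def relative_rmse(e_candidate, e_benchmark):
    """Root mean squared error of the candidate's forecast errors relative
    to the benchmark's; values below 1 favour the candidate model."""
    rmse = lambda e: np.sqrt(np.mean(np.square(e)))
    return rmse(e_candidate) / rmse(e_benchmark)

def diebold_mariano(e_candidate, e_benchmark):
    """Diebold-Mariano statistic for equal predictive accuracy under
    squared-error loss (simple variance estimate, ~N(0,1) under the null);
    large negative values favour the candidate."""
    d = np.square(e_candidate) - np.square(e_benchmark)  # loss differential
    return d.mean() / np.sqrt(d.var(ddof=1) / len(d))

# Simulated forecast errors: the candidate is genuinely more accurate
rng = np.random.default_rng(0)
e_bench = rng.normal(0.0, 1.0, size=200)
e_cand = rng.normal(0.0, 0.5, size=200)
rel = relative_rmse(e_cand, e_bench)
dm = diebold_mariano(e_cand, e_bench)
```

For multi-step-ahead forecasts the loss differential is serially correlated, so a HAC (long-run) variance estimate would replace the simple one used here.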


Constructing robust crew schedules with bicriteria optimization

JOURNAL OF MULTI-CRITERIA DECISION ANALYSIS, Issue 3 2002
Matthias Ehrgott
Abstract Optimization-based computer systems are used by many airlines to solve crew planning problems by constructing minimal-cost tours of duty. However, today airlines do not only require cost-effective solutions, but are also very interested in robust solutions. A more robust solution is understood to be one where disruptions in the schedule (due to delays) are less likely to be propagated into the future, causing delays of subsequent flights. Current scheduling systems based solely on cost do not automatically provide robust solutions. These considerations lead to a multiobjective framework, as the maximization of robustness will be in conflict with the minimization of cost. For example, a crew changing aircraft within a duty period is discouraged if inadequate ground time is provided. We develop a bicriteria optimization framework to generate Pareto optimal schedules for a domestic airline. A Pareto optimal schedule is one which does not allow an improvement in cost and robustness at the same time. We developed a method to solve the bicriteria problem, implemented it and tested it with actual airline data. Our results show that a considerable gain in robustness can be achieved with a small increase in cost. The additional cost is mainly due to an increase in overnights, which allows for a reduction of the number of aircraft changes. Copyright © 2003 John Wiley & Sons, Ltd.
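A Pareto optimal (nondominated) schedule, as defined in the abstract, is easy to illustrate; the sketch below filters hypothetical (cost, robustness) candidates and is not the authors' optimization method.

```python
def pareto_front(schedules):
    """Keep the schedules not dominated by any other, where lower cost and
    higher robustness are both preferred; `schedules` holds
    (name, cost, robustness) tuples."""
    front = []
    for name, cost, rob in schedules:
        dominated = any(
            c <= cost and r >= rob and (c < cost or r > rob)
            for _, c, r in schedules)
        if not dominated:
            front.append((name, cost, rob))
    return front

# Hypothetical candidate schedules: cost in $k, robustness in [0, 1]
candidates = [("A", 100, 0.90), ("B", 110, 0.95),
              ("C", 120, 0.93), ("D", 105, 0.80)]
front = pareto_front(candidates)  # C is beaten by B, D is beaten by A
```

On the front itself, no schedule can be improved in cost without giving up robustness, which is exactly the trade-off the paper quantifies.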


GaAs converters for high power densities of laser illumination

PROGRESS IN PHOTOVOLTAICS: RESEARCH & APPLICATIONS, Issue 4 2008
E. Oliva
Abstract Photovoltaic power converters can be used to generate electricity directly from laser light. In this paper we report the development of GaAs PV power converters with improved conversion efficiency at high power densities. The incorporation of a lateral conduction layer (LCL) on top of the window layer resulted in a considerable gain in efficiency at high illumination levels. Additional performance improvements were obtained by using a metal electrode grid design and antireflection coating optimised for monochromatic and inhomogeneous laser light. A maximum monochromatic (810 nm) optical-to-electrical conversion efficiency of 54.9% at 36.5 W/cm2 has been achieved. The characteristics of laser power converters with p/n and n/p polarity are discussed in this paper. Moreover, different materials and doping levels were applied in the LCL. The performance of these different device structures at high laser intensity is presented and discussed. It is shown that the lateral series resistance of the cell has a major impact on the overall device performance. Copyright © 2008 John Wiley & Sons, Ltd.
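Two pieces of arithmetic sit behind the reported figures; the sketch below works them through with hedged, illustrative numbers (the series-resistance values are invented for the example, not taken from the paper).

```python
def electrical_power_density(efficiency, irradiance_w_cm2):
    """Electrical output power density of a laser power converter."""
    return efficiency * irradiance_w_cm2

def fractional_resistive_loss(j_a_cm2, v_mp, r_ohm_cm2):
    """Fraction of output power dissipated in series resistance,
    I^2 R / (I V) = J R / V: it grows linearly with current density,
    which is why lateral resistance dominates at high intensity."""
    return j_a_cm2 * r_ohm_cm2 / v_mp

# 54.9% at 36.5 W/cm2 corresponds to roughly 20 W/cm2 of electrical output
p_out = electrical_power_density(0.549, 36.5)
# Doubling the illumination (hence roughly the photocurrent) doubles the
# fractional resistive loss -- illustrative numbers only
loss_lo = fractional_resistive_loss(12.0, 1.0, 0.002)
loss_hi = fractional_resistive_loss(24.0, 1.0, 0.002)
```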


On the effectiveness of runtime techniques to reduce memory sharing overheads in distributed Java implementations

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2008
Marcelo Lobosco
Abstract Distributed Java virtual machine (dJVM) systems enable concurrent Java applications to run transparently on clusters of commodity computers. This is achieved by supporting Java's shared-memory model over multiple JVMs distributed across the cluster's computer nodes. In this work, we describe and evaluate selective dynamic diffing and lazy home allocation, two new runtime techniques that enable dJVMs to efficiently support memory sharing across the cluster. Specifically, the two proposed techniques, used either in isolation or in combination, can reduce the overheads due to message traffic, extra memory space, and high latency of remote memory accesses that such dJVM systems incur in implementing their memory-coherence protocol. In order to evaluate the performance-related benefits of dynamic diffing and lazy home allocation, we implemented both techniques in Cooperative JVM (CoJVM), a basic dJVM system we developed in previous work. We then carried out performance comparisons between the basic CoJVM and modified CoJVM versions using our proposed techniques for five representative concurrent Java applications (matrix multiply, LU, Radix, fast Fourier transform, and SOR). Our experimental results showed that dynamic diffing and lazy home allocation significantly reduced memory sharing overheads. The reduction resulted in considerable gains in the CoJVM system's performance, ranging from 9% up to 20% in four out of the five applications, with resulting speedups varying from 6.5 to 8.1 for an 8-node cluster of computers. Copyright © 2007 John Wiley & Sons, Ltd.
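Diff-based coherence protocols of the kind the abstract builds on can be sketched generically: snapshot a "twin" of a shared page before local writes, then ship only the runs of words that changed. This is an illustration of the general technique, not CoJVM's selective dynamic diffing itself.

```python
def make_twin(page):
    """Snapshot a shared page before local writes."""
    return list(page)

def diff(twin, page):
    """Encode the modified words as (offset, values) runs; only these runs,
    not the whole page, need to cross the network."""
    runs, i = [], 0
    while i < len(page):
        if page[i] != twin[i]:
            j = i
            while j < len(page) and page[j] != twin[j]:
                j += 1
            runs.append((i, page[i:j]))
            i = j
        else:
            i += 1
    return runs

def apply_diff(replica, runs):
    """Patch another node's replica of the page with the received runs."""
    for off, vals in runs:
        replica[off:off + len(vals)] = vals
    return replica

page = [0] * 8
twin = make_twin(page)
page[2], page[3], page[6] = 5, 6, 9          # local writes
runs = diff(twin, page)                      # two runs instead of 8 words
replica = apply_diff([0] * 8, runs)
```

The "selective" part of the paper's technique is deciding at runtime which pages are worth twinning and diffing at all, which this sketch does not attempt.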


Two-stage detection of partitioned random CDMA

EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 5 2008
Lukasz Krzymien
Random Code Division Multiple Access (CDMA) with low-complexity two-stage joint detection/decoding is considered. A sequence partitioning approach is used for modulation, where every spreading sequence is divided into M sections (partitions) which are interleaved prior to transmission. This setup, called partitioned CDMA, can be understood as a generalisation of (chip) interleave division multiple access (IDMA). An analysis of a low-complexity iterative cancellation receiver is presented for arbitrary received power distributions. It is shown that for equal-rate and equal-power users the asymptotic performance of partitioned CDMA is equal to the performance of CDMA with optimal a posteriori probability (APP) detection for system loads K/N < 1.49. Effects of asynchronous signal transmission are quantified for standard pulse shaping filters, and it is shown that the signal-to-noise ratios achievable in an asynchronous system are improved with respect to fully synchronous transmission. The effect of unequal received powers is examined, and considerable gains in performance are obtained by judicious choices of power distributions. For certain power distributions, partitioned CDMA with iterative detection can achieve arbitrary system loads; that is, detection is no longer fundamentally interference limited. The practical near-far resistance of the proposed system is illustrated using an example of a receiver with a circular receive footprint and uniformly distributed transmitters (single-cell system). Copyright © 2008 John Wiley & Sons, Ltd.
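The partitioning step described above can be sketched directly. The toy Python below modulates a few symbols, cuts each spread symbol into M sections, interleaves the sections, and shows that a receiver knowing the interleaver can undo the operation (single user, noiseless; the iterative multiuser cancellation receiver is not reproduced here, and all names are our own).

```python
import numpy as np

def partitioned_cdma_tx(symbols, sequence, M, interleaver):
    """Partitioned-CDMA modulation: spread each symbol with the length-N
    sequence, cut every spread symbol into M sections of N/M chips, and
    permute the sections across the frame before transmission."""
    sections = []
    for s in symbols:
        sections.extend(np.split(s * sequence, M))
    return np.concatenate([sections[k] for k in interleaver])

seq = np.array([1, -1, 1, 1, -1, 1, -1, -1])   # spreading sequence, N = 8
syms = np.array([1.0, -1.0, 1.0])
M = 4
rng = np.random.default_rng(0)
pi = rng.permutation(len(syms) * M)            # user-specific section interleaver
tx = partitioned_cdma_tx(syms, seq, M, pi)

# Receiver side (single user, noiseless): de-interleave, then despread
inv = np.argsort(pi)
rx_sections = np.split(tx, len(pi))
restored = np.concatenate([rx_sections[k] for k in inv])
est = restored.reshape(len(syms), len(seq)) @ seq / len(seq)
```

With M = N this degenerates to chip-level interleaving, i.e. IDMA, which is the generalisation the abstract points out.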


Targeting fuel poverty in England: is the government getting warm?

FISCAL STUDIES, Issue 3 2002
Tom Sefton
Abstract This paper examines the cost-effectiveness of the new Home Energy Efficiency Scheme (HEES), a key component of the UK government's Fuel Poverty Strategy. The impact on the fuel poverty gap is simulated using data on a large-scale and representative sample of households in England. The scope for improving the scheme's targeting is considered by examining the optimal allocation of grants between households. The extent to which these potential gains might be achieved in practice using pragmatic criteria for distributing grants, and the implications of taking into account the dynamics of fuel poverty and the self-selection of grant applicants, are also explored. The current scheme is unlikely to have a very significant impact on fuel poverty, and considerable gains could be achieved by redesigning HEES, although the paper also highlights the difficulties involved in efficient targeting, including some additional complications not encountered in the analysis of more traditional anti-poverty measures.
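The "optimal allocation of grants" question has a simple greedy caricature: fund the measures that cut the most fuel-poverty gap per pound until the budget runs out. The numbers and the ranking rule below are ours, purely for illustration; the paper's simulation is far richer.

```python
def allocate_grants(households, budget):
    """Greedy targeting: rank households by fuel-poverty-gap reduction per
    pound of grant and fund down the ranking while the budget allows.
    `households` holds (name, grant_cost, gap_reduction) tuples."""
    ranked = sorted(households, key=lambda h: h[2] / h[1], reverse=True)
    funded, spent = [], 0.0
    for name, cost, _ in ranked:
        if spent + cost <= budget:
            funded.append(name)
            spent += cost
    return funded, spent

# Hypothetical households: (name, grant cost in pounds, gap reduction in pounds)
households = [("A", 1000.0, 400.0), ("B", 500.0, 300.0), ("C", 2000.0, 500.0)]
funded, spent = allocate_grants(households, budget=1500.0)
```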


Incorporating covariates in mapping heterogeneous traits: a hierarchical model using empirical Bayes estimation

GENETIC EPIDEMIOLOGY, Issue 7 2007
Swati Biswas
Abstract Complex genetic traits are inherently heterogeneous, i.e., they may be caused by different genes, or non-genetic factors, in different individuals. So, for mapping genes responsible for these diseases using linkage analysis, heterogeneity must be accounted for in the model. Heterogeneity across different families can be modeled using a mixture distribution by letting each family have its own heterogeneity parameter denoting the probability that its disease-causing gene is linked to the marker map under consideration. A substantial gain in power is expected if covariates that can discriminate between the families of linked and unlinked types are incorporated in this modeling framework. To this end, we propose a hierarchical Bayesian model, in which the families are grouped according to various (categorized) levels of covariate(s). The heterogeneity parameters of families within each group are assigned a common prior, whose parameters are further assigned hyper-priors. The hyper-parameters are obtained by utilizing the empirical Bayes estimates. We also address related issues such as evaluating whether the covariate(s) under consideration are informative and grouping of families. We compare the proposed approach with one that does not utilize covariates and show that our approach leads to considerable gains in power to detect linkage and in precision of interval estimates through various simulation scenarios. An application to the asthma datasets of Genetic Analysis Workshop 12 also illustrates this gain in a real data analysis. Additionally, we compare the performances of microsatellite markers and single nucleotide polymorphisms for our approach and find that the latter clearly outperforms the former. Genet. Epidemiol. 2007. © 2007 Wiley-Liss, Inc.
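The empirical-Bayes step can be caricatured in a few lines: fit a Beta prior to the per-family heterogeneity estimates within a covariate group by the method of moments, then shrink each family's estimate toward the group. This is a schematic of the idea under invented numbers, not the authors' full hierarchical model.

```python
import numpy as np

def eb_beta_prior(alpha_hats):
    """Method-of-moments fit of a Beta(a, b) prior to per-family estimates
    of the heterogeneity (linkage) parameter within one covariate group."""
    m, v = np.mean(alpha_hats), np.var(alpha_hats)
    common = m * (1.0 - m) / v - 1.0      # requires v < m(1 - m)
    return m * common, (1.0 - m) * common

def shrink(alpha_hat, n_eff, a, b):
    """Beta-binomial-style shrinkage of a family's estimate toward the
    group prior, with a nominal effective sample size n_eff."""
    return (alpha_hat * n_eff + a) / (n_eff + a + b)

group = [0.2, 0.3, 0.4, 0.5]              # estimates within one covariate level
a, b = eb_beta_prior(group)
prior_mean = a / (a + b)                  # recovers the group mean, 0.35
posterior = shrink(0.8, n_eff=10, a=a, b=b)   # pulled toward the group
```

Families in a covariate group that genuinely discriminates linked from unlinked types borrow strength from each other, which is where the reported power gain comes from.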


Updating ARMA predictions for temporal aggregates

JOURNAL OF FORECASTING, Issue 4 2004
Sergio G. Koreisha
Abstract This article develops and extends previous investigations on the temporal aggregation of ARMA predictions. Given a basic ARMA model for disaggregated data, two sets of predictors may be constructed for future temporal aggregates: predictions based on models utilizing aggregated data, or on models constructed from disaggregated data for which forecasts are updated as soon as new information becomes available. We show that considerable gains in efficiency based on mean-square-error-type criteria can be obtained for short-term predictions when using models based on updated disaggregated data. However, as the prediction horizon increases, the gain from using updated disaggregated data diminishes substantially. In addition to theoretical results associated with the forecast efficiency of ARMA models, we also illustrate our findings with two well-known time series. Copyright © 2004 John Wiley & Sons, Ltd.
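The updating idea is easy to demonstrate with a disaggregated AR(1): once the first sub-period of a quarter is observed, it enters the quarterly forecast exactly and the remaining months are forecast from the newer origin. The toy Monte Carlo below (our own setup, not the article's) shows the resulting drop in mean squared error.

```python
import numpy as np

def aggregate_forecast(y_last, phi, m):
    """Forecast of the next m-period temporal aggregate from the
    disaggregated zero-mean AR(1) model y_t = phi*y_{t-1} + e_t."""
    return sum(phi**h for h in range(1, m + 1)) * y_last

def updated_aggregate_forecast(observed, phi, m):
    """Same aggregate once the first len(observed) sub-periods are known:
    known values enter exactly, the rest are forecast from the newest one."""
    k = len(observed)
    tail = sum(phi**h for h in range(1, m - k + 1)) * observed[-1]
    return sum(observed) + tail

# Toy Monte Carlo: quarterly (m = 3) aggregates of a monthly AR(1)
rng = np.random.default_rng(42)
phi, m, T = 0.8, 3, 2000
y = np.zeros(T * m + 1)
for t in range(1, T * m + 1):
    y[t] = phi * y[t - 1] + rng.normal()
err_fixed, err_upd = [], []
for q in range(T):
    base = q * m                      # last observation before the quarter
    actual = y[base + 1: base + m + 1].sum()
    err_fixed.append(actual - aggregate_forecast(y[base], phi, m))
    err_upd.append(actual - updated_aggregate_forecast([y[base + 1]], phi, m))
mse_fixed = np.mean(np.square(err_fixed))
mse_upd = np.mean(np.square(err_upd))
```

As the abstract notes, the advantage shrinks at longer horizons: the newly observed sub-periods make up an ever smaller share of the aggregate being forecast.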