New Scheme (new + scheme)
Selected Abstracts

Numerical simulation of the miscible displacement of radionuclides in a heterogeneous porous medium
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 10 2005. C.-H. Bruneau
Abstract: The aim of this paper is to model and simulate the displacement of radioactive elements in a saturated heterogeneous porous medium. New schemes are proposed to accurately solve the convection-diffusion-reaction equations, including nonlinear terms in the time derivative. Numerical tests show the stability and robustness of these schemes through strong heterogeneities of the medium. Finally, the COUPLEX 1 benchmark, concerning the far-field simulation of a flow polluted by a leak from a nuclear waste disposal site, is performed and compared with the results available in the literature. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Geodesic-Controlled Developable Surfaces for Modeling Paper Bending
COMPUTER GRAPHICS FORUM, Issue 3 2007. Pengbo Bo
Abstract: We present a novel and effective method for modeling a developable surface to simulate paper bending in interactive and animation applications. The method exploits the representation of a developable surface as the envelope of rectifying planes of a curve in 3D, which is therefore necessarily a geodesic on the surface. We manipulate the geodesic to provide intuitive shape control for modeling paper bending. Our method ensures a natural continuous isometric deformation from a piece of bent paper to its flat state without any stretching. Test examples show that the new scheme is fast, accurate, and easy to use, thus providing an effective approach to interactive paper bending. We also show how to handle non-convex piecewise smooth developable surfaces. [source]

Tour Into the Picture using a Vanishing Line and its Extension to Panoramic Images
COMPUTER GRAPHICS FORUM, Issue 3 2001. Hyung Woo Kang
Tour into the picture (TIP), proposed by Horry et al. [13], is a method for generating a sequence of walk-through images from a single reference picture (or image). By navigating a 3D scene model constructed from the picture, TIP produces convincing 3D effects. Assuming that the picture has one vanishing point, they proposed a scene modeling scheme called the spidery mesh. However, this scheme requires major modification when the picture contains multiple vanishing points or does not have any well-defined vanishing point. Moreover, the spidery mesh is hard to generalize to other types of images such as panoramic images. In this paper, we propose a new scheme for TIP which is based on a single vanishing line instead of a vanishing point. Based on projective geometry, our scheme is simple and yet general enough to address the problems faced by the previous method. We also show that our scheme can be naturally extended to a panoramic image. [source]

Exploring the performance of massively multithreaded architectures
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 5 2010. Shahid Bokhari
Abstract: We present a new scheme for evaluating the performance of multithreaded computers and demonstrate its application to the Cray MTA-2 and XMT supercomputers. Our scheme is based on the concept of clock cycles per element, plotted against both problem size and the number of processors. This scheme clearly shows whether an implementation has achieved its asymptotic efficiency and is more general than (but includes) the commonly used speedup metric. It permits the discovery of any imperfections in both the software and the hardware, and is expected to permit a unified comparison of many different parallel architectures. Measurements on a number of well-known parallel algorithms, ranging from matrix multiply to quicksort, are presented for the MTA-2 and XMT and highlight some interesting differences between these machines. The performance of sequence alignment using dynamic programming is evaluated on the MTA-2, XMT, IBM x3755 and SGI Altix 350, and provides a useful comparison of the capabilities of the Cray machines with more conventional shared-memory architectures. Copyright © 2009 John Wiley & Sons, Ltd. [source]
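To make the cycles-per-element metric in the abstract above concrete, the short Python sketch below computes cycles per element from a measured run time, problem size, and clock rate, and derives the conventional speedup from the same timings. The function names, the clock rate, and the commented example figures are illustrative assumptions, not measurements from the paper.

# Illustrative sketch (not the paper's code): cycles-per-element metric.
# For a run that processes n elements in t seconds on a machine with clock
# rate f (Hz), cycles per element is f * t / n.  Plotting this value against
# problem size and processor count shows whether an implementation reaches
# its asymptotic efficiency; speedup falls out of the same measurements.

def cycles_per_element(t_seconds: float, n_elements: int, clock_hz: float) -> float:
    """Clock cycles spent per element processed."""
    return clock_hz * t_seconds / n_elements

def speedup(t_serial: float, t_parallel: float) -> float:
    """Conventional speedup metric, recoverable from the same timings."""
    return t_serial / t_parallel

# Example with placeholder timings (purely illustrative):
# cpe_1p  = cycles_per_element(t_seconds=4.2, n_elements=10**7, clock_hz=220e6)
# cpe_16p = cycles_per_element(t_seconds=0.3, n_elements=10**7, clock_hz=220e6)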
Distributed loop-scheduling schemes for heterogeneous computer systems
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 7 2006. Anthony T. Chronopoulos
Abstract: Distributed computing systems are a viable and less expensive alternative to parallel computers. However, a serious difficulty in concurrent programming of a distributed system is how to deal with scheduling and load balancing of such a system, which may consist of heterogeneous computers. Some distributed scheduling schemes suitable for parallel loops with independent iterations on heterogeneous computer clusters have been designed in the past. In this work we study self-scheduling schemes for parallel loops with independent iterations which have previously been applied to multiprocessor systems. We extend one important scheme of this type to a distributed version suitable for heterogeneous distributed systems. We implement our new scheme on a network of computers and make performance comparisons with other existing schemes. Copyright © 2005 John Wiley & Sons, Ltd. [source]
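The abstract above does not spell out which self-scheduling scheme is extended, so the sketch below only illustrates the general idea of loop self-scheduling on heterogeneous workers: idle workers repeatedly grab a shrinking chunk of the remaining independent iterations, so faster machines naturally take more work. The chunk-sizing rule and the thread-based worker model are assumptions for illustration, not the paper's algorithm.

# Illustrative guided-style loop self-scheduling (an assumption, not
# necessarily the scheme extended in the paper).  Idle workers take a chunk
# of the remaining iterations; chunk size shrinks as work runs out, so fast
# and slow machines tend to finish at about the same time.
import threading

def self_schedule(total_iters, body, n_workers, min_chunk=1):
    next_index = 0                      # first unassigned iteration
    lock = threading.Lock()

    def worker():
        nonlocal next_index
        while True:
            with lock:
                remaining = total_iters - next_index
                if remaining <= 0:
                    return
                chunk = max(min_chunk, remaining // (2 * n_workers))
                start = next_index
                next_index += chunk
            for i in range(start, start + chunk):
                body(i)                 # independent loop iteration

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Usage: self_schedule(10**6, lambda i: None, n_workers=8)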
The Wealth of Nations at the Turn of the Millennium: A Classification System Based on the International Division of Labor
ECONOMIC GEOGRAPHY, Issue 2 2002. Wolfgang Hoeschele
Abstract: Simple dichotomies, such as First World-Third World, developed-developing countries, and north-south, are no longer adequate for understanding the complex economic geography of the world. Even the division into core, semi-periphery, and periphery groups diverse economies into an excessively limited number of categories. It is time to develop a new scheme that better classifies the countries of the world into coherent groups. This article constructs a new classification based on the international division of labor, using three fundamental dimensions. The first dimension is the success of the industrial and services economy in providing employment to the people within a country. The second is the export orientation of a country, concentrating either on natural-resource-intensive products (e.g., agricultural produce, food and beverages, minerals and metals) or on core industrial manufactures (from textiles to computers). The third is the presence of control functions in the world economy: countries that include the headquarters of major firms and are the source regions of major flows of foreign direct investments. The combination of these three dimensions leads to the creation of eight basic categories. I introduce a terminology that combines these basic categories into larger groups, depending on the context. This new conceptual scheme should facilitate a more informed analysis of world economic, political, social, and environmental affairs. [source]

A method of new filter design based on the co-occurrence histogram
ELECTRICAL ENGINEERING IN JAPAN, Issue 1 2009. Takayuki Fujiwara
Abstract: We have proposed that the co-occurrence frequency image (CFI), based on the co-occurrence frequency histogram of the gray values of an image, can be used in a new scheme for image feature extraction. This paper proposes new enhancement filters to achieve sharpening and smoothing of images. These filters are very similar in result but quite different in process from those which have been used previously. Thus, we show the possibility of a new paradigm for basic image enhancement filters making use of the CFI. © 2008 Wiley Periodicals, Inc. Electr Eng Jpn, 166(1): 36-42, 2009; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/eej.20699 [source]

Variable smoothing in Bayesian intrinsic autoregressions
ENVIRONMETRICS, Issue 8 2007. Mark J. Brewer
Abstract: We introduce an adapted form of the Markov random field (MRF) for Bayesian spatial smoothing with small-area data. This new scheme allows the amount of smoothing to vary in different parts of a map by employing area-specific smoothing parameters, related to the variance of the MRF. We take an empirical Bayes approach, using variance information from a standard MRF analysis to provide prior information for the smoothing parameters of the adapted MRF. The scheme is shown to produce proper posterior distributions for a broad class of models. We test our method on both simulated and real data sets, and for the simulated data sets, the new scheme is found to improve modelling of both slowly-varying levels of smoothness and discontinuities in the response surface. Copyright © 2007 John Wiley & Sons, Ltd. [source]

A practical grid-based method for tracking multiple refraction and reflection phases in three-dimensional heterogeneous media
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2006. M. De Kool
Summary: We present a practical grid-based method in 3-D spherical coordinates for computing multiple phases comprising any number of reflection and transmission branches in heterogeneous layered media. The new scheme is based on a multistage approach which treats each layer that the wave front enters as a separate computational domain. A finite-difference eikonal solver known as the fast-marching method (FMM) is reinitialized at each interface to track the evolving wave front as either a reflection back into the incident layer or a transmission through to the adjacent layer. Unlike the standard FMM, which only finds first arrivals, this multistage approach can track those later-arriving phases explicitly caused by the presence of discontinuities. Notably, the method does not require an irregular mesh to be constructed in order to connect interface nodes to neighbouring velocity nodes which lie on a regular grid. To improve accuracy, local grid refinement is used in the neighbourhood of a source point where wave front curvature is high. The method also provides a way to trace reflections from an interface that are not the first arrival (e.g. the global PP phase). These are computed by initializing the multistage FMM from both the source and receiver, propagating the two wave fronts to the reflecting interface, and finding stationary points of the sum of the two traveltime fields on the reflecting interface. A series of examples is presented to test the efficiency, accuracy and robustness of the new scheme. As well as efficiently computing various global phases to an acceptable accuracy through the ak135 model, we also demonstrate the ability of the scheme to track complex crustal phases that may be encountered in coincident reflection, wide-angle reflection/refraction or local earthquake surveys. In one example, a variety of phases are computed in the presence of a realistic subduction zone, which includes several layer pinch-outs and a subducting slab. Our numerical tests show that the new scheme is a practical and robust alternative to conventional ray tracing for finding various phases in layered media at a variety of scales. [source]
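To make the eikonal/fast-marching machinery mentioned above concrete, here is a minimal single-stage, first-arrival fast-marching solver on a 2-D Cartesian grid; the paper's multistage variant, spherical coordinates, interface handling, and local refinement are beyond a short sketch. The grid, the slowness field, and the source at node (0, 0) are assumptions for illustration.

# Minimal first-arrival fast-marching method (FMM) on a 2-D grid.  The
# paper's scheme reinitializes a solver like this at every interface to
# follow reflected and transmitted branches; only the basic first-arrival
# solver is sketched here.
import heapq
import math

def fast_march(slowness, h):
    """Travel times from node (0, 0) through a slowness grid with spacing h."""
    ny, nx = len(slowness), len(slowness[0])
    INF = math.inf
    T = [[INF] * nx for _ in range(ny)]
    accepted = [[False] * nx for _ in range(ny)]
    T[0][0] = 0.0
    heap = [(0.0, 0, 0)]

    def update(j, i):
        # Smallest neighbouring travel time along each grid axis.
        a = min(T[j][i - 1] if i > 0 else INF, T[j][i + 1] if i < nx - 1 else INF)
        b = min(T[j - 1][i] if j > 0 else INF, T[j + 1][i] if j < ny - 1 else INF)
        f = h * slowness[j][i]
        if a < INF and b < INF and abs(a - b) < f:
            t = 0.5 * (a + b + math.sqrt(2.0 * f * f - (a - b) ** 2))  # 2-D update
        else:
            t = min(a, b) + f                                          # one-sided update
        if t < T[j][i]:
            T[j][i] = t
            heapq.heappush(heap, (t, j, i))

    while heap:
        t, j, i = heapq.heappop(heap)
        if accepted[j][i]:
            continue
        accepted[j][i] = True
        for dj, di in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            jj, ii = j + dj, i + di
            if 0 <= jj < ny and 0 <= ii < nx and not accepted[jj][ii]:
                update(jj, ii)
    return T

# Usage: T = fast_march([[1.0] * 50 for _ in range(50)], h=1.0)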
Quasi-wavelet solution of diffusion problems
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 12 2004. Tang Jiashi
Abstract: A new method, the quasi-wavelet method, is introduced for solving partial differential equations of diffusion, which are important to chemical and mechanical engineering. A new scheme for the extension of boundary conditions is proposed. The quasi-wavelet method is used to discretize the spatial derivatives, while the Runge-Kutta scheme is employed for time advancement. The problems of particle diffusion in an electrochemical reaction and temperature diffusion in plates are studied. The quasi-wavelet solution of the former problem is compared with that of a finite difference method; the solution of the latter problem is calibrated against the analytical solution. Numerical results indicate that the quasi-wavelet approach is very robust and efficient for diffusion problems. Copyright © 2004 John Wiley & Sons, Ltd. [source]

A new scheme for designing the penalty factor in 3-D penalty-equilibrating mixed elements
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 6 2004. Yan Ping Cao
Abstract: In this paper, a new scheme for designing the penalty factor in 3-D penalty-equilibrating mixed elements based on the Hu-Washizu three-field variational functional is proposed to improve the performance of the elements when applied to beam, plate and shell structures. In order to construct this new scheme, the role played by the penalty factor is first discussed in detail by comparing it with the selective reduced factor designed by Sze for overcoming the so-called 'trapezoid locking'. The reason for the poor performance of the penalty-equilibrating element with distorted elemental geometry is investigated thoroughly. Furthermore, the penalty factor is designed to alleviate the influence of false strain/stress in elements by considering the geometrical characteristics of beam, plate and shell structures. The new scheme is applied to the penalty-equilibrating 3-D mixed element proposed previously by the present authors. Some challenging numerical examples are selected to demonstrate the effectiveness of the present approach. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Conjugate filter approach for shock capturing
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 2 2003. Yun Gu
Abstract: This paper introduces a new scheme for numerical computations involving shock waves. The essence of the scheme is to adaptively apply a conjugate low-pass filter to effectively remove the accumulated numerical errors produced by a set of high-pass filters. The advantages of using such an adaptive algorithm are its controllable accuracy, relatively low cost and easy implementation. Numerical examples in one and two space dimensions are presented to illustrate the proposed scheme. Copyright © 2003 John Wiley & Sons, Ltd. [source]
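The abstract above gives only the idea of pairing high-pass error "sensors" with a conjugate low-pass filter, so the sketch below is a generic adaptive filtering step in that spirit, not the paper's actual filters: a high-pass residual flags cells near a discontinuity, and only those cells are smoothed by a low-pass stencil. The stencil weights and the threshold are assumptions.

# Generic adaptive low-pass filtering step in the spirit of a conjugate
# filter approach (illustrative stencils, not the paper's filters).
def adaptive_filter(u, threshold=0.1):
    """Smooth a 1-D solution only where a high-pass sensor flags oscillations."""
    n = len(u)
    out = list(u)
    for i in range(1, n - 1):
        high_pass = u[i - 1] - 2.0 * u[i] + u[i + 1]   # detects sharp variation
        if abs(high_pass) > threshold:
            # Low-pass stencil removes the flagged high-frequency error locally.
            out[i] = 0.25 * u[i - 1] + 0.5 * u[i] + 0.25 * u[i + 1]
    return out

# Usage: u_filtered = adaptive_filter(u_after_time_step, threshold=0.05)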
A new algorithm of time stepping in the non-linear dynamic analysis
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 9 2001. Yang Haitian
Abstract: This paper presents a new scheme of time stepping for solving non-linear dynamic problems. By expanding variables in a discretized time interval, FEM-based recurrent formulae are derived, leading to a self-adaptive algorithm for different sizes of time steps. No iteration is required for the non-linear solutions. Numerical validation shows satisfactory results. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Compression of time-generated matrices in two-dimensional time-domain elastodynamic BEM analysis
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 8 2004. D. Soares Jr
Abstract: This paper describes a new scheme to improve the efficiency of time-domain BEM algorithms. The discussion is focused on the two-dimensional elastodynamic formulation; however, the ideas presented apply equally to any step-by-step convolution-based algorithm whose kernels decay as time increases. The algorithm presented interpolates the time-domain matrices generated along the time-stepping process for time-steps sufficiently far from the current time. Two interpolation procedures are considered here (a large number of alternative approaches is possible): Chebyshev-Lagrange polynomials and linear. A criterion to indicate the discrete time at which interpolation should start is proposed. Two numerical examples and conclusions are presented at the end of the paper. Copyright © 2004 John Wiley & Sons, Ltd. [source]

A freeform shape optimization of complex structures represented by arbitrary polygonal or polyhedral meshes
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 15 2004. Jie Shen
Abstract: In this paper we propose a new scheme for freeform shape optimization on arbitrary polygonal or polyhedral meshes. The approach consists of three main steps: (1) surface partitioning of polygonal meshes into different patches; (2) a new freeform perturbation scheme using the Cox-de Boor basis functions over arbitrary polygonal meshes, which supports multi-resolution shape optimization and does not require CAD information; (3) freeform shape optimization of arbitrary polygonal or polyhedral meshes. Numerical experiments indicate the effectiveness of the proposed approach. Copyright © 2004 John Wiley & Sons, Ltd. [source]

An enhanced polygonal finite-volume method for unstructured hybrid meshes
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 1 2007. Hyung Taek Ahn
Abstract: Irregular hybrid meshes may excessively distort the node-dual finite-volume discretization. A new scheme is formulated that uses a different type of polygonal control volume. Superior stability of the polygonal scheme over the conventional node-dual scheme is demonstrated on representative irregular hybrid meshes for incompressible viscous flow past a circular cylinder. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Convergence of control performance by unfalsification of models - levels of confidence
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 5 2001. S. M. Veres
Abstract: A general framework is introduced for iterative/adaptive controller design schemes based on model unfalsification. An important feature of the schemes is their convergence to near the best possible controller given a set of model and controller structures. The problem of stability-assured controller tuning is examined through unfalsified Riemannian bands of the Nyquist plot. Instability-tolerant H∞- and l1-norm-based controller tuning schemes are introduced. Computational problems are discussed and a simulation is used to illustrate the new scheme. Copyright © 2001 John Wiley & Sons, Ltd. [source]
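As a heavily simplified illustration of design by model unfalsification described above (not the paper's algorithm, which works with Riemannian bands of the Nyquist plot and H∞/l1 criteria): candidate models are discarded as soon as recorded input/output data contradict them beyond a bound, and a controller is then tuned against the surviving set. The first-order model class, the error bound, and the tuning rule are assumptions.

# Simplified model-unfalsification loop (illustrative only).  Each candidate
# is a first-order ARX model y[k] = a*y[k-1] + b*u[k-1]; a candidate is
# falsified when its one-step prediction error on recorded data exceeds eps.
def unfalsified_models(candidates, u, y, eps):
    survivors = []
    for a, b in candidates:
        worst = max(abs(y[k] - (a * y[k - 1] + b * u[k - 1]))
                    for k in range(1, len(y)))
        if worst <= eps:                 # still consistent with the data
            survivors.append((a, b))
    return survivors

# A controller would then be tuned for the worst case over `survivors`,
# e.g. picking the gain that minimises the largest predicted tracking error.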
FLSAC: A new scheme to defend against greedy behavior in wireless mesh networks
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 10 2009. Soufiene Djahel
Abstract: The most commonly used medium access mechanism in wireless mesh networks is based on the CSMA/CA protocol. This protocol properly schedules access to the medium among all the competing nodes. However, in a hostile environment such as a wireless mesh network (WMN), selfish or greedy nodes may decline to follow the protocol rules in order to increase their bandwidth share at the expense of the well-behaving nodes. In this paper, we focus on such misbehavior, and in particular on the adaptive greedy misbehavior of a node in a WMN environment. In such an environment, wireless nodes compete to gain access to the medium and communicate with a mesh router (MR). In this case, a greedy node may violate the protocol rules in order to gain extra bandwidth share at the expense of its neighbors. To avoid detection, this node may adopt different techniques and switch dynamically between them. To counter such misbehavior, we propose a fuzzy logic-based detection scheme. This scheme, dubbed FLSAC, is implemented in the MR/gateway to monitor the behavior of the attached wireless nodes and report any deviation from the proper use of the protocol. Simulation results show the robustness of the proposed FLSAC scheme and its ability to quickly detect and identify any adaptive cheater. Copyright © 2009 John Wiley & Sons, Ltd. [source]
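The abstract above names fuzzy logic as the detection engine but not the membership functions or rule base, so the following is a generic fuzzy-scoring sketch of the idea: the mesh router turns two observed symptoms of greed (unusually small average backoff, unusually large share of the channel) into membership degrees and combines them into a single misbehavior score. All membership breakpoints and the combination rule are assumptions, not FLSAC's actual rules.

# Generic fuzzy-logic misbehaviour score (illustrative membership functions).
def ramp(x, lo, hi):
    """Membership rising linearly from 0 at `lo` to 1 at `hi`."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def misbehaviour_score(avg_backoff_slots, channel_share):
    # Degree to which the node's backoff looks "too small" (greedy symptom 1).
    backoff_small = 1.0 - ramp(avg_backoff_slots, 4.0, 16.0)
    # Degree to which its share of the medium looks "too large" (symptom 2).
    share_large = ramp(channel_share, 0.2, 0.5)
    # Simple conjunction of the two symptoms; the real rule base differs.
    return min(backoff_small, share_large)

# Usage: flag the node if misbehaviour_score(avg_backoff, share) > 0.6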
Space-time ring-TCM codes with CPM based on the decomposed model for transmission over Rayleigh fading channels
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 5 2006. A. Pereira
Abstract: Space-time (ST) coding is a proven technique for achieving high data rates in 3G mobile systems that combines coding, modulation and multiple transmitters and receivers. A novel algorithm is proposed for ST ring trellis-coded modulation (ST-RTCM) systems with continuous-phase modulation (CPM) when the channel coefficients are known to the receiver. This algorithm is based on the CPM decomposed model, which exploits the memory properties of this modulation method, resulting in a straightforward implementation of joint ST coding and CPM that is particularly suitable for ring codes. This new scheme is used to investigate the performance of the delay diversity code with CPM over slow Rayleigh fading channels, in particular with MSK, one of the most widely used continuous-phase modulation methods. Furthermore, a feedback version of delay diversity allowed by the decomposition is tested in 1REC and 1RC systems. This feedback configuration is seen to provide good results at low signal-to-noise ratios. Simulation results are also provided for multilevel ST-RTCM codes that achieve a higher throughput than MSK-coded systems. Additionally, the serial concatenation of an outer Reed-Solomon code with an ST-RTCM code is shown; this combination further reduces the error probability and achieves even more reliable communications. Copyright © 2005 John Wiley & Sons, Ltd. [source]

An adaptive path routing scheme for satellite IP networks
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 1 2003. Jing Chen
Abstract: Mobile satellites can be considered a promising solution for the global IP network. In order to provide quality of service (QoS) in future networks, mobile satellites can be integrated with the asynchronous transfer mode (ATM) to switch IP datagrams in space. For such a network, new and sophisticated routing and handoff algorithms are essential. In this paper, a new scheme called the adaptive path routing scheme (APRS) is proposed. It is shown that APRS can provide superior performance for routing and handoff in mobile satellite networks compared with conventional schemes. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Review of the UK NEQAS (H) digital morphology pilot scheme for continuing professional development accessed via the internet
INTERNATIONAL JOURNAL OF LABORATORY HEMATOLOGY, Issue 5 2008. M. L. Brereton
Summary: UK NEQAS (H) developed and instigated a pilot scheme for digital morphology, which was accessed by participants over the internet in order to assess the viability of using high-quality images as an educational tool for continuing professional development. The pilot scheme was trialled over a 2-year period with eight releases totalling 16 morphology cases. Digital images allowed participating individuals to examine and comment on exactly the same cells and compare their findings with those of other participants, consensus data from traditional glass slide surveys, and expert opinion. Feedback from participants on their experience was then relayed back to the development team by UK NEQAS (H) in order to drive the educational format and to ensure that any new scheme would meet the requirements of the users. [source]

A dynamic key management solution to access hierarchy
INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 6 2007. Xukai Zou
Hierarchical access control (HAC) has been a fundamental problem in computer and network systems. Since Akl and Taylor proposed the first HAC scheme based on number theory in 1983, cryptographic key management techniques for HAC have appeared as a new and promising class of solutions to the HAC problem. Many cryptographic HAC schemes have been proposed in the past two decades. One common feature of these schemes is that they basically limit dynamic operations to the node level. In this paper, by introducing the innovative concept of an 'access polynomial' and representing a key value as the sum of two polynomials in a finite field, we propose a new key management scheme for dynamic access hierarchies. The newly proposed scheme supports full dynamics at both the node level and the user level in a uniform yet efficient manner. Furthermore, the new scheme allows the access hierarchy to be a random structure and can be flexibly adapted to many other access models such as 'transfer down' and 'depth-limited transfer'. Copyright © 2007 John Wiley & Sons, Ltd. [source]
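The abstract above introduces the access-polynomial idea without giving the construction, so the sketch below shows one simple way a polynomial over a finite field can hide a key so that only holders of a secret token recover it by evaluation. It is purely an illustration under that assumption, not necessarily the paper's exact scheme.

# Illustrative access-polynomial construction over GF(p) (an assumption,
# not the paper's exact scheme).  Publish the coefficients of
#   Q(x) = key + r * (x - t_1) * ... * (x - t_m)   (mod p)
# for a random r; any user holding a secret token t_i gets Q(t_i) = key.
import secrets

P = (1 << 127) - 1                      # a Mersenne prime used as the field modulus

def poly_mul(a, b):
    """Multiply two polynomials given as ascending coefficient lists, mod P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def build_access_poly(key, tokens):
    blind = [1]
    for t in tokens:                    # product of (x - t_i)
        blind = poly_mul(blind, [(-t) % P, 1])
    r = secrets.randbelow(P - 1) + 1    # random non-zero blinding factor
    coeffs = [(r * c) % P for c in blind]
    coeffs[0] = (coeffs[0] + key) % P   # constant term hides the key
    return coeffs

def eval_poly(coeffs, x):
    acc = 0
    for c in reversed(coeffs):          # Horner evaluation mod P
        acc = (acc * x + c) % P
    return acc

# An authorized user with token t recovers the key: eval_poly(coeffs, t) == key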
Optimization of strong and weak coordinates
INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 12 2006. Marcel Swart
Abstract: We present a new scheme for the geometry optimization of equilibrium and transition state structures that can be used for both strong and weak coordinates. We use a screening function that depends on atom-pair distances to differentiate strong coordinates from weak coordinates. This differentiation significantly accelerates the optimization of these coordinates, and thus of the overall geometry. An adapted version of the delocalized coordinates setup is used to automatically generate a set of internal coordinates that is shown to perform well for the geometry optimization of systems with weak and strong coordinates. For the Baker test set of 30 molecules, we need only 173 geometry cycles with PW91/TZ2P calculations, which compares well with the best previous attempts reported in the literature. For the localization of transition state structures, we generate the initial Hessian matrix using appropriate force constants from a database. In this way, one avoids the explicit computation of the Hessian matrix. © 2006 Wiley Periodicals, Inc. Int J Quantum Chem, 2006 [source]

Improved Arrhythmia Detection in Implantable Loop Recorders
JOURNAL OF CARDIOVASCULAR ELECTROPHYSIOLOGY, Issue 9 2008. Michele Brignole, M.D.
Introduction: Implantable loop recorders (ILR) have an automatic arrhythmia detection feature that can be compromised by inappropriately detected episodes. This study evaluated a new ILR sensing and detection scheme for automatically detecting asystole, bradyarrhythmia, and tachyarrhythmia events, which is implemented in the next-generation device (Reveal DX/XT). Methods and Results: The new scheme employs an automatically adjusting R-wave sensing threshold, enhanced noise rejection, and algorithms to detect asystole, bradyarrhythmia, and tachyarrhythmia. Performance of the new algorithms was evaluated using 2,613 previously recorded, automatically detected Reveal Plus episodes from 533 patients. A total of 71.9% of episodes were inappropriately detected by the original ILR, and at least 88.6% of patients had one or more inappropriate episodes, with most inappropriate detections due to R-wave amplitude reductions, amplifier saturation, and T-wave oversensing. With the new scheme, inappropriate detections were reduced by 85.2% (P < 0.001), with a small reduction in the detection of appropriate episodes (1.7%, P < 0.001). The new scheme avoided inappropriate detections in 67.4% of patients who had them with the original scheme. Conclusions: The new sensing and detection scheme is expected to substantially reduce the occurrence of inappropriately detected episodes, relative to that of the original ILR. [source]

Improved tapered slot-line antennas by using grating loading
MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 3 2010. Peng Zhang
Abstract: A new scheme for improving the linearly tapered slot-line antenna (LTSA) is proposed in this article. By placing five metallic strips as a grating in the slot area of the LTSA, a 2 dB increase in gain over the frequency range of 6.0-9.5 GHz and a bandwidth of 5.3-9.6 GHz for VSWR < 2.0:1 are obtained. © 2010 Wiley Periodicals, Inc. Microwave Opt Technol Lett 52: 728-731, 2010; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.25002 [source]

Simple Relationship for Predicting Impact Sensitivity of Nitroaromatics, Nitramines, and Nitroaliphatics
PROPELLANTS, EXPLOSIVES, PYROTECHNICS, Issue 2 2010. Mohammad Hossein Keshavarz
Abstract: This paper describes the development of a simple model for predicting the impact sensitivity of nitroaromatics, benzofuroxans, nitroaromatics with α-CH, nitramines, nitroaliphatics, nitroaliphatics containing other functional groups, and nitrate energetic compounds using their molecular structures. The model is optimized using a set of 86 explosives for which different structural parameters exist. The model is applied to a test set of 120 explosives from a variety of the mentioned chemical families in order to confirm the reliability of the new method. Elemental composition and two specific structural parameters, which can increase or decrease impact sensitivity, are needed in this new scheme. The predicted impact sensitivities for both sets have a root mean square (rms) deviation from experiment of 23 cm, which shows good agreement with the measured values compared to the best available empirical correlations. [source]
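The quality figure quoted above, an rms deviation of 23 cm between predicted and measured impact sensitivities, is a simple statistic; the snippet below shows how such an rms deviation over a training or test set is computed. The input arrays are placeholders, and the paper's actual correlation and its coefficients are not reproduced here.

# Root-mean-square deviation between predicted and measured impact
# sensitivities (e.g. drop heights in cm); the statistic quoted above.
import math

def rms_deviation(predicted, measured):
    if len(predicted) != len(measured):
        raise ValueError("both sets must contain the same number of explosives")
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured))
                     / len(predicted))

# Usage: rms_deviation(h50_predicted_cm, h50_measured_cm)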
An improved PDF cloud scheme for climate simulations
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 651 2010. Akira Kuwano-Yoshida
Abstract: An efficient grid-scale cloud scheme for climate simulation is implemented in the atmospheric general circulation model for the Earth Simulator (AFES). The new cloud scheme uses statistical partial condensation based on joint-Gaussian probability distribution functions (PDFs) of the liquid water potential temperature and total water content, with standard deviations estimated by the moist Mellor-Yamada level-2 turbulence scheme. It also adopts improved closure parameters based on large-eddy simulations and a revised mixing length that varies with the stability and turbulent kinetic energy. These changes not only enable better representation of low-level boundary layer clouds, but also improve the atmospheric boundary layer structure. Sensitivity experiments for vertical resolution suggest that O(100-200 m) intervals are adequate to represent well-mixed boundary layers with the new scheme. The new scheme performs well at relatively low horizontal resolution (about 150 km), although inversion layers near the coast become more intense at higher horizontal resolution (about 50 km). Copyright © 2010 Royal Meteorological Society [source]

Model error and sequential data assimilation: A deterministic formulation
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 634 2008. A. Carrassi
Abstract: Data assimilation schemes are confronted with the presence of model errors arising from the imperfect description of atmospheric dynamics. These errors are usually modelled on the basis of simple assumptions such as bias, white noise, or a first-order Markov process. In the present work, a formulation of the sequential extended Kalman filter is proposed, based on recent findings on the universal deterministic behaviour of model errors, in marked contrast with previous approaches. This new scheme is applied in the context of a spatially distributed system proposed by Lorenz. First, it is found that, for short times, the estimation error is accurately approximated by an evolution law in which the variance of the model error (assumed to be a deterministic process) evolves according to a quadratic law, in agreement with the theory. Moreover, the correlation with the initial condition error appears to play a secondary role in the short-time dynamics of the estimation error covariance. Second, the deterministic description of the model error evolution, incorporated into the classical extended Kalman filter equations, reveals that substantial improvements in filter accuracy can be gained compared with the classical white-noise assumption. The universal short-time quadratic law for the evolution of the model error covariance matrix seems very promising for modelling estimation error dynamics in sequential data assimilation. Copyright © 2008 Royal Meteorological Society [source]
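To make the short-time quadratic law above concrete, here is a scalar Kalman-filter cycle in which the model-error contribution to the forecast error variance grows quadratically with lead time, in the spirit of the deterministic treatment described in the abstract, instead of the usual white-noise inflation. The scalar setting and the growth-rate parameter are assumptions for illustration; the paper works with a spatially distributed Lorenz model.

# Scalar Kalman filter cycle with a deterministic model-error term whose
# variance grows quadratically with the forecast lead time (illustrative).
def forecast_variance(P_analysis, M, alpha, dt):
    # Dynamics term plus deterministic model-error term ~ (alpha * dt)**2.
    return M * P_analysis * M + (alpha * dt) ** 2

def analysis_update(x_forecast, P_forecast, y_obs, H, R):
    K = P_forecast * H / (H * P_forecast * H + R)   # Kalman gain
    x_analysis = x_forecast + K * (y_obs - H * x_forecast)
    P_analysis = (1.0 - K * H) * P_forecast
    return x_analysis, P_analysis

# One assimilation cycle (scalar, placeholder inputs):
#   P_f = forecast_variance(P_a, M=model_jacobian, alpha=error_growth_rate, dt=lead_time)
#   x_a, P_a = analysis_update(x_f, P_f, y, H=1.0, R=obs_error_var)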
Semi-Lagrangian advection scheme with controlled damping: An alternative to nonlinear horizontal diffusion in a numerical weather prediction model
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 631 2008. Filip Váňa
Abstract: This paper proposes a nonlinear horizontal diffusion scheme for models using semi-Lagrangian formulations. The scheme is made flow-dependent and not entirely linked to the model levels. As an extension, the implementation of the scheme in the model Aladin is given. The damping abilities of interpolation are used for the diffusion filtering. The aim is to provide a horizontal diffusion scheme with stability and computational efficiency similar to those of the existing linear spectral diffusion scheme in Aladin. While preserving these qualities, the new scheme brings beneficial new skills to the model. The differences between the performances of the two diffusion schemes are examined and discussed. Finally, some interesting case studies simulated with both horizontal diffusion schemes are presented. Copyright © 2008 Royal Meteorological Society [source]

A convection scheme for data assimilation: Description and initial tests
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 606 2005. Philippe Lopez
Abstract: A new simplified parametrization of subgrid-scale convective processes has been developed and tested in the framework of the ECMWF Integrated Forecasting System for the purpose of variational data assimilation, singular vector calculations and adjoint sensitivity experiments. Its formulation is based on the full nonlinear convection scheme used in ECMWF forecasts, but a set of simplifications has been applied to substantially improve its linear behaviour. These include the specification of a single closure assumption based on convective available potential energy, the uncoupling of the equations for the convective mass flux and updraught characteristics, and a unified formulation of the entrainment and detrainment rates. Simplified representations of downdraughts and momentum transport are also included in the new scheme. Despite these simplifications, the forecasting ability of the new convective parametrization is shown to remain satisfactory even in seasonal integrations. A detailed study of its Jacobians and the validity of the linear hypothesis is presented. The new scheme is also tested in combination with the new simplified parametrization of large-scale clouds and precipitation recently developed at ECMWF. In contrast with the simplified convective parametrization currently used in ECMWF's operational 4D-Var, its tangent-linear and adjoint versions account for perturbations of all convective quantities, including convective mass flux, updraught characteristics and precipitation fluxes. Therefore the new scheme is expected to be beneficial when combined with radiative calculations that are directly affected by condensation and precipitation. Examples are presented of applications of the new moist physics in 1D-Var retrievals using microwave brightness temperature measurements and in adjoint sensitivity experiments. Copyright © 2005 Royal Meteorological Society. [source]
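The single closure assumption mentioned above is based on convective available potential energy (CAPE). As a minimal illustration of what that quantity is and how a CAPE-type closure acts, the sketch below integrates parcel buoyancy over a column and relaxes CAPE towards zero over an adjustment time-scale. The discretization and the relaxation time-scale are assumptions, not the ECMWF scheme's actual formulation.

# Minimal CAPE computation and CAPE-relaxation closure (illustrative only).
G = 9.81  # gravitational acceleration, m s^-2

def cape(t_parcel, t_env, dz):
    """CAPE = integral of g * (Tp - Te) / Te over levels where the parcel is buoyant."""
    return sum(G * (tp - te) / te * dz
               for tp, te in zip(t_parcel, t_env) if tp > te)

def cape_closure_tendency(current_cape, tau=3600.0):
    """Closure idea: convection consumes CAPE over an adjustment time-scale tau (s)."""
    return -current_cape / tau          # implied d(CAPE)/dt

# Usage: dCAPEdt = cape_closure_tendency(cape(Tp_profile, Te_profile, dz=250.0))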