Extensive Simulations (extensive + simulation)

[Charts (not reproduced here): Distribution by Scientific Domains; Distribution within Engineering]

Terms modified by Extensive Simulations

  • extensive simulation experiment
  • extensive simulation studies
  • extensive simulation study

Selected Abstracts


    Sliding-window neural state estimation in a power plant heater line

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 8 2001
    A. Alessandri
    Abstract The state estimation problem for a section of a real power plant is addressed by means of a recently proposed sliding-window neural state estimator. The complexity and the nonlinearity of the considered application prevent us from successfully using standard techniques such as Kalman filtering. The statistics of the distribution of the initial state and of the noises are assumed to be unknown, and the estimator is designed by minimizing a given generalized least-squares cost function. The following approximations are enforced: (i) the state estimator is a finite-memory one; (ii) the estimation functions are given fixed structures in which a certain number of parameters have to be optimized (multilayer feedforward neural networks are chosen from among various possible nonlinear approximators); (iii) the algorithms for optimizing the parameters (i.e., the network weights) rely on a stochastic approximation. Extensive simulation results on a complex model of a part of a real power plant are reported to compare the behaviour of the proposed estimator with the extended Kalman filter. Copyright © 2001 John Wiley & Sons, Ltd. [source]
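
    As an illustration of approximations (i)-(iii), the sketch below trains a small feedforward network to map a sliding window of measurements to a state estimate by plain stochastic gradient descent on the squared error. The toy scalar system, window length, and network size are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a sliding-window neural state estimator, assuming a toy
# scalar system x_{t+1} = 0.9 x_t + w_t with measurements y_t = x_t + v_t.
import numpy as np

rng = np.random.default_rng(0)
N, hidden, lr = 8, 16, 1e-2            # window length, hidden units, SGD step

# One-hidden-layer network: estimate x_t from the last N measurements.
W1 = rng.normal(0, 0.3, (hidden, N)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.3, hidden);      b2 = 0.0

def forward(y_win):
    h = np.tanh(W1 @ y_win + b1)
    return W2 @ h + b2, h

# Train by stochastic approximation (plain SGD on the squared estimation
# error), mirroring approximation (iii).
for epoch in range(200):
    x, ys = 0.0, []
    for t in range(200):
        ys.append(x + 0.1 * rng.normal())        # noisy measurement
        if len(ys) >= N:
            y_win = np.array(ys[-N:])
            xhat, h = forward(y_win)
            err = xhat - x                       # least-squares cost term
            gW2 = err * h; gb2 = err             # backprop, layer 2
            gh = err * W2 * (1 - h**2)           # backprop through tanh
            W1 -= lr * np.outer(gh, y_win); b1 -= lr * gh
            W2 -= lr * gW2;                 b2 -= lr * gb2
        x = 0.9 * x + 0.05 * rng.normal()        # state transition
```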


    Robust adaptive fuzzy semi-decentralized control for a class of large-scale nonlinear systems using input-output linearization concept

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 1 2010
    H. Yousef
    Abstract Stable direct and indirect adaptive fuzzy controllers based on the input-output linearization concept are presented for a class of interconnected nonlinear systems with unknown nonlinear subsystems and interconnections. The interconnected nonlinear systems are represented not only in the canonical forms as in Yousef et al. (Int. J. Robust Nonlinear Control 2006; 16:687-708) but also in the general forms. Hybrid adaptive fuzzy robust tracking control schemes that are based on a combination of H∞ tracking theory and fuzzy control design are developed. In the proposed control schemes, all the states and signals are bounded and an H∞ tracking control performance is guaranteed without imposing any constraints or assumptions about the interconnections. Extensive simulation on the tracking of a two-link rigid robot manipulator and a numerical example verify the effectiveness of the proposed algorithms. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    A New Navigation Method for an Automatic Guided Vehicle

    JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 3 2004
    Chen Wuwei
    This paper presents a new navigation method for an automatic guided vehicle (AGV). The method utilizes a new navigation and control scheme based on searching points on an arc. Safety measure indices, generated from the output of a fuzzy neural network, define the actions the AGV is to take in the presence of obstacles. The proposed algorithm integrates several functions required for automatic guided vehicle navigation and tracking control, and it exhibits satisfactory performance when maneuvering in complex environments. An automatic guided vehicle with this navigation control system can not only quickly process environmental information, but also efficiently avoid dynamic or static obstacles and reach targets safely and reliably. Extensive simulation and experimental results demonstrate the effectiveness and correct behavior of this scheme. © 2004 Wiley Periodicals, Inc. [source]


    Multi-Period Planning of Survivable WDM Networks

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 1 2000
    Mario Pickavet
    This paper presents a new heuristic algorithm for long-term planning of survivable WDM networks. A multi-period model is formulated that combines network topology design and capacity expansion. The ability to determine network expansion schedules of this type is becoming increasingly important to the telecommunications industry and to its customers. The solution technique consists of a Genetic Algorithm that generates several network alternatives for each time period simultaneously, combined with shortest-path techniques that deduce from these alternatives a least-cost network expansion plan over all time periods. The multi-period planning approach is illustrated on a realistic network example. Extensive simulations on a wide range of problem instances are carried out to assess the cost savings that can be expected from choosing a multi-period planning approach instead of an iterative network expansion design method. [source]


    Tightest constraint first: An efficient delay sensitive multicast routing algorithm

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 7 2005
    Gang Feng
    Abstract As a key issue in multicast routing with quality of service (QoS) support, the constrained minimum Steiner tree (CMST) problem has been a research focus for more than a decade, and tens of heuristics have been developed to solve this NP-complete problem. Among all the previously proposed algorithms, the bounded shortest path algorithm (BSMA) (IEEE INFOCOM'95 1995; 1:377-385) has been proved capable of producing a multicast tree that has on average the lowest cost. However, this excellent cost performance is accompanied by an extremely high time complexity. Recently, Feng et al. presented an alternative implementation of BSMA, which makes use of the latest research results on the delay-constrained least-cost (DCLC) routing problem. Simulations indicate that, in comparison with the original implementation, the alternative implementation has a much lower time complexity with virtually identical cost performance, and it also runs much faster than many renowned heuristics such as KPP (IEEE/ACM Trans. Networking 1993; 1(3):286-292) and CAO (The design and evaluation of routing algorithms for real-time channels. Technical Report ICSI TR-94-024, International Computer Science Institute, University of California at Berkeley, June 1994). In this paper, we propose a brand-new heuristic, TCF, based on an idea called 'tightest constraint first.' TCF runs a DCLC heuristic only once for each destination and therefore has a provably low time complexity. We further propose an iterative heuristic, ITCF, which uses TCF to obtain an initial tree and then gradually refines it. Extensive simulations demonstrate that, in the average sense, TCF can achieve a cost performance comparable to or better than that of BSMA, the cost performance of ITCF is even better than that of TCF, TCF runs approximately twice as fast as ITCF, and ITCF runs 2-4 times as fast as the best implementation of BSMA. Copyright © 2005 John Wiley & Sons, Ltd. [source]
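
    The following sketch conveys one plausible reading of the 'tightest constraint first' idea: destinations are served in order of increasing delay bound, and edges already in the tree are discounted so that later paths tend to share them. The DCLC step here is a brute-force cost-ordered path scan via networkx, standing in for whatever DCLC heuristic is actually used; it is illustrative, not the paper's exact procedure.

```python
import networkx as nx

def tcf_tree(G, src, bounds):
    """G: undirected graph with 'cost' and 'delay' edge attributes.
    bounds: {destination: delay bound}.  Returns a set of tree edges."""
    tree = set()
    for dst in sorted(bounds, key=bounds.get):            # tightest first
        for u, v in G.edges():                            # discount tree edges
            G[u][v]['w'] = 0.0 if frozenset((u, v)) in tree else G[u][v]['cost']
        # Brute-force DCLC: scan paths in increasing cost until one is feasible.
        for path in nx.shortest_simple_paths(G, src, dst, weight='w'):
            delay = sum(G[u][v]['delay'] for u, v in zip(path, path[1:]))
            if delay <= bounds[dst]:                      # first feasible path
                tree |= {frozenset(e) for e in zip(path, path[1:])}
                break
    return tree
```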


    Parameter Estimation and Goodness-of-Fit in Log Binomial Regression

    BIOMETRICAL JOURNAL, Issue 1 2006
    L. Blizzard
    Abstract An estimate of the risk, adjusted for confounders, can be obtained from a fitted logistic regression model, but it substantially over-estimates the risk when the outcome is not rare. The log binomial model (binomial errors with a log link) is increasingly being used for this purpose. However, this model's performance, goodness-of-fit tests, and case-wise diagnostics have not been studied. Extensive simulations are used to compare the performance of the log binomial model, a logistic-regression-based method proposed by Schouten et al. (1993), and a Poisson regression approach proposed by Zou (2004) and Carter, Lipsitz, and Tilley (2005). Log binomial regression resulted in "failure" rates (non-convergence, out-of-bounds predicted probabilities) as high as 59%. Estimates by the method of Schouten et al. (1993) produced fitted log binomial probabilities greater than unity in up to 19% of samples to which a log binomial model had been successfully fit, and in up to 78% of samples when the log binomial model fit failed. Similar percentages were observed for the Poisson regression approach. Coefficient and standard error estimates from the three models were similar. Rejection rates of goodness-of-fit tests for log binomial fit were around 5%. Power of the goodness-of-fit tests was modest when an incorrect logistic regression model was fit. Examples demonstrate the use of the methods. Uncritical use of the log binomial regression model is not recommended. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
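
    For readers who want to try the compared approaches, the hedged sketch below fits a log binomial model and the robust-error Poisson alternative with statsmodels on synthetic data; the try/except reflects the non-convergence failures the abstract reports. (Older statsmodels versions spell the link class links.log rather than links.Log.)

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = sm.add_constant(rng.normal(size=(500, 1)))
p = np.clip(np.exp(-1.2 + 0.4 * X[:, 1]), 0, 0.99)   # true risk, log scale
y = rng.binomial(1, p)

# 1. Log binomial: binomial errors with a log link; may fail to converge.
try:
    logbin = sm.GLM(y, X, family=sm.families.Binomial(
        link=sm.families.links.Log())).fit()
    print("log binomial RR per unit:", np.exp(logbin.params[1]))
except Exception as e:                 # non-convergence / out-of-bounds fit
    print("log binomial fit failed:", e)

# 2. Poisson regression with robust sandwich errors (the Zou 2004 approach).
pois = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print("Poisson RR per unit:", np.exp(pois.params[1]))
```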


    Cox Regression in Nested Case-Control Studies with Auxiliary Covariates

    BIOMETRICS, Issue 2 2010
    Mengling Liu
    Summary The nested case-control (NCC) design is a popular sampling method in large epidemiological studies because of its cost-effectiveness in investigating the temporal relationship of diseases with environmental exposures or biological precursors. Thomas' maximum partial likelihood estimator is commonly used to estimate the regression parameters in Cox's model for NCC data. In this article, we consider a situation in which failure/censoring information and some crude covariates are available for the entire cohort in addition to the NCC data, and propose an improved estimator that is asymptotically more efficient than Thomas' estimator. We adopt a projection approach that, heretofore, has only been employed in situations of random validation sampling, and show that it can be well adapted to NCC designs, where the sampling scheme is a dynamic process and is not independent for controls. Under certain conditions, consistency and asymptotic normality of the proposed estimator are established, and a consistent variance estimator is also developed. Furthermore, a simplified approximate estimator is proposed when the disease is rare. Extensive simulations are conducted to evaluate the finite sample performance of our proposed estimators and to compare their efficiency with Thomas' estimator and other competing estimators. Moreover, sensitivity analyses are conducted to demonstrate the behavior of the proposed estimator when model assumptions are violated, and we find that the biases are reasonably small in realistic situations. We further demonstrate the proposed method with data from studies on Wilms' tumor. [source]


    An analytic model for the behavior of arbitrary peer-to-peer networks

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 6 2004
    Rüdiger Schollmeier
    We present an analytic approach to better understand and design peer-to-peer (P2P) networks, which are becoming increasingly important as they contribute an amount of traffic that often dominates the total traffic in the Internet even today. We start from a graph-theoretical approach for infinite random networks and extend it to include the effects of a finite network. Our approach is valid for an arbitrary degree distribution in the network and thus avoids the need for extensive simulation, which would otherwise be required to evaluate the performance of a specific P2P protocol. Our analytic approach can thus be used for improved tailoring of P2P communication as well as to minimize the often excessive signaling traffic. Copyright © 2004 AEI [source]


    A new distributed approximation algorithm for constructing minimum connected dominating set in wireless ad hoc networks

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 8 2005
    Bo Gao
    Abstract In recent years, constructing a virtual backbone from the nodes in a connected dominating set (CDS) has been proposed to improve the performance of ad hoc wireless networks. In general, a dominating set is a set of vertices such that every vertex in the graph is either in the set or adjacent to a vertex in the set. A CDS is a dominating set that also induces a connected sub-graph. However, finding the minimum connected dominating set (MCDS) is a well-known NP-hard problem in graph theory. Approximation algorithms for the MCDS have been proposed in the literature. Most of these algorithms suffer from a poor approximation ratio and from high time and message complexity. In this paper, we present a new distributed approximation algorithm that constructs an MCDS for wireless ad hoc networks based on a maximal independent set (MIS). Our algorithm, which is fully localized, has a constant approximation ratio and O(n) time and O(n) message complexity. In this algorithm, each node only requires knowledge of its one-hop neighbours, and there is only one shortest path connecting two dominators that are at most three hops away. We not only give a theoretical performance analysis for our algorithm, but also conduct extensive simulation to compare our algorithm with other algorithms in the literature. Simulation results and theoretical analysis show that our algorithm has better efficiency and performance than the others. Copyright © 2005 John Wiley & Sons, Ltd. [source]
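
    The sketch below illustrates the MIS-plus-connectors construction in centralized form: an MIS dominates the graph, and since any two 'adjacent' MIS nodes are at most three hops apart, short connector paths yield a connected dominating set. The paper's actual contribution, a distributed localized version with O(n) complexity, is not reproduced here.

```python
import networkx as nx

def cds_from_mis(G):
    """Return a connected dominating set of a connected graph G."""
    mis = set(nx.maximal_independent_set(G))   # dominators: an MIS dominates G
    cds = set(mis)
    # Connect dominators: MIS nodes within three hops of each other get a
    # shortest connector path, adding at most two connector nodes per pair.
    dominators = list(mis)
    for i, u in enumerate(dominators):
        for v in dominators[i + 1:]:
            path = nx.shortest_path(G, u, v)
            if len(path) <= 4:                 # at most three hops apart
                cds.update(path)
    return cds

G = nx.random_geometric_graph(50, 0.3, seed=7)
if nx.is_connected(G):
    print(len(cds_from_mis(G)), "of", G.number_of_nodes(), "nodes in the CDS")
```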


    Hybrid prioritized multiple access protocols for bank LANs

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 7 2004
    M. Sklira
    Abstract Bank communication networks support four classes of traffic: Alarm, BSC, SNA, and IP traffic, with each class of traffic having different priority requirements. In this paper, a framework for the design of multiple access protocols capable of handling the above priority classes is introduced. Furthermore, a hybrid multiple access protocol that has been designed according to the proposed framework is presented and evaluated by means of extensive simulation results. The proposed protocol is applicable to a broad range of prioritized LANs. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Controlling the False Discovery Rate with Constraints: The Newman-Keuls Test Revisited

    BIOMETRICAL JOURNAL, Issue 1 2007
    Juliet Popper Shaffer
    Abstract The Newman-Keuls (NK) procedure for testing all pairwise comparisons among a set of treatment means, introduced by Newman (1939) and in a slightly different form by Keuls (1952), was proposed as a reasonable way to alleviate the inflation of error rates when a large number of means are compared. It was proposed before the concepts of different types of multiple error rates were introduced by Tukey (1952a, b; 1953). Although it was popular in the 1950s and 1960s, once control of the familywise error rate (FWER) was generally accepted as an appropriate criterion in multiple testing, and it was realized that the NK procedure does not control the FWER at the nominal level at which it is performed, the procedure gradually fell out of favor. Recently, a more liberal criterion, control of the false discovery rate (FDR), has been proposed as more appropriate than FWER control in some situations. This paper notes that the NK procedure and a nonparametric extension control the FWER within any set of homogeneous treatments. It proves that the extended procedure controls the FDR when there are well-separated clusters of homogeneous means and between-cluster test statistics are independent, and extensive simulation provides strong evidence that the original procedure controls the FDR under the same conditions and some dependent conditions when the clusters are not well separated. Thus, the test has two desirable error-controlling properties, providing a compromise between FDR control with no subgroup FWER control and global FWER control. Yekutieli (2002) developed an FDR-controlling procedure for testing all pairwise differences among means, without any FWER-controlling criteria when there is more than one cluster. The empirical example in Yekutieli's paper is used to compare the Benjamini-Hochberg (1995) method with apparent FDR control in this context, Yekutieli's proposed method with proven FDR control, the Newman-Keuls method that controls the FWER within equal clusters with apparent FDR control, and several methods that control the FWER globally. The Newman-Keuls procedure is shown to be intermediate in number of rejections between the FWER-controlling methods and the FDR-controlling methods in this example, although it is not always more conservative than the other FDR-controlling methods. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
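
    A compact sketch of the Newman-Keuls step-down logic, assuming equal group sizes and a pooled within-group variance; scipy's studentized_range supplies the critical values. Once a stretch of ordered means fails its range test, it is declared homogeneous and none of its subsets are tested further, which is the feature behind the within-cluster FWER control noted above.

```python
import numpy as np
from scipy.stats import studentized_range

def newman_keuls(means, mse, n, df, alpha=0.05):
    """Return pairs (i, j) of group indices declared different.
    means: group means; mse: pooled within-group variance with df degrees
    of freedom; n: common group size."""
    order = np.argsort(means)
    m = np.asarray(means)[order]
    rejected, homogeneous = [], []

    def test(lo, hi):                        # test the stretch m[lo..hi]
        if hi <= lo or any(a <= lo and hi <= b for a, b in homogeneous):
            return                           # inside an accepted stretch
        p = hi - lo + 1                      # stretch size
        q = (m[hi] - m[lo]) / np.sqrt(mse / n)
        if q > studentized_range.ppf(1 - alpha, p, df):
            rejected.append((order[lo], order[hi]))
            test(lo, hi - 1); test(lo + 1, hi)   # step down to sub-stretches
        else:
            homogeneous.append((lo, hi))     # accepted: no further testing

    test(0, len(m) - 1)
    return rejected

# Illustrative call: 4 groups of size 8, pooled MSE 0.5 on 28 df.
print(newman_keuls([10.1, 10.3, 12.0, 12.2], mse=0.5, n=8, df=28))
```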


    Analysis of Twin Data Using SAS

    BIOMETRICS, Issue 2 2009
    Rui Feng
    Summary Twin studies are essential for assessing disease inheritance. Data generated from twin studies are traditionally analyzed using specialized computational programs. For many researchers, especially those who are new to twin studies, understanding and using those specialized computational programs can be a daunting task. Given that SAS (Statistical Analysis Software) is the most popular software for statistical analysis, we suggest that the use of SAS procedures for twin data may be a helpful alternative and demonstrate that we can obtain similar results from SAS to those produced by specialized computational programs. This numerical validation is practically useful, because a natural concern with general statistical software is whether it can deal with data that are generated from special study designs such as twin studies and if it can test a particular hypothesis. We concluded through our extensive simulation that SAS procedures can be used easily as a very convenient alternative to specialized programs for twin data analysis. [source]


    The Neutralizer: a self-configurable failure detector for minimizing distributed storage maintenance cost

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2 2009
    Zhi Yang
    Abstract To achieve high data availability or reliability in an efficient manner, distributed storage systems must detect whether an observed node failure is permanent or transient and, if necessary, generate replicas to restore the desired level of replication. Given the unpredictability of network dynamics, however, distinguishing permanent from transient failures is extremely difficult. Though timeout-based detectors can be used to avoid mistaking transient failures for permanent ones, it is unknown how the timeout values should be selected to achieve a better tradeoff between detection latency and accuracy. In this paper, we address this fundamental tradeoff from several perspectives. First, we explore the impact of different timeout values on maintenance cost by examining the probability of their false positives and false negatives. Second, we propose a self-configurable failure detector called the Neutralizer, based on the idea of counteracting false positives with false negatives. The Neutralizer could enable the system to maintain a desired replication level on average with the least amount of bandwidth. We conduct extensive simulations using real trace data from a widely deployed peer-to-peer system and synthetic traces based on PlanetLab and Microsoft PCs, showing a significant reduction in aggregate bandwidth usage after applying the Neutralizer (especially in an environment with low average node availability). Overall, we demonstrate that the Neutralizer closely approximates the performance of a perfect 'oracle' detector in many cases. Copyright © 2008 John Wiley & Sons, Ltd. [source]
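
    The toy computation below illustrates the timeout tradeoff the abstract describes: given an empirical distribution of transient downtimes, longer timeouts lower the false-positive rate at the price of slower detection of genuinely permanent failures. The lognormal downtimes are a synthetic assumption, and this is a stand-in illustration, not the Neutralizer's self-configuration algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic transient downtime durations, in seconds.
transient_downtimes = rng.lognormal(mean=3.0, sigma=1.0, size=10_000)

def false_positive_rate(timeout):
    # A transient outage longer than the timeout is misclassified as a
    # permanent failure, triggering unnecessary replica regeneration.
    return np.mean(transient_downtimes > timeout)

for timeout in (30, 120, 600, 3600):
    print(f"timeout {timeout:>5d}s -> FP rate {false_positive_rate(timeout):.3f}")
```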


    Threshold-based admission control for a multimedia Grid: analysis and performance evaluation

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2006
    Yang Zhang
    Abstract In a Grid-based services system facing a large number of requests with differing service types and profit significance, there is always a trade-off between the system's profits and the Quality of Service (QoS). In such systems, admission control plays an important role: the system has to employ a proper strategy to make admission control decisions and reserve resources for incoming requests, so as to achieve greater profits without violating the QoS of the requests already admitted. In this paper, we introduce three essential admission control strategies with thresholds on resource reservation and a newly proposed strategy with a layered threshold. Through comprehensive theoretical analyses and extensive simulations, we demonstrate that the strategy with the layered threshold is more efficient and flexible than the existing strategies for Grid-based multimedia services systems. Copyright © 2006 John Wiley & Sons, Ltd. [source]
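
    A minimal sketch of the threshold idea: low-priority requests are admitted only while utilization stays below their class threshold, which reserves headroom for high-priority requests. Class names, thresholds, and capacity are illustrative assumptions, not values from the paper.

```python
class ThresholdAdmission:
    def __init__(self, capacity, thresholds):
        self.capacity = capacity
        self.used = 0
        self.thresholds = thresholds        # e.g. {'gold': 1.0, 'bronze': 0.7}

    def admit(self, cls, demand):
        # Admit only if post-admission utilization respects the class
        # threshold; 'gold' may use the full capacity, 'bronze' may not.
        if self.used + demand <= self.thresholds[cls] * self.capacity:
            self.used += demand
            return True
        return False

    def release(self, demand):
        self.used -= demand

ac = ThresholdAdmission(100, {'gold': 1.0, 'bronze': 0.7})
print(ac.admit('bronze', 60), ac.admit('bronze', 20), ac.admit('gold', 30))
# -> True False True : bronze is capped at 70, gold can use the reserve.
```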


    Time-distributed effect of exposure and infectious outbreaks

    ENVIRONMETRICS, Issue 3 2009
    Elena N. Naumova
    Abstract Extreme weather affects the timing and intensity of infectious outbreaks and the resurgence and redistribution of infections, and it causes disturbances in human-environment interactions. Environmental stressors with high thermoregulatory demands require susceptible populations to undergo physiological adaptive processes, potentially compromising immune function and increasing susceptibility to infection. In assessing associations between environmental exposures and infectious diseases, failure to account for a latent period between the time of exposure and the time of disease manifestation may lead to severe underestimation of the effects. In a population, the health effects of an episode of exposure are distributed over a range of time lags. Considering such time-distributed lags is a challenging task, given that the length of a latent period varies from hours to months and depends on the type of pathogen, individual susceptibility to the pathogen, dose of exposure, route of transmission, and many other factors. The two main objectives of this communication are to introduce an approach to modeling the time-distributed effect of exposure on infection cases and to demonstrate this approach in an analysis of the association between high ambient temperature and the daily incidence of enterically transmitted infections. The study is supplemented with extensive simulations to examine model sensitivity to response magnitude, exposure frequency, and extent of the latent period. Copyright © 2008 John Wiley & Sons, Ltd. [source]
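
    One standard way to realize such a time-distributed effect model is a distributed-lag Poisson regression, sketched below on synthetic data: daily counts are regressed on temperature at lags 0-14, so the fitted lag coefficients trace the effect across the latent period. The lag window and data-generating values are assumptions for illustration, not the paper's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
days = 1000
temp = 20 + 8 * np.sin(2 * np.pi * np.arange(days) / 365) + rng.normal(0, 2, days)

# Synthetic truth: exposure acts mostly at lags 3-10 days.
lag_effect = np.zeros(15); lag_effect[3:11] = 0.01
lam = np.exp(1.0 + np.convolve(temp, lag_effect)[:days])
cases = rng.poisson(lam)

# Build lagged covariates and fit a Poisson GLM on them.
L = 14
lagged = pd.DataFrame({f"lag{l}": pd.Series(temp).shift(l) for l in range(L + 1)})
data = lagged.assign(cases=cases).dropna()
fit = sm.GLM(data["cases"], sm.add_constant(data.drop(columns="cases")),
             family=sm.families.Poisson()).fit()
print(fit.params.filter(like="lag"))   # recovered lag-specific effects
```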


    Bivariate association analyses for the mixture of continuous and binary traits with the use of extended generalized estimating equations

    GENETIC EPIDEMIOLOGY, Issue 3 2009
    Jianfeng Liu
    Abstract The genome-wide association (GWA) study is becoming a powerful tool for deciphering the genetic basis of complex human diseases/traits. Currently, univariate analysis is the most commonly used method to identify genes associated with a certain disease/phenotype under study. A major limitation of univariate analysis is that it may not make use of the information in multiple correlated phenotypes, which are usually measured and collected in practical studies. Multivariate analysis has proven to be a powerful approach in linkage studies of complex diseases/traits, but it has received little attention in GWA. In this study, we aim to develop a bivariate analytical method for GWA studies that can be used for the complex situation in which a continuous trait and a binary trait are measured. Based on the modified extended generalized estimating equation (EGEE) method we propose herein, we assessed the performance of our bivariate analyses through extensive simulations as well as real data analyses. To develop an EGEE approach for bivariate genetic analyses, we combined two different generalized linear models corresponding to the phenotypic variables using a seemingly unrelated regression model. The simulation results demonstrate that our EGEE-based bivariate analytical method outperforms univariate analyses in statistical power under a variety of simulation scenarios. Notably, EGEE-based bivariate analyses have consistent advantages over univariate analyses whether or not there is a phenotypic correlation between the two traits. Our study has practical importance, as one can always use multivariate analyses as a screening tool when multiple phenotypes are available, without extra costs in statistical power or false-positive rate. Analyses of empirical GWA data further affirm the advantages of our bivariate analytical method. Genet. Epidemiol. 2009. © 2008 Wiley-Liss, Inc. [source]


    Modelling of source-coupled logic gates

    INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 4 2002
    M. Alioto
    Abstract In this paper, the modelling of CMOS SCL gates is addressed. Topologies both with and without an output buffer are treated, and the noise margin as well as the propagation delay are analytically derived using standard BSIM3v3 model parameters. The propagation delay model of a single SCL gate is based on a proper linearization of the circuit and the assumption of single-pole behaviour. To generalize the results to cascaded gates, the effect of the input rise time and the loading effect of an SCL gate are discussed. The expressions obtained are simple enough to be used for pencil-and-paper evaluations and are helpful from the early design phases, as they relate SCL gate performance to design and process parameters, allowing the designer to gain an intuitive understanding of the dependence of performance on design parameters and technology. The model has been validated by comparison with extensive simulations using a 0.35-µm CMOS process. The model agrees well with the simulated results, since in realistic cases the difference is less than 20% for both noise margin and delay. Therefore, the proposed model can be profitably used for pencil-and-paper evaluations and for computer-based timing analysis of complex SCL circuits. Copyright © 2002 John Wiley & Sons, Ltd. [source]
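
    As background for the single-pole assumption, the generic first-order step response gives the familiar 50% propagation delay relation below; the paper's contribution is expressing the equivalent resistance and load capacitance in terms of BSIM3v3 design and process parameters, which is not reproduced here.

```latex
% First-order (single-pole) step response with time constant \tau = R_{eq} C_L;
% the output crosses 50% of its swing at t_{PD} = \tau \ln 2.
v_o(t) = V_{\mathrm{swing}}\left(1 - e^{-t/\tau}\right), \qquad
t_{PD} = \tau \ln 2 \approx 0.69\, R_{eq} C_L
```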


    Architecture design, performance analysis and VLSI implementation of a reconfigurable shared buffer for high-speed switch/router

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 2 2009
    Ling Wu
    Abstract Modern switches and routers require massive storage space to buffer packets. This becomes more significant as link speeds increase and switch sizes grow. From the memory technology perspective, while DRAM is a good choice to meet the capacity requirement, its access time causes problems for high-speed applications. On the other hand, though SRAM is faster, it is more costly and does not have high storage density. The SRAM/DRAM hybrid architecture provides a good solution to meet both capacity and speed requirements. From the switch design and network traffic perspective, to minimize packet loss, the buffering space allocated to each switch port is normally based on the worst-case scenario, which is usually huge. However, under normal traffic load conditions, buffer utilization for such a configuration is very low. Therefore, we propose a reconfigurable buffer-sharing scheme that can dynamically adjust the buffering space for each port according to traffic patterns and buffer saturation status. The target is to achieve high performance and improve buffer utilization while not posing much constraint on buffer speed. In this paper, we study the performance of the proposed buffer-sharing scheme with both a numerical model and extensive simulations under uniform and non-uniform traffic conditions. We also present the architecture design and VLSI implementation of the proposed reconfigurable shared buffer using 0.18 µm CMOS technology. Our results show that the proposed architecture can always achieve high performance and provides much flexibility for high-speed packet switches to adapt to various traffic patterns. Furthermore, it can be easily integrated into the port controllers of modern switches and routers. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Efficient Delaunay-based localized routing for wireless sensor networks

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 7 2007
    Yu Wang
    Abstract Consider a wireless sensor network consisting of n wireless sensors randomly distributed in a two-dimensional plane. In this paper, we show that, with high probability, we can locally find a path for any pair of sensors such that the length of the path is no more than a constant factor times the minimum. Assuming each sensor knows its position, our new routing method decides where to forward the message purely based on the position of the current node, its neighbours, and the positions of the source and the target. Our method is based on a novel structure called the localized Delaunay triangulation and a geometric routing method that guarantees that the distance travelled by the packets is no more than a small constant factor times the minimum when the Delaunay triangulation of the sensor nodes is known. Our experiments show that the delivery rates of existing localized routing protocols increase when the localized Delaunay triangulation is used instead of several previously proposed topologies, and that our localized routing protocol based on the Delaunay triangulation works well in practice. We also conducted extensive simulations of another localized routing protocol, face routing. The path found by this protocol is also reasonably good compared with the previous one, although it cannot theoretically guarantee a constant approximation of the length of the path travelled. Copyright © 2006 John Wiley & Sons, Ltd. [source]
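
    The sketch below shows greedy geographic forwarding on a (global) Delaunay triangulation, where each node hands the packet to the neighbour closest to the target; on a Delaunay triangulation greedy forwarding always makes progress, which is the intuition the routing method builds on. The paper's structure is a localized Delaunay variant built from one-hop information, which this toy code does not construct.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(4)
pts = rng.random((100, 2))               # sensor positions in the unit square
tri = Delaunay(pts)
indptr, indices = tri.vertex_neighbor_vertices   # CSR-style neighbour lists

def neighbors(i):
    return indices[indptr[i]:indptr[i + 1]]

def greedy_route(src, dst):
    path, cur = [src], src
    while cur != dst:
        # Forward to the neighbour that minimizes distance to the target;
        # on a Delaunay triangulation this always makes progress.
        cur = min(neighbors(cur), key=lambda j: np.linalg.norm(pts[j] - pts[dst]))
        path.append(cur)
    return path

print(greedy_route(0, 99))
```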


    Wireless video streaming with TCP and simultaneous MAC packet transmission (SMPT)

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 5 2004
    Frank H. P. Fitzek
    Abstract Video streaming is expected to account for a large portion of the traffic in future networks, including wireless networks. It is widely accepted that the user datagram protocol (UDP) is the preferred transport protocol for video streaming and that the transmission control protocol (TCP) is unsuitable for streaming. The widespread use of UDP, however, has a number of drawbacks, such as unfairness and possible congestion collapse, which are avoided by TCP. In this paper we investigate the use of TCP as the transport layer protocol for streaming video in a multi-code CDMA cellular wireless system. Our approach is to stabilize the TCP throughput over the wireless links by employing a recently developed simultaneous MAC packet transmission (SMPT) approach at the link layer. We study the capacity, i.e. the number of customers per cell, and the quality of service for streaming video in the uplink direction. Our extensive simulations indicate that streaming over TCP in conjunction with SMPT gives good performance for video encoded in a closed loop, i.e. with rate control. We have also found that TCP is unsuitable (even in conjunction with SMPT) for streaming the more variable open-loop encoded video. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    WTCP: an efficient mechanism for improving wireless access to TCP services

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 1 2003
    Karunaharan Ratnam
    Abstract The transmission control protocol (TCP) was designed mainly assuming a relatively reliable wireline network. It is known to perform poorly in the presence of wireless links because of its basic assumption that any loss of a data segment is due to congestion, upon which it invokes congestion control measures. However, on wireless access links, a large number of segment losses occur because of wireless link errors or host mobility. For this reason, many proposals have recently appeared to improve TCP performance in such environments. They usually rely on the wireless access points (base stations) to locally retransmit the data in order to hide wireless losses from TCP. In this paper, we present Wireless-TCP (WTCP), a new mechanism for improving wireless access to TCP services. We use extensive simulations to evaluate TCP performance in the presence of congestion and wireless losses when the base station employs WTCP and the well-known Snoop proposal (A comparison of mechanisms for improving TCP performance in wireless networks. In ACM SIGCOMM Symposium on Communication, Architectures and Protocols, August 1996). Our results show that WTCP significantly improves the throughput of TCP connections due to its unique feature of hiding the time spent by the base station to locally recover from wireless link errors, so that TCP's round-trip time estimation at the source is not affected. This proves to be critical, since otherwise the ability of the source to effectively detect congestion in the fixed wireline network is hindered. Copyright © 2003 John Wiley & Sons, Ltd. [source]


    Simulation analyses of weighted fair bandwidth-on-demand (WFBoD) process for broadband multimedia geostationary satellite systems

    INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 4 2005
    Güray Açar
    Abstract Advanced resource management schemes are required in broadband multimedia satellite networks to provide efficient and fair resource allocation while delivering guaranteed quality of service (QoS) to a potentially very large number of users. Such resource management schemes must provide well-defined service segregation to the different traffic flows of the satellite network, and they must be integrated with a connection admission control (CAC) process, at least for the flows requiring QoS guarantees. Weighted fair bandwidth-on-demand (WFBoD) is a resource management process for broadband multimedia geostationary (GEO) satellite systems that provides fair and efficient resource allocation coupled with a well-defined MAC-level QoS framework (compatible with the ATM and IP QoS frameworks) and multi-level service segregation for a large number of users with diverse characteristics. WFBoD is also integrated with the CAC process. In this paper, we analyse the WFBoD process in a bent-pipe satellite network via extensive simulations. Our results show that WFBoD can be used to provide guaranteed QoS to both non-real-time and real-time variable bit rate (VBR) flows. Our results also show how to choose the main parameters of the WFBoD process depending on the system parameters and on the traffic characteristics of the flows. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    The performance of QoS-aware IP multicast routing protocols

    NETWORKS: AN INTERNATIONAL JOURNAL, Issue 2 2003
    Chih-Jen Tseng
    Abstract Research in the area of QoS-aware dynamic multicast routing protocols has been very active in recent years. Protocols based on dynamic Steiner tree strategies, such as YAM and QoSMIC, have consistently been shown to outperform those based on shortest-path heuristics, such as PIM and DVMRP. However, these protocols all suffer from poor scalability for one or more of the following reasons: high control overhead, insufficient robustness owing to the adoption of a centralized group manager, and excessively long join latency. In addition, these protocols perform well only when group members are densely populated or only when they are sparsely populated, but, unfortunately, not both. In this paper, we propose a protocol, named DSDMR, which can adapt its strategy based on sensed group member densities. Underlying DSDMR is an adaptive two-direction join mechanism that tries to find good attaching points for new group members either from the source or from the new joining member, depending on member densities. We evaluate our scheme using extensive simulations and find that DSDMR can build multicast trees with costs close to the best greedy strategy, with very low control overhead and very short join latency, across a wide member-density spectrum. Furthermore, its success ratio is only slightly lower than that of the best greedy strategy in finding feasible routes subject to both bandwidth and end-to-end delay constraints. © 2003 Wiley Periodicals, Inc. [source]


    Estimation methods for time-dependent AUC models with survival data

    THE CANADIAN JOURNAL OF STATISTICS, Issue 1 2010
    Hung Hung
    Abstract The performance of clinical tests for disease screening is often evaluated using the area under the receiver-operating characteristic (ROC) curve (AUC). Recent developments have extended the traditional setting to the AUC with binary time-varying failure status. Without considering covariates, our first theme is to propose a simple and easily computed nonparametric estimator for the time-dependent AUC. Moreover, we use generalized linear models with time-varying coefficients to characterize the time-dependent AUC as a function of covariate values. The corresponding estimation procedures are proposed to estimate the parameter functions of interest. The derived limiting Gaussian processes and the estimated asymptotic variances enable us to construct the approximated confidence regions for the AUCs. The finite sample properties of our proposed estimators and inference procedures are examined through extensive simulations. An analysis of the AIDS Clinical Trials Group (ACTG) 175 data is further presented to show the applicability of the proposed methods. The Canadian Journal of Statistics 38:8-26; 2010 © 2009 Statistical Society of Canada [source]
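
    Ignoring censoring for clarity (the paper's estimator does handle it), the simple nonparametric idea is sketched below: the time-dependent AUC at t is the concordance between marker values of subjects who failed by t and those still at risk at t. The marker and failure-time model are synthetic assumptions for illustration.

```python
import numpy as np

def auc_t(T, M, t):
    """Time-dependent AUC at time t from failure times T and marker M."""
    cases, controls = M[T <= t], M[T > t]
    if len(cases) == 0 or len(controls) == 0:
        return np.nan
    # Concordance over all case/control pairs, with 0.5 credit for ties.
    comp = cases[:, None] - controls[None, :]
    return (comp > 0).mean() + 0.5 * (comp == 0).mean()

rng = np.random.default_rng(5)
M = rng.normal(size=400)
T = rng.exponential(np.exp(-M))          # higher marker -> earlier failure
for t in (0.2, 0.5, 1.0):
    print(f"AUC({t}) = {auc_t(T, M, t):.2f}")
```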


    Design of nonlinear terminal guidance/autopilot controller for missiles with pulse type input devices

    ASIAN JOURNAL OF CONTROL, Issue 3 2010
    Fu-Kuang Yeh
    Abstract This investigation addresses a nonlinear terminal guidance/autopilot controller with pulse-type control inputs for intercepting a theater ballistic missile in the exoatmospheric region. Appropriate initial conditions on the terminal phase are assumed to apply after the end of the midcourse operation. Accordingly, the terminal controller seeks to minimize the distance between the commanded missile and the target missile to ensure a hit-to-kill interception. In particular, a 3D terminal guidance law is initially developed to eliminate the so-called "sliding velocity," thus constraining the relative motion between the missile and the target along the line of sight. Sliding mode control is adopted to design stable pulse-type control systems. Then, a quaternion-based attitude controller is used to orient the commanded missile appropriately, taking into account the fact that the missile is a rigid body, to realize interceptability. The stability of the overall integrated terminal guidance/autopilot system is then analyzed thoroughly, based on Lyapunov stability theory. Finally, extensive simulations are conducted to verify the validity and effectiveness of the integrated controller with the pulse-type inputs developed herein. Copyright © 2010 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society [source]


    Gaussian Process Based Bayesian Semiparametric Quantitative Trait Loci Interval Mapping

    BIOMETRICS, Issue 1 2010
    Hanwen Huang
    Summary In linkage analysis, it is often necessary to include covariates such as age or weight to increase power or avoid spurious false positive findings. However, if a covariate term in the model is specified incorrectly (e.g., a quadratic term misspecified as a linear term), then the inclusion of the covariate may adversely affect power and accuracy of the identification of quantitative trait loci (QTL). Furthermore, some covariates may interact with each other in a complicated fashion. We implement semiparametric models for single and multiple QTL mapping. Both mapping methods include an unspecified function of any covariate found or suspected to have a more complex than linear but unknown relationship with the response variable. They also allow for interactions among different covariates. This analysis is performed in a Bayesian inference framework using Markov chain Monte Carlo. The advantages of our methods are demonstrated via extensive simulations and real data analysis. [source]


    A Comparison of Eight Methods for the Dual-Endpoint Evaluation of Efficacy in a Proof-of-Concept HIV Vaccine Trial

    BIOMETRICS, Issue 3 2006
    Devan V. Mehrotra
    Summary To support the design of the world's first proof-of-concept (POC) efficacy trial of a cell-mediated immunity-based HIV vaccine, we evaluate eight methods for testing the composite null hypothesis of no vaccine effect on either the incidence of HIV infection or the viral load set point among those infected, relative to placebo. The first two methods use a single test applied to the actual values or ranks of a burden-of-illness (BOI) outcome that combines the infection and viral load endpoints. The other six methods combine separate tests for the two endpoints using unweighted or weighted versions of the two-part z, Simes', and Fisher's methods. Based on extensive simulations that were used to design the landmark POC trial, the BOI methods are shown to have generally low power for rejecting the composite null hypothesis (and hence advancing the vaccine to a subsequent large-scale efficacy trial). The unweighted Simes' and Fisher's combination methods perform best overall. Importantly, this conclusion holds even after the test for the viral load component is adjusted for bias that can be introduced by conditioning on a postrandomization event (HIV infection). The adjustment is derived using a selection bias model based on the principal stratification framework of causal inference. [source]
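
    For concreteness, here are the unweighted Simes and Fisher combinations of two component p-values, which the abstract identifies as the best performers; the p-values shown are placeholders, not trial results.

```python
import numpy as np
from scipy import stats

def simes(p1, p2):
    # Two-endpoint Simes combination: min over ordered p-values of m*p_(i)/i,
    # which for m = 2 reduces to min(2*min(p), max(p)).
    return min(2 * min(p1, p2), max(p1, p2))

def fisher(p1, p2):
    # Fisher's combination: -2*sum(log p) ~ chi-square with 4 df under the null.
    stat = -2 * (np.log(p1) + np.log(p2))
    return stats.chi2.sf(stat, df=4)

p_infection, p_viral_load = 0.04, 0.30    # illustrative component p-values
print("Simes :", simes(p_infection, p_viral_load))    # -> 0.08
print("Fisher:", fisher(p_infection, p_viral_load))   # -> ~0.065
```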