Minimum Number (minimum + number)

Selected Abstracts


Locating a Surveillance Infrastructure in and Near Ports or on Other Planar Surfaces to Monitor Flows

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 2 2010
Pitu B. Mirchandani
This article addresses the problem of locating surveillance radars to cover a given target surface that may have barriers through which radar signals cannot penetrate. The area of coverage of a radar is assumed to be a disc, or a partial disc when there are barriers, with a known radius. The article shows that the corresponding location problems relate to two well-studied problems: the set-covering model and the maximal covering problem. In the first problem, the minimum number of radars is to be located to completely cover the target area; in the second problem, a given number M of radars is to be located to cover the target area as much as possible. Based on a discrete representation of the target area, a Lagrangian heuristic and a two-stage procedure with a conquer-and-divide scaling are developed to solve the above two models. The computational experience reported demonstrates that the developed method solves the radar location problems formulated here effectively. [source]
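
The set-covering variant above can be made concrete with a few lines of code. The sketch below is not the article's Lagrangian heuristic or its conquer-and-divide scaling; it is a minimal greedy set-covering pass over a hypothetical discretized target area (grid size, candidate sites and coverage radius are all invented), and it ignores barriers.

```python
import itertools, math

def covers(site, point, radius):
    """True if the point lies within `radius` of the radar site.
    (Barrier / line-of-sight checks, which the article handles, are omitted.)"""
    return math.dist(site, point) <= radius

def greedy_set_cover(points, candidate_sites, radius):
    """Pick radar sites one at a time, always choosing the site that covers the
    most still-uncovered target points (classic greedy set-covering heuristic)."""
    uncovered, chosen = set(points), []
    while uncovered:
        best = max(candidate_sites,
                   key=lambda s: sum(1 for p in uncovered if covers(s, p, radius)))
        newly = {p for p in uncovered if covers(best, p, radius)}
        if not newly:                      # remaining points are uncoverable
            break
        chosen.append(best)
        uncovered -= newly
    return chosen, uncovered

# Hypothetical 10 x 10 discretized target area; candidate sites on the same grid.
points = list(itertools.product(range(10), repeat=2))
radars, missed = greedy_set_cover(points, points, radius=2.5)
print(f"{len(radars)} radars cover {len(points) - len(missed)} of {len(points)} points")
```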


Distributed end-host multicast algorithms for the Knowledge Grid

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2007
Wanqing Tu
Abstract The Knowledge Grid built on top of the peer-to-peer (P2P) network has been studied to implement scalable, available and semantic-based querying. In order to improve the efficiency and scalability of querying, this paper studies the problem of multicasting queries in the Knowledge Grid. An m-dimensional irregular mesh is a popular overlay topology of P2P networks. We present a set of novel distributed algorithms on top of an m-dimensional irregular mesh overlay to provide end-host multicast services with short delay and low network resource consumption. Our end-host multicast fully utilizes the advantages of an m-dimensional mesh to construct a two-layer architecture. Compared to previous approaches, the novelty and contribution here are: (1) cluster formation that partitions the group members into clusters in the lower layer, where each cluster consists of a small number of members; (2) cluster core selection that searches, for each cluster, for the core with the minimum sum of overlay hops to all other cluster members; (3) weighted shortest path tree construction that guarantees the minimum number of shortest paths to be occupied by the multicast traffic; (4) distributed multicast routing that directs the multicast messages to be efficiently distributed along the two-layer multicast architecture in parallel, without global control; the routing scheme enables the packets to be transmitted to the remote end hosts within short delays through some common shortest paths; and (5) multicast path maintenance that restores normal communication once a membership alteration appears. Simulation results show that our end-host multicast can achieve, in a distributed manner, multicast services with shorter delay and lower network resource consumption than some well-known end-host multicast systems. Copyright © 2006 John Wiley & Sons, Ltd. [source]
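
The core-selection rule in item (2) — pick, for each cluster, the member with the minimum sum of overlay hops to all other members — is easy to sketch. The snippet below assumes a hypothetical overlay given as an adjacency dictionary and uses plain BFS hop counts; it illustrates only that one rule, not the full two-layer architecture.

```python
from collections import deque

def hop_counts(graph, src):
    """BFS hop distances from src over an undirected overlay graph (adjacency dict)."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def select_core(graph, cluster):
    """Return the cluster member with the minimum sum of overlay hops
    to all other members (the core-selection rule described in the abstract)."""
    def cost(member):
        d = hop_counts(graph, member)
        return sum(d.get(other, float("inf")) for other in cluster)
    return min(cluster, key=cost)

# Hypothetical overlay; nodes 'a'..'e' form one lower-layer cluster.
overlay = {"a": ["b"], "b": ["a", "c", "d"], "c": ["b", "e"], "d": ["b"], "e": ["c"]}
print(select_core(overlay, ["a", "b", "c", "d", "e"]))  # -> 'b'
```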


Errors in the Interpretation of Mohs Histopathology Sections Over a 1-Year Fellowship

DERMATOLOGIC SURGERY, Issue 12 2008
MICHAEL E. MURPHY MD
BACKGROUND Errors can occur in the interpretation of Mohs histopathology sections. Errors in histology interpretation can lead to incomplete removal of cancer and cancer persistence or the unnecessary removal of uninvolved tissue. Extensive proctored training is necessary to reduce these errors to an absolute minimum level. OBJECTIVE To analyze and quantify the number of cases and the amount of time required to reach a satisfactory level of expertise in the reading and interpretation of Mohs histopathology. METHODS A single-institution pilot study was designed to track errors in the interpretation and mapping of Mohs histopathology sections. A Mohs surgery fellow independently preread Mohs cases and rendered his interpretation on the Mohs map. One of the Mohs program directors subsequently reviewed and corrected all cases. Errors were scored on a graded scale and tracked over the 1-year fellowship to determine the number of cases and amount of time necessary to reduce errors to a baseline minimal level. RESULTS One thousand four hundred ninety-one Mohs surgery cases were required to generate 1,347 pathology specimens for review and grading over 6 months of Mohs surgery fellowship before reducing errors to a minimum acceptable level of less than 1 critical error per 100 cases read. CONCLUSIONS The number of cases and time required to reduce errors in the interpretation of Mohs histology is substantial. Direct and immediate mentored correction of errors is essential for improvement. These results can act as a guide for Mohs surgery training programs to help determine the minimum number of directly proctored cases required to obtain expertise in this crucial component of Mohs surgery. [source]


A minimum sample size required from Schmidt hammer measurements

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 13 2009
Tomasz Niedzielski
Abstract The Schmidt hammer is a useful tool applied by geomorphologists to measure rock strength in field conditions. The essence of field application is to obtain a sufficiently large dataset of individual rebound values, which yields a meaningful numerical value of mean strength. Although there is general agreement that a certain minimum sample size is required to proceed with the statistics, the choice of size (i.e. number of individual impacts) was usually intuitive and arbitrary. In this paper we show a simple statistical method, based on the two-sample Student's t-test, to objectively estimate the minimum number of rebound measurements. We present the results as (1) the 'mean' and 'median' solutions, each providing a single estimate value, and (2) the empirical probability distribution of such estimates based on many field samples. Schmidt hammer data for 14 lithologies, 13–81 samples for each, with each sample consisting of 40 individual readings, have been evaluated, assuming different significance levels. The principal recommendations are: (1) the recommended minimum sample size for weak and moderately strong rock is 25; (2) a sample size of 15 is sufficient for sandstones and shales; (3) strong and coarse rocks require 30 readings at a site; (4) the minimum sample size may be reduced by one-third if the context of research allows for a higher significance level for test statistics. Interpretations based on fewer than 10 readings from a site should definitely be avoided. Copyright © 2009 John Wiley & Sons, Ltd. [source]
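
The abstract does not give the authors' exact formula, so the sketch below uses the standard two-sample t-test sample-size calculation as a stand-in: the smallest number of readings per site needed to resolve a given mean rebound difference at a chosen significance level and power. The within-site standard deviation and the detectable difference are hypothetical values.

```python
import math
from scipy import stats

def min_n_two_sample_t(sd, delta, alpha=0.05, power=0.8):
    """Smallest per-site sample size for a two-sample t-test to detect a mean
    rebound difference `delta`, given within-site standard deviation `sd`.
    Standard normal-approximation formula, refined with the t quantile."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    n = 2 * (sd * (z_a + z_b) / delta) ** 2           # normal approximation
    for _ in range(10):                                # iterate with t quantile
        df = 2 * (math.ceil(n) - 1)
        t_a = stats.t.ppf(1 - alpha / 2, df)
        n = 2 * (sd * (t_a + z_b) / delta) ** 2
    return math.ceil(n)

# Hypothetical values: rebound SD of 6 units, wish to resolve a 5-unit difference.
print(min_n_two_sample_t(sd=6.0, delta=5.0))   # roughly mid-20s readings per site
```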


Optimization of Monte Carlo Procedures for Value at Risk Estimates

ECONOMIC NOTES, Issue 1 2002
Sabrina Antonelli
This paper proposes a methodology which improves the computational efficiency of the Monte Carlo simulation approach to value at risk (VaR) estimation. Principal components analysis is used to reduce the number of relevant sources of risk driving the portfolio dynamics. Moreover, large deviations techniques are used to provide an estimate of the minimum number of price scenarios to be simulated to attain a given accuracy. Numerical examples are provided and show the good performance of the proposed methodology. (J.E.L.: C15, G1). [source]
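
A minimal sketch of the two ingredients named above, on invented data: principal components analysis to shrink 20 correlated risk factors down to a few dominant ones, followed by Monte Carlo VaR on the reduced factor space. The scenario count is simply fixed here; the paper instead derives the minimum number of scenarios from a large-deviations accuracy bound, which is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical covariance with a few dominant factors, and portfolio weights.
A = rng.normal(size=(20, 3))
cov = 1e-4 * (A @ A.T) + 1e-6 * np.eye(20)
weights = rng.uniform(0, 1, 20)

# --- Principal components: keep the components explaining 95% of the variance.
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
k = int(np.searchsorted(np.cumsum(eigval) / eigval.sum(), 0.95)) + 1

# --- Monte Carlo on the reduced factor space (scenario count fixed for illustration).
n_scenarios = 10_000
factor_draws = rng.normal(size=(n_scenarios, k)) * np.sqrt(eigval[:k])
scenario_returns = factor_draws @ eigvec[:, :k].T
pnl = scenario_returns @ weights
var_99 = -np.quantile(pnl, 0.01)
print(f"kept {k} of 20 components; 1-day 99% VaR ~ {var_99:.5f}")
```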


Integration of genotoxicity and population genetic analyses in kangaroo rats (Dipodomys merriami) exposed to radionuclide contamination at the Nevada Test Site, USA

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 2 2001
Christopher W. Theodorakis
Abstract We examined effects of radionuclide exposure at two atomic blast sites on kangaroo rats (Dipodomys merriami) at the Nevada Test Site, Nevada, USA, using genotoxicity and population genetic analyses. We assessed chromosome damage by micronucleus and flow cytometric assays and genetic variation by randomly amplified polymorphic DNA (RAPD) and mitochondrial DNA (mtDNA) analyses. The RAPD analysis showed no population structure, but mtDNA exhibited differentiation among and within populations. Genotoxicity effects were not observed when all individuals were analyzed. However, individuals with mtDNA haplotypes unique to the contaminated sites had greater chromosomal damage than contaminated-site individuals with haplotypes shared with reference sites. When interpopulation comparisons used individuals with unique haplotypes, one contaminated site had greater levels of chromosome damage than one or both of the reference sites. We hypothesize that shared-haplotype individuals are potential migrants and that unique-haplotype individuals are potential long-term residents. A parsimony approach was used to estimate the minimum number of migration events necessary to explain the haplotype distributions on a phylogenetic tree. The observed predominance of migration events into the contaminated sites supported our migration hypothesis. We conclude the atomic blast sites are ecological sinks and that immigration masks the genotoxic effects of radiation on the resident populations. [source]


Optimal measurement placement for security constrained state estimation using hybrid genetic algorithm and simulated annealing

EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 2 2009
T. Kerdchuen
Abstract This paper proposes a hybrid genetic algorithm and simulated annealing (HGS) approach for solving the optimal measurement placement problem for power system state estimation. Even though the minimum number of measurement pairs is N when single measurement loss is considered, their positions must still be chosen to make the system observable. The HGS algorithm is a genetic algorithm (GA) using the acceptance criterion of simulated annealing (SA) for chromosome selection. The P, observable concept is used to check the network observability with and without single measurement pair loss contingency and single branch outage. Test results for the 10-bus and IEEE 14-, 30-, 57-, and 118-bus systems indicate that HGS is superior to tabu search (TS), GA, and SA in terms of higher frequency of the best hit and faster computational time. Copyright © 2007 John Wiley & Sons, Ltd. [source]
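
The defining idea of HGS — a GA whose offspring are accepted or rejected with the Metropolis criterion of SA — can be sketched independently of the power-system details. In the toy below, a simple bus-coverage test stands in for the paper's observability check (with its single-loss and branch-outage contingencies), and all problem data are invented.

```python
import math, random

random.seed(0)

# Toy stand-in for observability: every bus must be touched by at least one
# selected measurement. The paper uses a proper observability test instead.
BUSES = range(10)
CANDIDATES = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6),
              (6, 7), (7, 8), (8, 9), (9, 0), (1, 5), (2, 7)]

def feasible(mask):
    covered = {b for sel, (i, j) in zip(mask, CANDIDATES) if sel for b in (i, j)}
    return covered >= set(BUSES)

def cost(mask):
    return sum(mask) if feasible(mask) else sum(mask) + 100   # penalize infeasibility

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.1):
    return [bit ^ (random.random() < rate) for bit in mask]

def hgs(pop_size=20, generations=200, temp=5.0, cooling=0.98):
    pop = [[random.randint(0, 1) for _ in CANDIDATES] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for parent in pop:
            child = mutate(crossover(parent, random.choice(pop)))
            delta = cost(child) - cost(parent)
            # SA acceptance criterion used for chromosome selection (the HGS idea).
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                new_pop.append(child)
            else:
                new_pop.append(parent)
        pop = new_pop
        temp *= cooling
    return min(pop, key=cost)

best = hgs()
print("measurements kept:", sum(best), "feasible:", feasible(best))
```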


Survivable wavelength-routed optical network design using genetic algorithms

EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 3 2008
Y. S. Kavian
The provision of acceptable service in the presence of failures and attacks is a major issue in the design of next generation dense wavelength division multiplexing (DWDM) networks. Survivability is provided by the establishment of spare lightpaths for each connection request to protect the working lightpaths. This paper presents a genetic algorithm (GA) solver for the routing and wavelength assignment problem with working and spare lightpaths to design survivable optical networks in the presence of a single link failure. Lightpaths are encoded into chromosomes made up of a fixed number of genes equal to the number of entries in the traffic demand matrix. Each gene represents one valid path and is thus coded as a variable length binary string. After crossover and mutation, each member of the population represents a set of valid but possibly incompatible paths and those that do not satisfy the problem constraints are discarded. The best paths are then found by use of a fitness function and these are assigned the minimum number of wavelengths according to the problem constraints. The proposed approach has been evaluated on dedicated path protection and shared path protection. Simulation results show that the GA method is efficient and able to design DWDM survivable real-world optical mesh networks. Copyright © 2007 John Wiley & Sons, Ltd. [source]


Kinetic and crystallographic analysis of complexes formed between elastase and peptides from β-casein

FEBS JOURNAL, Issue 10 2001
Penny A. Wright
Human β-casomorphin-7 (NH2-Tyr-Pro-Phe-Val-Glu-Pro-Ile-CO2H) is a naturally occurring peptide inhibitor of elastase that has been shown to form an acyl-enzyme complex stable enough for X-ray crystallographic analysis at pH 5. To investigate the importance of the N-terminal residues of the β-casomorphin-7 peptide for the inhibition of elastase, kinetic and crystallographic analyses were undertaken to identify the minimum number of residues required for effective formation of a stable complex between truncated β-casomorphin-7 peptides and porcine pancreatic elastase (PPE). The results clearly demonstrate that significant inhibition of PPE can be effected by simple tri-, tetra- and pentapeptides terminating in a carboxylic acid. These results also suggest that in vivo regulation of protease activity could be mediated via short peptides as well as by proteins. Crystallographic analysis of the complex formed between N-acetyl-Val-Glu-Pro-Ile-CO2H and PPE at pH 5 (to 1.67 Å resolution) revealed an active site water molecule in an analogous position to that observed in the PPE/β-casomorphin-7 structure, supportive of its assignment as the 'hydrolytic water' in the deacylation step of serine protease catalysis. [source]


Genetic immunity and influenza pandemics

FEMS IMMUNOLOGY & MEDICAL MICROBIOLOGY, Issue 1 2006
Sergey N. Rumyantsev
Abstract In addition to the great number of publications focused on the leading role of virus mutations and reassortment in the origin of pandemic influenza, general opinion emphasizes the victim side of the epidemic process. Based on the analysis and integration of relevant ecological, epidemiological, clinical, genetic and experimental data, the present article is focused on the evolution of 'virus–victim' ecological systems resulting in the formation of innate (i.e. genetic, constitutional) immunity in the involved species and populations. This kind of immunity functions today as the greatest natural barrier to the pandemic spread of influenza among humans and ecologically related kinds of animals. Global influenza pandemics can arise when the worldwide population contains at least a minimum number of people susceptible to a known or mutant influenza virus. Special attention is paid in this article to individual tests for the presence of this barrier, including the implications of specific findings for public health policy. Such tests could be based on in vitro observation of the action of relevant virus strains on primary cell cultures or on their cellular or molecular components extracted from individuals. The resources of the Human Genome Project should also be utilized. [source]


How good are the Electrodes we use in PEFC?

FUEL CELLS, Issue 3 2004
M. Eikerling
Abstract Basically, companies and laboratories implement production methods for their electrodes on the basis of experience, technical capabilities and commercial preferences. But how does one know whether they have ended up with the best possible electrode for the components used? What should be the (i) optimal thickness of the catalyst layer? (ii) relative amounts of electronically conducting component (catalyst, with support – if used), electrolyte and pores? (iii) "particle size distributions" in these mesophases? We may be pleased with our MEAs, but could we make them better? The details of excellently working MEA structures are typically not a subject of open discussion; moreover, hardly anyone in the fuel cell business would like to admit that their electrodes could have been made much better. Therefore, we only rarely find (far from systematic) experimental reports on this most important issue. The message of this paper is to illustrate how strongly the MEA morphology could affect the performance and to pave the way for the development of the theory. A full analysis should address the performance at different current densities, which is possible and is partially shown in this paper, but vital trends can be demonstrated on the linear polarization resistance, the signature of electrode performance. The latter is expressed through the minimum number of key parameters characterizing the processes taking place in the MEA. Model expressions of percolation theory can then be used to approximate the dependence on these parameters. The effects revealed are dramatic. Of course, the corresponding curves will not be reproduced literally in experiments, since these illustrations use crude expressions inspired by the theory of percolation on a regular lattice, whereas the actual mesoscopic architecture of the MEA is much more complicated. However, they give us a flavour of the reserves that might be released by smart MEA design. [source]


A Probabilistic Method for Estimating Monitoring Point Density for Containment System Leak Detection

GROUND WATER, Issue 4 2000
Randall R. Ross
The use of physical and hydraulic containment systems for the isolation of contaminated ground water and aquifer materials associated with hazardous waste sites has increased during the last decade. The existing methodologies for monitoring and evaluating leakage from hazardous waste containment systems rely primarily on limited hydraulic head data. The number of hydraulic head monitoring points available at most sites employing physical containment systems may be insufficient to identify significant leaks from the systems. A probabilistic approach for evaluating the performance of containment systems, based on estimations of apparent leakage rates, is used to introduce a methodology for determining the minimum number of monitoring points necessary to identify the hydraulic signature of leakage from a containment system. The probabilistic method is based on the principles of geometric probability. The method is demonstrated using three-dimensional ground water flow modeling results of leakage through a vertical barrier. The results indicate that the monitoring point spacing used at many hazardous waste sites likely is inadequate to detect the hydraulic signatures of all but the largest leaks. [source]
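
A rough illustration of the geometric-probability idea, not the paper's three-dimensional flow-model analysis: if the hydraulic signature of a leak is idealized as a circle of known radius, the chance that a square monitoring grid of a given spacing catches it can be estimated by Monte Carlo. The signature radius and candidate spacings below are hypothetical.

```python
import math, random

random.seed(2)

def detection_probability(spacing, signature_radius, trials=20_000):
    """Probability that a circular leak 'signature' of the given radius, centred
    uniformly at random within one grid cell, contains at least one monitoring
    point of a square grid with the given spacing (simple geometric-probability model)."""
    hits = 0
    for _ in range(trials):
        x, y = random.uniform(0, spacing), random.uniform(0, spacing)
        # the nearest grid point is one of the four cell corners
        nearest = min(math.hypot(x - gx, y - gy)
                      for gx in (0.0, spacing) for gy in (0.0, spacing))
        hits += nearest <= signature_radius
    return hits / trials

# Hypothetical leak signature radius of 15 m; compare candidate grid spacings.
for spacing in (60, 45, 30, 20):
    p = detection_probability(spacing, signature_radius=15.0)
    print(f"spacing {spacing:>3} m -> detection probability {p:.2f}")
```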


The impact of joint bleeding and synovitis on physical ability and joint function in a murine model of haemophilic synovitis

HAEMOPHILIA, Issue 1 2008
C. MEJIA-CARVAJAL
Summary. Haemophilia is a congenital disorder that commonly results in musculoskeletal bleeding and orthopaedic complications. After an acute joint haemorrhage, an increase in intra-articular pressure and inflammation cause pain, swelling and limited motion. Blood in the joint space provokes a proliferative disorder known as haemophilic synovitis. Overgrowth of the synovial membrane causes mechanical dysfunction. Eventually, there is destruction of the articular surface and underlying bone. The aim of this project was to test the hypothesis that a minimum number of haemarthroses negatively impacts on joint function and that this would be reflected by decreased physical performance of experimental animals. Mice deficient in factor VIII coagulant activity were trained to ambulate on a rotating rod then injured three times at weekly intervals. Their ability to walk was then compared to a group of uninjured mice. Cohorts of mice were killed after 1, 2 or 3 months and the knee joints examined by gross and histological methods. The results supported the following conclusions: (i) haemophilic mice can be trained to ambulate on a rotating rod; (ii) acute hemarthrosis temporarily impairs their ability to ambulate and (iii) following recovery from acute injury, mice developing synovitis demonstrated inferior physical ability compared to mice not developing synovitis. This is the first description of a quantitative assay to monitor joint function in experimental animals and should be useful to evaluate the efficacy of new therapies developed to prevent and treat bleeding and to test strategies to counter the devastating effects of synovitis. [source]


Rainfall network design using kriging and entropy

HYDROLOGICAL PROCESSES, Issue 3 2008
Yen-Chang Chen
Abstract The spatial distribution of rainfall is related to meteorological and topographical factors. An understanding of the weather and topography is required to select the locations of the rain gauge stations in the catchment to obtain the optimum information. In theory, a well-designed rainfall network can accurately represent and provide the needed information of rainfall in the catchment. However, the available rainfall data are rarely adequate in the mountainous area of Taiwan. In order to provide enough rainfall data to assure the success of water projects, the rainfall network based on the existing rain gauge stations has to be redesigned. A method composed of kriging and entropy that can determine the optimum number and spatial distribution of rain gauge stations in catchments is proposed. Kriging as an interpolator, which performs linear averaging to reconstruct the rainfall over the catchment on the basis of the observed rainfall, is used to compute the spatial variations of rainfall. Thus, the rainfall data at the locations of the candidate rain gauge stations can be reconstructed. The information entropy reveals the rainfall information of each rain gauge station in the catchment. By calculating the joint entropy and the transmitted information, the candidate rain gauge stations are prioritized. In addition, the saturation of rainfall information can be used to add or remove the rain gauge stations. Thus, the optimum spatial distribution and the minimum number of rain gauge stations in the network can be determined. The catchment of the Shimen Reservoir in Taiwan is used to illustrate the method. The result shows that only seven rain gauge stations are needed to provide the necessary information. Copyright © 2007 John Wiley & Sons, Ltd. [source]
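
Only the entropy side of the method is sketched here (the kriging reconstruction of candidate-station records is omitted): stations are added greedily in order of their joint-entropy gain, and the process stops once the gain saturates. The station records, bin count and stopping threshold below are all hypothetical.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)

def joint_entropy(series, bins=5):
    """Empirical joint entropy (in bits) of several discretized rainfall series."""
    digitized = [np.digitize(s, np.histogram_bin_edges(s, bins)[1:-1]) for s in series]
    counts = Counter(zip(*digitized))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def prioritize(stations, stop_gain=0.5):
    """Greedily add the station that most increases joint entropy; stop when the
    marginal information gain saturates (the add/remove criterion in the abstract)."""
    chosen, remaining = [], dict(stations)
    while remaining:
        base = joint_entropy([stations[c] for c in chosen]) if chosen else 0.0
        gains = {name: joint_entropy([*(stations[c] for c in chosen), s]) - base
                 for name, s in remaining.items()}
        name, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain < stop_gain:
            break
        chosen.append(name)
        del remaining[name]
    return chosen

# Hypothetical monthly rainfall at five candidate gauges; B and E are noisy copies of A.
base = rng.gamma(2.0, 50.0, size=240)
stations = {"A": base,
            "B": base + rng.normal(0, 5, 240),
            "C": rng.gamma(2.0, 60.0, size=240),
            "D": rng.gamma(1.5, 40.0, size=240),
            "E": base + rng.normal(0, 5, 240)}
print("stations to keep:", prioritize(stations))
```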


Semi-blind fast equalization of QAM channels using concurrent gradient-Newton CMA and soft decision-directed scheme

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 6 2010
S. Chen
Abstract This contribution considers semi-blind adaptive equalization for communication systems that employ high-throughput quadrature amplitude modulation signalling. A minimum number of training symbols, approximately equal to the dimension of the equalizer, are first utilized to provide a rough initial least-squares estimate of the equalizer's weight vector. A novel gradient-Newton concurrent constant modulus algorithm and soft decision-directed scheme are then applied to adapt the equalizer. The proposed semi-blind adaptive algorithm is capable of converging fast and accurately to the optimal minimum mean-square error equalization solution. Simulation results obtained demonstrate that the convergence speed of this semi-blind adaptive algorithm is close to that of the training-based recursive least-square algorithm. Copyright © 2009 John Wiley & Sons, Ltd. [source]
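
For orientation, the constant-modulus part of the scheme can be reduced to a few lines. The sketch below is only the plain stochastic-gradient CMA update for a linear equalizer on a made-up QPSK channel; the paper's algorithm is a gradient-Newton CMA run concurrently with a soft decision-directed scheme, and its least-squares initialization from the short training block is omitted here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy setup: QPSK symbols through a short FIR channel with additive noise.
n = 5000
symbols = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
channel = np.array([1.0, 0.35 + 0.2j, -0.15j])
received = (np.convolve(symbols, channel)[:n]
            + 0.01 * (rng.normal(size=n) + 1j * rng.normal(size=n)))

# Linear equalizer adapted with the constant modulus algorithm (CMA).
taps = 11
w = np.zeros(taps, complex)
w[taps // 2] = 1.0                                     # centre-spike initialisation
R2 = np.mean(np.abs(symbols) ** 4) / np.mean(np.abs(symbols) ** 2)  # dispersion constant
mu = 1e-3
for k in range(taps, n):
    x = received[k - taps:k][::-1]                     # regressor, newest sample first
    y = np.vdot(w, x)                                  # equalizer output w^H x
    e = y * (np.abs(y) ** 2 - R2)                      # CMA error term
    w -= mu * e.conjugate() * x                        # stochastic-gradient update

outputs = np.array([np.vdot(w, received[k - taps:k][::-1]) for k in range(taps, n)])
print("modulus spread after adaptation:", float(np.std(np.abs(outputs[-1000:]))))
```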


Synthesis of general impedance with simple dc/dc converters for power processing applications

INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 3 2008
J. C. P. Liu
Abstract A general impedance synthesizer using a minimum number of switching converters is studied in this paper. We begin with showing that any impedance can be synthesized by a circuit consisting of only two simple power converters, one storage element (e.g. capacitor) and one dissipative element (e.g. resistor) or power source. The implementation of such a circuit for synthesizing any desired impedance can be performed by (i) programming the input current given the input voltage such that the desired impedance function is achieved, (ii) controlling the amount of power dissipation (generation) in the dissipative element (source) so as to match the required active power of the impedance to be synthesized. Then, the instantaneous power will be automatically balanced by the storage element. Such impedance synthesizers find a lot of applications in power electronics. For instance, a resistance synthesizer can be used for power factor correction (PFC), a programmable capacitor or inductor synthesizer (comprising small high-frequency converters) can be used for control applications. Copyright © 2007 John Wiley & Sons, Ltd. [source]


A theory of tie-set graph and its application to information network management

INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 4 2001
Norihiko Shinomiya
Abstract This paper presents a new circuit theoretical concept based on the principal partition theorem for distributed network management focusing on loops of an information network. To realize simple network management with the minimum number of local agents, namely the topological degrees of freedom of a graph, a reduced loop agent graph generated by contracting the minimal principal minor is proposed. To investigate the optimal distribution of the loop agents, a theory of tie-set graph is proposed. Considering the total processing load of loop agents, a complexity of a tie-set graph is introduced to obtain the simplest tie-set graph with the minimum complexity. As for the simplest tie-set graph search, an experimental result shows that the computational time depends heavily on the nullity of the original graph. Therefore, a tie-set graph with the smallest nullity is essential for network management. Copyright © 2001 John Wiley & Sons, Ltd. [source]
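
The "topological degrees of freedom" mentioned above is the graph nullity, E − V + 1 for a connected graph, i.e. the number of independent loops. The sketch below computes the fundamental tie-sets from a BFS spanning tree of a hypothetical network; the paper's tie-set graph construction and complexity measure are not reproduced.

```python
from collections import deque

def fundamental_tie_sets(nodes, edges):
    """Build a BFS spanning tree; every non-tree edge (chord) closes exactly one
    fundamental loop (tie-set). The number of tie-sets equals the nullity
    E - V + 1 of a connected graph -- the 'topological degrees of freedom'."""
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    root = next(iter(nodes))
    parent, queue = {root: None}, deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    tree = {frozenset((v, p)) for v, p in parent.items() if p is not None}

    def path_to_root(v):
        path = [v]
        while parent[path[-1]] is not None:
            path.append(parent[path[-1]])
        return path

    tie_sets = []
    for u, v in edges:
        if frozenset((u, v)) not in tree:               # chord edge
            pu, pv = path_to_root(u), path_to_root(v)
            common = set(pu) & set(pv)
            nca = next(x for x in pu if x in common)    # nearest common ancestor
            loop = ([x for x in pu if x not in common] + [nca]
                    + [x for x in pv if x not in common][::-1])
            tie_sets.append(loop)
    return tie_sets

# Hypothetical 6-node network with 8 links: nullity = 8 - 6 + 1 = 3 tie-sets.
nodes = list("abcdef")
edges = [("a","b"),("b","c"),("c","d"),("d","e"),("e","f"),("f","a"),("b","e"),("c","f")]
loops = fundamental_tie_sets(nodes, edges)
print(len(loops), "tie-sets:", loops)
```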


The influence of the tropical and subtropical Atlantic and Pacific Oceans on precipitation variability over Southern Central South America on seasonal time scales

INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 4 2004
Guillermo J. Berri
Abstract This paper studies the temporal and spatial patterns of precipitation anomalies over southern central South America (SCSA; 22–40°S and 54–70°W), and their relationship with the sea-surface temperature (SST) variability over the surrounding tropical and subtropical Atlantic and Pacific Oceans. The data include monthly precipitation from 68 weather stations in central–northern Argentina and neighbouring Brazil, Paraguay and Uruguay, and monthly SSTs from the NOAA dataset with a 2° resolution for the period 1961–93. We use the method of canonical correlation analysis (CCA) to study the simultaneous relationship between bi-monthly precipitation and SST variability. Before applying the CCA procedure, standardized anomalies are calculated and a prefiltering is applied by means of an empirical orthogonal function (EOF) analysis. Thus, the CCA input consists of 10 EOF modes of SST and between 9 and 11 modes for precipitation and their corresponding principal components, which are the minimum number of modes necessary to explain at least 80% of the variance of the corresponding field. The results show that November–December presents the most robust association between the SST and SCSA precipitation variability, especially in northeastern Argentina and southern Brazil, followed by March–April and May–June. The period January–February, in contrast, displays a weak relationship with the oceans and represents a temporal minimum of oceanic influence during the summer semester. Based on the CCA maps, we identify the different oceanic and SCSA regions, calculate the regional averages of SST and precipitation, and conduct linear correlation analyses. The periods with greater association between the oceans and SCSA precipitation are November–December and May–June. During November–December, every selected region over SCSA reflects the influence of several oceanic regions, whereas during May–June only a few regions show a direct association with the oceans. The Pacific Ocean regions have a greater influence and are more widespread over SCSA; the Atlantic Ocean regions have an influence only over the northwestern and the southeastern parts of SCSA. In general, the relationship with the equatorial and tropical Atlantic and Pacific Oceans is of the type warm–wet/cold–dry, whereas the subtropical regions of both oceans show the opposite relationship, i.e. warm–dry/cold–wet. Copyright © 2004 Royal Meteorological Society [source]


Optimizing Patching-based multicast for video-on-demand in wireless mesh networks

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 9-10 2010
Fei Xie
Abstract In this work, we study the application of video-on-demand (VoD) in wireless mesh networks (WMN), a next generation edge technology to provide broadband data access in residential, business and even city-wide networks. We adopt a Patching-based multicast technique to better utilize the bandwidth resources in the mesh network. We optimize the Patching-based multicast by addressing two critical problems, namely, the Minimum Cost Multicast Tree (MCMT) problem and the Maximum Benefit Multicast Group (MBMG) problem. The MCMT problem is to find a MCMT in the network. We show that finding such a tree in the WMN can be formulated as a graph theory problem: finding the tree with the minimum number of non-leaf nodes that spans all the nodes in the multicast group. We further prove the problem is NP-hard and propose a fast greedy algorithm to accommodate the real-time feature of the VoD application. We solve the MBMG problem by minimizing the communication cost of a Patching group in the entire network. A Markov model is proposed to capture the growth of the multicast group in the WMN. Simulation study results validate the proposed solutions of the two problems. Copyright © 2009 John Wiley & Sons, Ltd. [source]
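
The MCMT objective — a tree spanning the multicast group with as few non-leaf (forwarding) nodes as possible — can be made concrete on a toy mesh. The brute-force sketch below searches for the smallest connected node set that dominates every group member; attaching the remaining members as leaves then yields such a tree. It is exponential and meant only to illustrate the objective, whereas the paper proposes a fast greedy heuristic for real-time use; all mesh data below are invented.

```python
from itertools import combinations

def is_connected(nodes, adj):
    nodes = set(nodes)
    if not nodes:
        return True
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(v for v in adj[u] if v in nodes and v not in seen)
    return seen == nodes

def min_forwarding_set(adj, group):
    """Brute-force the smallest connected node set R that 'dominates' every group
    member (each member is in R or adjacent to R). A spanning tree of R with the
    remaining members attached as leaves is then a multicast tree whose non-leaf
    nodes all lie in R -- a toy stand-in for the MCMT objective."""
    all_nodes = list(adj)
    for size in range(1, len(all_nodes) + 1):
        for R in combinations(all_nodes, size):
            dominated = all(m in R or any(v in R for v in adj[m]) for m in group)
            if dominated and is_connected(R, adj):
                return set(R)
    return set(all_nodes)

# Hypothetical wireless mesh (adjacency list) and a multicast group.
mesh = {1: [2, 4], 2: [1, 3, 5], 3: [2, 6], 4: [1, 5, 7], 5: [2, 4, 6, 8],
        6: [3, 5, 9], 7: [4, 8], 8: [5, 7, 9], 9: [6, 8]}
group = [1, 3, 7, 9]
print("forwarding (non-leaf) nodes:", min_forwarding_set(mesh, group))
```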


A peer-to-peer IPTV service architecture for the IP multimedia subsystem

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 6-7 2010
A. Bikfalvi
Abstract In recent years, the Internet Protocol Television (IPTV) service and the various peer-to-peer (P2P) technologies have generated increasing interest among developers and the research community, who see in them a solution to the scalability problem of media streaming while reducing costs at the same time. However, despite the benefits obtained in Internet-based applications and the growing deployment of commercial IPTV systems, there has been little effort to combine the two. With the advent of next-generation-network platforms such as the IP Multimedia Subsystem (IMS), which advocates an open and inter-operable service infrastructure, P2P emerges as a possible solution in situations where the traditional streaming mechanisms are not possible or not economically feasible. In this paper, we propose an IPTV service architecture for the IMS that combines a centralized control layer and a distributed, P2P-like, media layer that relies on the IMS devices or peers located in the customers' premises to act as streaming forwarding nodes. We extend the existing IMS IPTV standardization work that has already been done in 3GPP and ETSI TISPAN in order to require a minimum number of architectural changes. The objective is to obtain a system with a performance similar to that of currently deployed systems and with the flexibility of P2P. One of the main challenges is to achieve comparable response times to user actions such as changing and tuning into channels, as well as providing a fast recovery mechanism when streaming nodes leave. To accomplish this we introduce the idea of foster peers as peers having inactive multimedia sessions and reserved resources. These peers are on stand-by until their functionality is required; at that moment, they are able to accept downstream peers at short notice for events requiring urgent treatment, such as channel changing and recovery. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Performance analysis of a reuse partitioning technique for multi-channel cellular systems supporting elastic services,

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 3 2009
Gábor Fodor
Abstract For multi-cell systems employing intra-cell orthogonal communication channels, inter-cell interference mitigation techniques are expected to be one of the key radio resource management functions. In this paper we propose and analyze a simple reuse partitioning technique (with random and coordinated resource block allocation in neighbor cells) that is able to reduce inter-cell interference. We propose a model that is able to take into account that sessions dynamically enter and leave the system. Rigid sessions require a class-specific fixed number of resource blocks, while elastic sessions can enter the system if a minimum number of resources are allocated to them. In this rather general setting (and using the example of a system employing frequency division for multiple access) we analyze the system performance in terms of the expected number of channel collisions, the session-blocking probabilities, the signal-to-interference-and-noise ratio (SINR) and packet error rate performance. We present numerical results on the various trade-offs between these measures (including the trade-off between the reuse factor and the SINR performance) that provide insight into the behavior of multi-channel cellular systems and help dimensionalize the parameters of a reuse partitioned system. Copyright © 2008 John Wiley & Sons, Ltd. [source]


Blocking performance of fixed-paths least-congestion routing in multifibre WDM networks

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 2-3 2002
Ling Li
Abstract Wavelength-routed all-optical networks have been receiving significant attention for high-capacity transport applications. Good routing and wavelength assignment (RWA) algorithms are critically important in order to improve the performance of wavelength-routed WDM networks. Multifibre WDM networks, in which each link consists of multiple fibres and each fibre carries information on multiple wavelengths, offer the advantage of reducing the effect of the wavelength continuity constraint without using wavelength converters. A wavelength that cannot continue on the next hop on the same fibre can be switched to another fibre using an optical cross-connect (OXC) if the same wavelength is free on one of the other fibres. However, the cost of a multifibre network is likely to be higher than a single-fibre network with the same capacity, because more amplifiers and multiplexers/demultiplexers may be required. The design goal of a multifibre network is to achieve a high network performance with the minimum number of fibres. In this paper, we study the blocking performance of fixed-paths least-congestion (FPLC) routing in multifibre WDM networks. A new analytical model with the consideration of link-load correlation is developed to evaluate the blocking performance of the FPLC routing. The analytical model is a generalized model that can be used in both regular (e.g. mesh-torus) and irregular (e.g. NSFnet) networks. It is shown that the analytical results closely match the simulation results, which indicate that the model is adequate in analytically predicting the performance of the FPLC routing in different networks. Two FPLC routing algorithms, wavelength trunk (WT)-based FPLC and lightpath (LP)-based FPLC, are developed and studied. Our analytical and simulation results show that the LP-based FPLC routing algorithm can use multiple fibres more efficiently than the WT-based FPLC and the alternate path routing. In both the mesh-torus and NSFnet networks, a limited number of fibres is sufficient to guarantee high network performance. Copyright © 2002 John Wiley & Sons, Ltd. [source]


Equivalence principle for optimization of sparse versus low-spread representations for signal estimation in noise

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 1 2005
Radu V. Balan
Abstract Estimation of a sparse signal representation, one with the minimum number of nonzero components, is hard. In this paper, we show that for a nontrivial set of the input data the corresponding optimization problem is equivalent to and can be solved by an algorithm devised for a simpler optimization problem. The simpler optimization problem corresponds to estimation of signals under a low-spread constraint. The goal of the two optimization problems is to minimize the Euclidean norm of the linear approximation error with an lp penalty on the coefficients, for p = 0 (sparse) and p = 1 (low-spread), respectively. The l0 problem is hard, whereas the l1 problem can be solved efficiently by an iterative algorithm. Here we precisely define the l0 optimization problem, construct an associated l1 optimization problem, and show that for a set with open interior of the input data the optimizers of the two optimization problems have the same support. The associated l1 optimization problem is used to find the support of the l0 optimizer. Once the support of the l0 problem is known, the actual solution is easily found by solving a linear system of equations. However, we point out that our approach does not solve the harder optimization problem for all input data and thus may fail to produce the optimal solution in some cases. © 2005 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 15, 10–17, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20034 [source]
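
A compact illustration of the two-step idea, with an invented dictionary and signal: solve the l1 ("low-spread") problem by iterative soft-thresholding, read off the support, and then obtain the sparse estimate by ordinary least squares restricted to that support. The paper's precise equivalence conditions are not checked here.

```python
import numpy as np

rng = np.random.default_rng(5)

def ista(A, b, lam, iters=2000):
    """Iterative soft-thresholding for the l1-penalized least-squares problem
    min 0.5 * ||Ax - b||^2 + lam * ||x||_1  (the 'low-spread' problem)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                # gradient of the smooth term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

# Hypothetical dictionary and a 3-sparse signal observed in light noise.
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[7, 42, 91]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=40)

x_l1 = ista(A, b, lam=0.5)
support = np.flatnonzero(np.abs(x_l1) > 1e-6)
# With the support known, the sparse (l0-style) estimate is a plain least-squares
# fit restricted to those columns -- the two-step procedure described in the abstract.
coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
x_hat = np.zeros(100)
x_hat[support] = coef
print("recovered support:", support.tolist())
```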


Autonomic self-organization architecture for wireless sensor communications

INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 3 2007
Jiann-Liang Chen
Wireless sensor nodes may be spread over large areas and long distances, and require multi-hop communications between nodes, making direct management of numerous wireless sensor nodes inefficient. Hierarchical management can be adopted to control several nodes. Effectively controlling the top-level nodes can decrease the costs of managing nodes and of the communication among them. The lower-level nodes are controlled and organized by the higher-level nodes. This study presents a self-organization algorithm for higher-level nodes that contests member nodes over multiple hops to form hierarchical clusters and applies the '20/80 rule' to determine the ratio of headers to member nodes. Furthermore, the broadcast tree is constructed with the minimum number of hops. Simulation results indicate that the mechanism has a 6–22% lower cover loss than other approaches. The average delay of the minimum hop count approach is 0.22–1.57 ms less than that of the free hop count approach. The simulation also reveals the influence of the 20/80 rule on cluster formation between sensor nodes. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Synthesis of symmetric and asymmetric singly terminated elliptic ladder filters for multiplexing applications

INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 4 2009
José R. Montejo-Garai
Abstract An extension of the Cauer ladder development for synthesizing singly terminated filters with symmetric and asymmetric responses is presented. Basically, a driving-point immittance including reactive constant elements is developed in such a way that it provides the transmission zeros. The reactive constant elements are introduced into the synthesis for two reasons. The first is to allow for the asymmetric position of transmission zeros on the real frequency axis. The second is to obtain canonical forms, i.e. networks with the minimum number of elements in the case of symmetrical responses. To validate the proposed method, a filter with an asymmetrical response has been synthesized, comparing different topologies for its use in multiplexers. This is illustrated with a Ku-band elliptic response diplexer designed in H-plane rectangular waveguide. © 2009 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2009. [source]


Application of genetic algorithm for scheduling and schedule coordination problems

JOURNAL OF ADVANCED TRANSPORTATION, Issue 1 2002
Prabhat Shrivastava
Scheduling and schedule coordination problems usually have conflicting objectives related to users' costs and operators' costs. Users want to spend less time waiting, transferring and travelling on public buses. Operators are interested in making a profit through lower vehicle operating costs and a minimum number of buses. As far as level of service is concerned, users are interested in less crowding, while operators are concerned with maximizing profit and thus prefer higher load factors. In schedule coordination problems, transfer time plays an important role. Users are interested in coordinated services within an acceptable waiting time, whereas operators prefer to run fewer services and want to meet higher demand, which invariably increases waiting time. These problems have multiple conflicting objectives and constraints. It is difficult to determine the optimum solution for such problems with conventional approaches. It is found that the genetic algorithm performs well for such multi-objective problems. [source]


The number of CD34+ cells in peripheral blood as a predictor of the CD34+ yield in patients going to autologous stem cell transplantation

JOURNAL OF CLINICAL APHERESIS, Issue 2 2006
A.L. Basquiera
Abstract The number of CD34+ cells in peripheral blood (PB) is a guide to the optimal timing to harvest peripheral blood progenitor cells (PBPC). The objective was to determine the number of CD34+ cells in PB that allows achieving a final apheresis product containing ≥1.5 × 10^6 CD34+ cells/kg, performing up to three aphereses. Between March 1999 and August 2003, patients with hematological and solid malignancies who underwent leukapheresis for autologous bone marrow transplantation were prospectively evaluated. Seventy-two aphereses in 48 patients were performed (mean 1.45 per patient; range 1–3). PBPC were mobilized with cyclophosphamide plus recombinant human granulocyte-colony stimulating factor (G-CSF) (n = 40), other chemotherapy drugs plus G-CSF (n = 7), or G-CSF alone (n = 1). We found a strong correlation between the CD34+ cell count in peripheral blood and the CD34+ cells yielded (r = 0.903; P < 0.0001). Using receiver-operating characteristic (ROC) curves, the minimum number of CD34+ cells in PB to obtain ≥1.5 × 10^6/kg in the first apheresis was 16.48 cells/µL (sensitivity 100%; specificity 95%). The best cut-off point necessary to obtain the same target in the final harvest was 15.48 cells/µL, performing up to three aphereses (sensitivity 89%; specificity 100%). In our experience, ≥15 CD34+ cells/µL is the best predictor to begin the apheresis procedure. Based on this threshold level, it is possible to achieve at least 1.5 × 10^6/kg CD34+ cells in the graft with ≤3 collections. J. Clin. Apheresis 2005. © 2005 Wiley-Liss, Inc. [source]
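
The ROC-based choice of a CD34+ cut-off can be sketched as follows, on entirely synthetic patient data (the real study used the collected apheresis outcomes): compute the ROC curve of the peripheral-blood count against whether the target yield was reached and pick the threshold maximizing Youden's index. The distributions and the exact selection criterion below are assumptions for illustration only.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(6)

# Hypothetical patients: peripheral-blood CD34+ counts (cells/uL) and whether the
# final harvest reached the >= 1.5e6 CD34+ cells/kg target.
pb_cd34 = np.concatenate([rng.gamma(2.0, 4.0, 40),      # poor mobilizers
                          rng.gamma(6.0, 8.0, 60)])      # good mobilizers
reached_target = np.concatenate([np.zeros(40, int), np.ones(60, int)])

fpr, tpr, thresholds = roc_curve(reached_target, pb_cd34)
youden = tpr - fpr                                        # sensitivity + specificity - 1
best = np.argmax(youden)
print(f"cut-off ~ {thresholds[best]:.1f} cells/uL "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```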


Mixture Interpretation: Defining the Relevant Features for Guidelines for the Assessment of Mixed DNA Profiles in Forensic Casework,

JOURNAL OF FORENSIC SCIENCES, Issue 4 2009
Bruce Budowle Ph.D.
Abstract: Currently in the United States there is little direction for what constitutes sufficient guidelines for DNA mixture interpretation. While a standardized approach is not possible or desirable, more definition is necessary to ensure reliable interpretation of results is carried out. In addition, qualified DNA examiners should be able to review reports and understand the assumptions made by the analyst who performed the interpretation. Interpretation of DNA mixture profiles requires consideration of a number of aspects of a mixed profile, many of which need to be established by on-site, internal validation studies conducted by a laboratory's technical staff, prior to performing casework analysis. The relevant features include: criteria for identification of mixed specimens, establishing detection and interpretation threshold values, defining allele peaks, defining nonallele peaks, identifying artifacts, consideration of tri-allelic patterns, estimating the minimum number of contributors, resolving components of a mixture, determining when a portion of the mixed profile can be treated as a single source profile, consideration of potential additive effects of allele sharing, impact of stutter peaks on interpretation in the presence of a minor contributor, comparison with reference specimens, and some issues related to the application of mixture calculation statistics. Equally important is using sensible judgment based on sound and documented principles of DNA analyses. Assumptions should be documented so that reliable descriptive information is conveyed adequately concerning that mixture and what were the bases for the interpretations that were carried out. Examples are provided to guide the community. Interpretation guidelines also should incorporate strategies to minimize potential bias that could occur by making inferences based on a reference sample. The intent of this paper is to promote more thought, provide assistance on many aspects for consideration, and to support that more formalized mixture interpretation guidelines are developed. [source]
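
One of the listed features, estimating the minimum number of contributors, has a well-known lower-bound rule that is easy to state in code: since each contributor carries at most two alleles per locus, the locus with the most distinct alleles implies at least ceil(count / 2) contributors. The locus names and allele calls below are hypothetical, and real casework adds caveats (allele sharing, stutter, drop-out) that this simple bound ignores.

```python
import math

def min_contributors(profile):
    """Minimum number of contributors under the 'maximum allele count' rule:
    each contributor carries at most two alleles per locus, so the locus showing
    the most distinct alleles sets a lower bound of ceil(max_count / 2)."""
    return max(math.ceil(len(set(alleles)) / 2) for alleles in profile.values())

# Hypothetical mixed profile: alleles detected per STR locus.
mixture = {
    "D8S1179": [10, 12, 13, 14, 15],
    "D21S11":  [28, 29, 30, 31.2],
    "TH01":    [6, 7, 9.3],
    "FGA":     [20, 22, 23, 24, 25, 26],
}
print("at least", min_contributors(mixture), "contributors")   # FGA: 6 alleles -> 3
```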


Skeletal Estimation and Identification in American and East European Populations,

JOURNAL OF FORENSIC SCIENCES, Issue 3 2008
Erin H. Kimmerle Ph.D.
Abstract: Forensic science is a fundamental transitional justice issue as it is imperative for providing physical evidence of crimes committed and a framework for interpreting evidence and prosecuting violations to International Humanitarian Law (IHL). The evaluation of evidence presented in IHL trials and the outcomes various rulings by such courts have in regard to the accuracy or validity of methods applied in future investigations is necessary to ensure scientific quality. Accounting for biological and statistical variation in the methods applied across populations and the ways in which such evidence is used in varying judicial systems is important because of the increasing amount of international forensic casework being done globally. Population variation or the perceived effect of such variation on the accuracy and reliability of methods is important as it may alter trial outcomes, and debates about the scientific basis for human variation are now making their way into international courtrooms. Anthropological data on population size (i.e., the minimum number of individuals in a grave), demographic structure (i.e., the age and sex distribution of victims), individual methods applied for identification, and general methods of excavation and trauma analysis have provided key evidence in cases of IHL. More generally, the question of population variation and the applicability of demographic methods for estimating individual and population variables is important for American and International casework in the face of regional population variation, immigrant populations, ethnic diversity, and secular changes. The reliability of various skeletal aging methods has been questioned in trials prosecuted by the International Criminal Tribunal for the Former Yugoslavia (ICTY) in The Prosecutor of the Tribunal against Radislav Krstić (Case No. IT-98-33, Trial Judgment) and again in the currently ongoing trial of The Prosecutor of the Tribunal against Zdravko Tolimir, Radivolje Miletić, Milan Gvero, Vinko Pandurević, Ljubisa Beara, Vujadin Popović, Drago Nikolić, Milorad Trbić, Ljubomir Borovcanin (IT-05-88-PT, Second Amended Indictment). Following the trial of General Krstić, a collaborative research project was developed between the Forensic Anthropology Center at The University of Tennessee (UT) and the United Nations, International Criminal Tribunal for the Former Yugoslavia, Office of the Prosecutor (ICTY). The purpose of that collaboration was to investigate methods used for the demographic analysis of forensic evidence and where appropriate to recalibrate methods for individual estimation of age, sex, and stature for specific use in the regions of the former Yugoslavia. The question of "local standards" and challenges to the reliability of current anthropological methods for biological profiling in international trials of IHL, as well as the performance of such methods to meet the evidentiary standards used by international tribunals is investigated. Anthropological methods for estimating demographic parameters are reviewed. An overview of the ICTY-UT collaboration for research aimed at addressing specific legal issues is discussed and sample reliability for Balkan aging research is tested. The methods currently used throughout the Balkans are discussed and estimated demographic parameters obtained through medico-legal death investigations are compared with identified cases.
Based on this investigation, recommendations for improving international protocols for evidence collection, presentation, and research are outlined. [source]


Survey of endoscopic ultrasonographic practice and training in the Asia-Pacific region

JOURNAL OF GASTROENTEROLOGY AND HEPATOLOGY, Issue 8 2006
Khek Yu Ho
Abstract Background:, Little is known about the current status of endoscopic ultrasonography (EUS) training in the Asia,Pacific region. The aim of the present study was to assess EUS practice and training in the Asia,Pacific region and seek to identify areas where the development of EUS expertise could be further enhanced. Methods:, A direct mail survey was sent out to 87 practising endosonographers in various parts of the Asia,Pacific region outside of Japan. They were asked to report on their prior training, utilization of EUS, and EUS training in their country. Results:, The respondents (n = 71) were mostly young (median age 40 years), male (97%), practising in academia (36.6%) or public hospitals (50.7%) and fairly experienced (median 5 years) in EUS practices; they had performed a median of 500 procedures in their career. Among them, 49.3% were self-taught. Only 22.5% and 21.1% had undergone formal overseas fellowship lasting ,6 months, and local gastrointestinal fellowships of various durations, respectively. Fifty-six percent were currently involved in EUS teaching. Most (90%) thought that a formal EUS training fellowship is necessary for acquiring acceptable competence and there should be a minimum number (median 100) of supervised procedures performed and minimum amount of time (median 6 months) spent on training. Conclusions:, Although EUS practitioners in the Asia,Pacific region were not behind their European or US counterparts in hands-on experience, the lack of formal EUS training programs and opportunities remains an area of concern. For the region to increase EUS utilization, the current shortage of training opportunities needs to be addressed. [source]