Log N (log + n)

Kinds of Log N

  • n log n

  • Selected Abstracts

    Adaptive integral method combined with the loose GMRES algorithm for planar structures analysis

    W. Zhuang
Abstract In this article, the adaptive integral method (AIM) is used to analyze large-scale planar structures. Discretization of the corresponding integral equations by the method of moments (MoM) with Rao-Wilton-Glisson (RWG) basis functions can model arbitrarily shaped planar structures, but usually leads to a fully populated matrix. AIM maps these basis functions onto a rectangular grid, where the Toeplitz property of the Green's function can be exploited, enabling the matrix-vector multiplication to be computed with the fast Fourier transform (FFT). This reduces the memory requirement from O(N²) to O(N) and the operation complexity from O(N²) to O(N log N), where N is the number of unknowns. The resultant equations are then solved by the loose generalized minimal residual method (LGMRES) to accelerate iteration, which converges much faster than the conventional conjugate gradient (CG) method. Furthermore, several preconditioning techniques are employed to enhance the computational efficiency of the LGMRES. Some typical microstrip circuits and microstrip antenna arrays are analyzed, and numerical results show that the preconditioned LGMRES converges much faster than the conventional LGMRES. © 2008 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2009. [source]
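
The O(N log N) matrix-vector product underlying such FFT-accelerated schemes rests on a standard trick: a Toeplitz matrix embeds in a circulant matrix of at most twice the size, and circulants are diagonalized by the FFT. A minimal pure-Python sketch of that trick (not the AIM itself; the function names and the toy radix-2 FFT are illustrative only):

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def ifft(a):
    n = len(a)
    return [x / n for x in fft(a, invert=True)]

def toeplitz_matvec(col, row, x):
    """Multiply the Toeplitz matrix T (first column `col`, first row `row`)
    by x in O(N log N) by embedding T in a power-of-two-size circulant."""
    n = len(x)
    assert col[0] == row[0]
    m = 1
    while m < 2 * n:
        m *= 2
    # First column of the circulant embedding: [col | zero padding | reversed tail of row]
    c = list(col) + [0] * (m - 2 * n + 1) + list(row[1:][::-1])
    xp = list(x) + [0] * (m - n)
    # Circulant matvec = elementwise product in Fourier space
    y = ifft([a * b for a, b in zip(fft([complex(v) for v in c]),
                                    fft([complex(v) for v in xp]))])
    return [y[k].real for k in range(n)]
```

A naive product costs O(N²); here the three transforms each cost O(N log N), which is the source of the complexity reduction quoted in the abstract.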

    Bistatic phase function and fast solution of scattering by 2D random distributed scatterers

    Jianjun Guo
Abstract We present large-scale Monte Carlo simulation results for the phase functions in multiple scattering by dense media of small 2D particles. The solution of the Foldy–Lax equations with a large number of unknowns is obtained efficiently using the sparse-matrix canonical-grid (SMCG) method. The SMCG method facilitates the use of the FFT and results in an N log N-type efficiency for CPU and O(N) for memory. This dependence is demonstrated by simulations of CPU time using up to 50,000 particles that are randomly distributed, through random walk, in a large area of 400 square wavelengths. The bistatic phase functions for a random medium are computed. The phase function converges with the number of particles and the number of realizations. The simulation results indicate that nonsticky particles, sticky particles, and independent scattering have similar angular distribution patterns of the phase functions. However, the dense sticky particles show stronger scattering than the independent scattering, while the dense nonsticky particles show weaker scattering than the independent scattering. © 2003 Wiley Periodicals, Inc. Microwave Opt Technol Lett 38: 313–317, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.11047 [source]

    Applications of transformed-space non-uniform PSTD (TSNU-PSTD) in scattering analysis without the use of the non-uniform FFT

    Xiaoping Liu
Abstract In this work, we extend the transformed-space, non-uniform pseudo-spectral time domain (TSNU-PSTD) Maxwell solver to 2D scattering analysis. Prior to implementing the PSTD in this analysis, we first transform the non-uniform grids {xi} and {yj}, sampled in real space to describe complex geometries, to uniform ones {ui} and {vj}, in order to fit the dimensions of practical structures and utilize the standard fast Fourier transform (FFT). Next, we use a uniform-sampled, standard FFT to represent spatial derivatives in the (u, v) space domain. It is found that this scheme is as efficient as the conventional uniform PSTD, with a computational complexity of O(N log N), since the only difference between the conventional PSTD and the TSNU-PSTD technique is the factors du/dx and dv/dy. Additionally, we apply an anisotropic version of Berenger's perfectly matched layers (APML) to suppress the wraparound effect at the open boundaries of the computational domain, which is caused by the periodicity of the FFT. We also employ the pure scattered-field formulation and develop a near-to-far-zone field transformation in order to calculate scattered far fields. © 2003 Wiley Periodicals, Inc. Microwave Opt Technol Lett 38: 16–21, 2003 [source]

A homogeneous sample of sub-damped Lyman α systems – IV.

    Global metallicity evolution
ABSTRACT An accurate method to measure the abundance of high-redshift galaxies involves the observation of absorbers along the line of sight towards a background quasar. Here, we present abundance measurements of 13 z ≈ 3 sub-damped Lyman α (sub-DLA) systems (quasar absorbers with H i column density in the range 19 < log N(H i) < 20.3 cm⁻²) based on high-resolution observations with the VLT UVES spectrograph. These observations more than double the amount of metallicity information for sub-DLAs available at z > 3. These new data, combined with other sub-DLA measurements from the literature, confirm the stronger evolution of metallicity with redshift for sub-DLAs than for the classical damped Lyman α absorbers. In addition, these observations are used to compute for the first time, using photoionization modelling in a sample of sub-DLAs, the fraction of gas that is ionized. Based on these results, we calculate that sub-DLAs contribute no more than 6 per cent of the expected amount of metals at z ≈ 2.5. We therefore conclude that, even if sub-DLAs are found to be more metal-rich than classical DLAs, their contribution is insufficient to solve the so-called 'missing-metals' problem. [source]

Coincident, 100 kpc scale damped Lyα absorption towards a binary QSO: how large are galaxies at z ≈ 3?

    Sara L. Ellison
ABSTRACT We report coincident damped Lyα (DLA) and sub-DLA absorption at zabs = 2.66 and zabs = 2.94 towards the z ≈ 3, 13.8-arcsec separation binary quasar SDSS 1116+4118 AB. At the redshifts of the absorbers, this angular separation corresponds to a proper transverse separation of ≈110 h70⁻¹ kpc. A third absorber, a sub-DLA at zabs = 2.47, is detected towards SDSS 1116+4118 B, but no corresponding high column density absorber is present towards SDSS 1116+4118 A. We use high-resolution galaxy simulations and a clustering analysis to interpret the coincident absorption and its implications for galaxy structure at z ≈ 3. We conclude that the common absorption in the two lines of sight is unlikely to arise from a single galaxy, or a galaxy plus satellite system, and is more feasibly explained by a group of two or more galaxies with separations ≈100 kpc. The impact of these findings on single line-of-sight observations is also discussed; we show that abundances of DLAs may be affected by up to a few tenths of a dex by line-of-sight DLA blending. From a Keck Echellette Spectrograph and Imager spectrum of the two quasars, we measure metal column densities for all five absorbers and determine abundances for the three absorbers with log N(H i) > 20. For the two highest N(H i) absorbers, we determine high levels of metal enrichment, corresponding to 1/3 and 1/5 Z⊙. These metallicities are amongst the highest measured for DLAs at any redshift and are consistent with values measured in Lyman-break galaxies at 2 < z < 3. For the DLA at zabs = 2.94, we also infer an approximately solar ratio of α-to-Fe-peak elements from [S/Zn] = +0.05, and measure an upper limit for the molecular fraction in this particular line of sight of log f(H2) < −5.5. [source]

The ROSAT Brightest Cluster Sample – IV.

    The extended sample
We present a low-flux extension of the X-ray-selected ROSAT Brightest Cluster Sample (BCS) published in Paper I of this series. Like the original BCS and employing an identical selection procedure, the BCS extension is compiled from ROSAT All-Sky Survey (RASS) data in the northern hemisphere (δ ≥ 0°) and at high Galactic latitudes (|b| ≥ 20°). It comprises 99 X-ray-selected clusters of galaxies with measured redshifts z ≤ 0.3 (as well as eight more at z > 0.3) and total fluxes between 2.8×10⁻¹² and 4.4×10⁻¹² erg cm⁻² s⁻¹ in the 0.1–2.4 keV band (the latter value being the flux limit of the original BCS). The extension can be combined with the main sample published in 1998 to form the homogeneously selected extended BCS (eBCS), the largest and statistically best understood cluster sample to emerge from the RASS to date. The nominal completeness of the combined sample (defined with respect to a power-law fit to the bright end of the BCS log N–log S distribution) is relatively low at 75 per cent (compared with 90 per cent for the high-flux sample of Paper I). However, just as for the original BCS, this incompleteness can be accurately quantified, and thus statistically corrected for, as a function of X-ray luminosity and redshift. In addition to its importance for improved statistical studies of the properties of clusters in the local Universe, the low-flux extension of the BCS is also intended to serve as a finding list for X-ray-bright clusters in the northern hemisphere which we hope will prove useful in the preparation of cluster observations with the next generation of X-ray telescopes such as Chandra and XMM-Newton. An electronic version of the eBCS can be obtained from the following URL: http://www.ifa.hawaii.edu/~ebeling/clusters/BCS.html. [source]

    The two-median problem on Manhattan meshes

    Mordecai J. Golin
Abstract We investigate the two-median problem on a mesh with M columns and N rows (M ≥ N), under the Manhattan (L1) metric. We derive exact algorithms with respect to m, n, and r, the number of columns, rows, and vertices, respectively, that contain requests. Specifically, we give an O(mn² log m) time, O(r) space algorithm for general (nonuniform) meshes (assuming m ≥ n). For uniform meshes, we give two algorithms, both using O(MN) space. One is an O(MN²) time algorithm, while the other runs in O(MN log N) time with high probability and in O(MN²) time in the worst case, assuming the weights are independent and identically distributed random variables satisfying certain natural conditions. These improve upon the previously best-known algorithm, which runs in O(mn²r) time. © 2007 Wiley Periodicals, Inc. NETWORKS, Vol. 49(3), 226–233 2007 [source]

    Analysis of a circulant based preconditioner for a class of lower rank extracted systems

    S. Salapaka
Abstract This paper proposes and studies the performance of a preconditioner suitable for solving a class of symmetric positive definite systems, Ap x = b, which we call lower rank extracted systems (LRES), by the preconditioned conjugate gradient method. These systems correspond to integral equations with convolution kernels defined on a union of many line segments, in contrast to only one line segment in the case of Toeplitz systems. The p × p matrix, Ap, is shown to be a principal submatrix of a larger N × N Toeplitz matrix, AN. The preconditioner is provided in terms of the inverse of a 2N × 2N circulant matrix constructed from the elements of AN. The preconditioner is shown to yield clustering in the spectrum of the preconditioned matrix similar to the clustering results for iterative algorithms used to solve Toeplitz systems. The analysis also demonstrates that the computational expense to solve LRES is reduced to O(N log N). Copyright © 2004 John Wiley & Sons, Ltd. [source]
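
The preconditioned conjugate gradient iteration referred to above is independent of the particular preconditioner used. A compact sketch follows, with a simple Jacobi (diagonal) preconditioner standing in for the circulant preconditioner analyzed in the paper; the function names and the toy 3×3 system are illustrative assumptions only:

```python
def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for a symmetric positive
    definite A (list-of-lists). M_inv is a callable applying the
    preconditioner's inverse to a vector."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = b[:]                      # residual b - A x, with x = 0
    z = M_inv(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = M_inv(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Toy system: Jacobi preconditioner as a stand-in for the circulant one.
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
jacobi = lambda v: [v[i] / A[i][i] for i in range(len(v))]
x = pcg(A, b, jacobi)
```

The point of a good preconditioner, circulant or otherwise, is to cluster the spectrum of the preconditioned matrix so that the number of CG iterations stays bounded; each iteration then costs only a matvec plus one preconditioner solve, which for circulants is O(N log N) via the FFT.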

Measuring ΩM and ΩΛ with long-duration gamma-ray bursts

    A. Balastegui
Abstract Gamma-ray bursts (GRBs) are among the most luminous events in the Universe. In addition, the Universe itself is almost transparent to γ-rays, making GRBs detectable up to very high redshifts. As a result, GRBs are very suitable probes of the cosmological parameters. This work shows the potential of long-duration GRBs for measuring the cosmological parameters ΩM and ΩΛ by comparing the observed log N–log P distribution with the theoretical one. Provided that the GRB rate and luminosity function are well determined, the best values and 1σ confidence intervals obtained are ΩM = 0.22 (+0.05, −0.03) and ΩΛ = 1.06 (+0.05, −0.10). Finally, a set of simulations shows the ability of the method to measure ΩM and ΩΛ. (© 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

    A Comparison of Tabular PDF Inversion Methods

    D. Cline
I.3.0 [Computer Graphics]: General
Abstract The most common form of tabular inversion used in computer graphics is to compute the cumulative distribution table of a probability density function (PDF) and then search within it to transform points, using an O(log n) binary search. Besides the standard inversion method, however, several other discrete inversion algorithms exist that can perform the same transformation in O(1) time per point. In this paper, we examine the performance of three of these alternate methods, two of which are new. [source]
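
The two families of methods being compared can be sketched as follows: standard tabular inversion via an O(log n) binary search on the cumulative table, and Walker's alias method as one classic O(1)-per-sample discrete alternative. Whether the paper's own methods resemble the alias method is not stated in the abstract; this is only a generic illustration:

```python
import bisect

def make_cdf(pdf):
    """Cumulative distribution table for a discrete PDF."""
    cdf, total = [], 0.0
    for p in pdf:
        total += p
        cdf.append(total)
    cdf[-1] = 1.0  # guard against floating-point rounding
    return cdf

def sample_cdf(cdf, u):
    """Standard tabular inversion: O(log n) binary search."""
    return bisect.bisect_left(cdf, u)

def make_alias(pdf):
    """Walker's alias tables: O(n) setup, O(1) per sample."""
    n = len(pdf)
    prob = [p * n for p in pdf]
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                      # overflow from column l tops up column s
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    return prob, alias

def sample_alias(prob, alias, u):
    """O(1) sampling; reuses one uniform draw for column and coin flip."""
    i = int(u * len(prob))
    frac = u * len(prob) - i
    return i if frac < prob[i] else alias[i]
```

Both transforms map a uniform variate u in [0, 1) to an index distributed according to the table; the alias method trades O(n) extra setup for constant-time queries.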

    Parallel divide-and-conquer scheme for 2D Delaunay triangulation

    Min-Bin Chen
Abstract This work describes a parallel divide-and-conquer Delaunay triangulation scheme. This algorithm finds the affected zone, which covers the triangulation and may be modified when two sub-block triangulations are merged. Finding the affected zone can reduce the amount of data required to be transmitted between processors. The time complexity of the divide-and-conquer scheme remains O(n log n), and the affected region can be located in O(n) time steps, where n denotes the number of points. The code was implemented with C, FORTRAN and MPI, making it portable to many computer systems. Experimental results on an IBM SP2 show that a parallel efficiency of 44–95% for general distributions can be attained on a 16-node distributed memory system. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Simultaneous diagonal flips in plane triangulations

    Prosenjit Bose
Abstract Simultaneous diagonal flips in plane triangulations are investigated. It is proved that every triangulation with n ≥ 6 vertices has a simultaneous flip into a 4-connected triangulation, and that the set of edges to be flipped can be computed in O(n) time. It follows that every triangulation has a simultaneous flip into a Hamiltonian triangulation. This result is used to prove that for any two n-vertex triangulations, there exists a sequence of O(log n) simultaneous flips to transform one into the other. Moreover, Ω(log n) simultaneous flips are needed for some pairs of triangulations. The total number of edges flipped in this sequence is O(n). The maximum size of a simultaneous flip is then studied. It is proved that every triangulation has a simultaneous flip of at least a constant fraction of the edges. On the other hand, every simultaneous flip has at most n − 2 edges, and there exist triangulations whose maximum simultaneous flip contains only a constant fraction of the edges. © 2006 Wiley Periodicals, Inc. J Graph Theory 54: 307–330, 2007 [source]

    On the complexity of Rocchio's similarity-based relevance feedback algorithm

    Zhixiang Chen
Rocchio's similarity-based relevance feedback algorithm, one of the most important query reformulation methods in information retrieval, is essentially an adaptive learning algorithm from examples in searching for documents represented by a linear classifier. Despite its popularity in various applications, there is little rigorous analysis of its learning complexity in the literature. In this article, the authors prove for the first time that the learning complexity of Rocchio's algorithm is O(d + d²(log d + log n)) over the discretized vector space {0, …, n − 1}d, when the inner product similarity measure is used. The upper bound on the learning complexity for searching for documents represented by a monotone linear classifier over {0, …, n − 1}d can be improved to, at most, 1 + 2k(n − 1)(log d − log(n − 1)), where k is the number of nonzero components in the query vector q. Several lower bounds on the learning complexity are also obtained for Rocchio's algorithm. For example, the authors prove that Rocchio's algorithm has a lower bound on its learning complexity over the Boolean vector space {0, 1}d. [source]
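
For context, the update rule whose learning complexity is analyzed above is the classic Rocchio reformulation q' = αq + β·centroid(relevant) − γ·centroid(nonrelevant). A sketch with conventional parameter defaults (the defaults and the negative-weight clipping are common practice, assumed here rather than taken from the article):

```python
def rocchio_update(q, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio query reformulation:
    q' = alpha*q + beta*mean(relevant) - gamma*mean(nonrelevant).
    Vectors are equal-length lists of term weights; negative weights
    are clipped to zero, as is common in practice."""
    d = len(q)

    def centroid(docs):
        if not docs:
            return [0.0] * d
        return [sum(doc[i] for doc in docs) / len(docs) for i in range(d)]

    cr, cn = centroid(relevant), centroid(nonrelevant)
    return [max(0.0, alpha * q[i] + beta * cr[i] - gamma * cn[i])
            for i in range(d)]
```

Each feedback round moves the query vector toward the centroid of judged-relevant documents and away from the nonrelevant ones; the complexity bounds in the abstract count how many such rounds are needed to pin down the target classifier.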

    Model Selection for Broadband Semiparametric Estimation of Long Memory in Time Series

    Clifford M. Hurvich
We study the properties of Mallows' CL criterion for selecting a fractional exponential (FEXP) model for a Gaussian long-memory time series. The aim is to minimize the mean squared error of a corresponding regression estimator dFEXP of the memory parameter, d. Under conditions which do not require that the data were actually generated by a FEXP model, it is known that the mean squared error MSE = E[(dFEXP − d)²] can converge to zero as fast as (log n)/n, where n is the sample size, assuming that the number of parameters grows slowly with n in a deterministic fashion. Here, we suppose that the number of parameters in the FEXP model is chosen so as to minimize a local version of CL, restricted to frequencies in a neighborhood of zero. We show that, under appropriate conditions, the expected value of the local CL is asymptotically equivalent to the MSE. A combination of theoretical and simulation results gives guidance as to the choice of the degree of locality in CL. [source]

    The backup 2-center and backup 2-median problems on trees

    Hung-Lung Wang
    Abstract In this paper, we are concerned with the problem of deploying two servers in a tree network, where each server may fail with a given probability. Once a server fails, the other server will take full responsibility for the services. Here, we assume that the servers do not fail simultaneously. In the backup 2-center problem, we want to deploy two servers at the vertices such that the expected distance from a farthest vertex to the closest functioning server is minimum. In the backup 2-median problem, we want to deploy two servers at the vertices such that the expected sum of distances from all vertices to the set of functioning servers is minimum. We propose an O(n)-time algorithm for the backup 2-center problem and an O(n log n)-time algorithm for the backup 2-median problem, where n is the number of vertices in the given tree network. © 2008 Wiley Periodicals, Inc. NETWORKS, 2009 [source]

    Minimum multiple message broadcast graphs

    Hovhannes A. Harutyunyan
Abstract Multiple message broadcasting is the process of multiple message dissemination in a communication network, in which m messages, originated by one vertex, are transmitted to all vertices of the network. A graph G with n vertices is called an m-message broadcast graph if its broadcast time is the theoretical minimum. Bm(n) is the minimum number of edges in any m-message broadcast graph on n vertices. An m-message minimum broadcast graph is an m-message broadcast graph on n vertices having Bm(n) edges. This article presents several lower and upper bounds on Bm(n). In particular, it is shown that modified Knödel graphs are m-message broadcast graphs for m ≤ min(⌊log n⌋, n − 2^⌊log n⌋). From the Cartesian product of some broadcast graphs, we obtain better upper bounds on Bm(n), and in some cases we can prove that Bm(n) = O(n). The exact value of B2(2^k) is also established. © 2006 Wiley Periodicals, Inc. NETWORKS, Vol. 47(4), 218–224 2006 [source]

    The bi-criteria doubly weighted center-median path problem on a tree

    J. Puerto
Abstract Given a tree network T with n nodes, let PL be the subset of all discrete paths whose length is bounded above by a prespecified value L. We consider the location of a path-shaped facility P ∈ PL, where customers are represented by the nodes of the tree. We use a bi-criteria model to represent the total transportation cost of the customers to the facility. Each node is associated with a pair of nonnegative weights: the center-weight and the median-weight. In this doubly weighted model, a path P is assigned a pair of values (MAX(P), SUM(P)), which are, respectively, the maximum center-weighted distance and the sum of the median-weighted distances from P to the nodes of the tree. Viewing PL and the planar set {(MAX(P), SUM(P)) : P ∈ PL} as the decision space and the bi-criteria or outcome space, respectively, we focus on finding all the nondominated points of the bi-criteria space. We prove that there are at most 2n nondominated outcomes, even though the total number of efficient paths can be Ω(n²), and they can all be generated in O(n log n) optimal time. We apply this result to solve the cent-dian model, whose objective is a convex combination of the weighted center and weighted median functions. We also solve the restricted models, where the goal is to minimize one of the two functions MAX or SUM, subject to an upper bound on the other one, both with and without a constraint on the length of the path. All these problems are solved in linear time once the set of nondominated outcomes has been obtained, which, in turn, results in an overall complexity of O(n log n). The latter bounds improve upon the best known results by a factor of O(log n). © 2006 Wiley Periodicals, Inc. NETWORKS, Vol. 47(4), 237–247 2006 [source]
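
To make "nondominated outcome" concrete: given a collection of (MAX, SUM) pairs, the Pareto-minimal ones can be extracted by the standard sort-and-sweep for bi-criteria minimization. This generic sketch is not the paper's tree algorithm, whose whole point is to generate the at most 2n nondominated outcomes without enumerating the possibly quadratic set of efficient paths:

```python
def nondominated(outcomes):
    """Return the nondominated (Pareto-minimal) points of a list of
    (MAX, SUM) pairs. A point dominates another if it is <= in both
    coordinates and < in at least one. Sorting by MAX and sweeping
    with the running minimum of SUM takes O(k log k) for k outcomes."""
    frontier = []
    best_sum = float("inf")
    for mx, sm in sorted(set(outcomes)):
        if sm < best_sum:           # strictly better SUM than everything with smaller MAX
            frontier.append((mx, sm))
            best_sum = sm
    return frontier
```

Once the frontier is in hand, objectives such as the cent-dian (a convex combination of MAX and SUM) or the constrained variants reduce to a linear scan over the frontier, which matches the abstract's claim that these problems become linear-time after the nondominated set is computed.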

On d-threshold graphs and d-dimensional bin packing

    Alberto Caprara
Abstract We illustrate efficient algorithms to find a maximum stable set and a maximum matching in a graph with n nodes given by the edge union of d threshold graphs on the same node set, in case the d graphs in the union are known. Actually, because the edge set of a threshold graph can be implicitly represented by assigning values to the nodes, we assume that we know these values for each of the d graphs in the union. We present an O(n log n + n^(d−1)) time algorithm to find a maximum stable set and an O(n²) time algorithm to find a maximum matching, in case d is constant. For the case d = 2, the running time of the latter is reduced to O(n log n), provided an additional technical condition is satisfied. The natural application of our results is the fast computation of lower bounds for the d-dimensional bin packing problem, for which the compatibility relations between items are represented by the edge union of d threshold graphs with one node for each item, the value of the node for the i-th graph being equal to the size of the item in the i-th dimension. © 2004 Wiley Periodicals, Inc. NETWORKS, Vol. 44(4), 266–280 2004 [source]

    Efficient algorithms for centers and medians in interval and circular-arc graphs

    Sergei Bespamyatnikh
Abstract The p-center problem is to locate p facilities on a network so as to minimize the largest distance from a demand point to its nearest facility. The p-median problem is to locate p facilities on a network so as to minimize the average distance from a demand point to its closest facility. We consider these problems when the network can be modeled by an interval or circular-arc graph whose edges have unit lengths. Given the interval model of an n-vertex interval graph, we provide an O(n) time algorithm for the 1-median problem on the interval graph. We also show how to solve the p-median problem, for arbitrary p, on an interval graph in O(pn log n) time and on a circular-arc graph in O(pn² log n) time. We introduce a spring representation of the objective function and show how to solve the p-center problem on a circular-arc graph in O(pn) time, assuming that the arc endpoints are sorted. © 2002 Wiley Periodicals, Inc. [source]

    Robust location problems with pos/neg weights on a tree

    Rainer E. Burkard
Abstract In this paper, we consider different aspects of robust 1-median problems on a tree network with uncertain or dynamically changing edge lengths and vertex weights, which can also take negative values. The dynamic nature of a parameter is modeled by a linear function of time. A linear algorithm is designed for the absolute dynamic robust 1-median problem on a tree. The dynamic robust deviation 1-median problem on a tree with n vertices is solved in O(n² α(n) log n) time, where α(n) is the inverse Ackermann function. Examples show that both problems do not possess the vertex optimality property. The uncertainty is modeled by given intervals, in which each parameter can take a value randomly. The absolute robust 1-median problem with interval data, where vertex weights might also be negative, can be solved in linear time. The corresponding deviation problem can be solved in O(n²) time. © 2001 John Wiley & Sons, Inc. [source]

Multigraph augmentation under biconnectivity and general edge-connectivity requirements

    Toshimasa Ishii
Abstract Given an undirected multigraph G = (V, E) and a requirement function r : (V choose 2) → Z+ (where (V choose 2) is the set of all pairs of vertices and Z+ is the set of nonnegative integers), we consider the problem of augmenting G by the smallest number of new edges so that the local edge-connectivity and vertex-connectivity between every pair x, y ∈ V become at least r(x, y) and two, respectively. In this paper, we show that the problem can be solved in O(n³(m + n) log(n²/(m + n))) time, where n and m are the numbers of vertices and pairs of adjacent vertices in G, respectively. This time complexity can be improved to O((nm + n² log n) log n) in the case of the uniform requirement r(x, y) = λ for all x, y ∈ V. Furthermore, for general r, we show that the augmentation problem that preserves the simplicity of the resulting graph can be solved in polynomial time for any fixed r* = max{r(x, y) | x, y ∈ V}. © 2001 John Wiley & Sons, Inc. [source]

    Rapid mixing of Gibbs sampling on graphs that are sparse on average

    Elchanan Mossel
Abstract Gibbs sampling, also known as Glauber dynamics, is a popular technique for sampling high-dimensional distributions defined on graphs. Of special interest is the behavior of Gibbs sampling on the Erdős–Rényi random graph G(n, d/n), where each edge is chosen independently with probability d/n and d is fixed. While the average degree in G(n, d/n) is d(1 − o(1)), it contains many nodes of degree of order log n/log log n. The existence of nodes of almost logarithmic degree implies that for many natural distributions defined on G(n, p), such as uniform coloring (with a constant number of colors) or the Ising model at any fixed inverse temperature β, the mixing time of Gibbs sampling is at least n^(1+Ω(1/log log n)). Recall that the Ising model with inverse temperature β defined on a graph G = (V, E) is the distribution over {±1}V given by P(σ) ∝ exp(β Σ(u,v)∈E σu σv). High-degree nodes pose a technical challenge in proving polynomial-time mixing of the dynamics for many models, including the Ising model and coloring. Almost all known sufficient conditions, in terms of β or the number of colors needed for rapid mixing of Gibbs samplers, are stated in terms of the maximum degree of the underlying graph. In this work, we show that for every d < ∞ and the Ising model defined on G(n, d/n), there exists a βd > 0 such that for all β < βd, with probability going to 1 as n → ∞, the mixing time of the dynamics on G(n, d/n) is polynomial in n. Our results are the first polynomial-time mixing results proven for a natural model on G(n, d/n) for d > 1 where the parameters of the model do not depend on n. They also provide a rare example where one can prove polynomial-time mixing of a Gibbs sampler in a situation where the actual mixing time is slower than n·polylog(n). Our proof exploits in novel ways the locally tree-like structure of Erdős–Rényi random graphs, comparison and block dynamics arguments, and a recent result of Weitz.
Our results extend to much more general families of graphs which are sparse in some average sense, and to much more general interactions. In particular, they apply to any graph for which every vertex v has a neighborhood N(v) of radius O(log n) in which the induced subgraph is a tree union at most O(log n) edges, and where for each simple path in N(v) the sum of the vertex degrees along the path is O(log n). Moreover, our results apply also in the case of arbitrary external fields, and provide the first FPRAS for sampling the Ising distribution in this case. We finally present a non-Markov-chain algorithm for sampling the distribution, which is effective for a wider range of parameters. In particular, for G(n, d/n) it applies for all external fields and β < βd, where d tanh(βd) = 1 is the critical point for decay of correlation for the Ising model on G(n, d/n). © 2009 Wiley Periodicals, Inc. Random Struct. Alg., 2009 [source]
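
The dynamics studied above resamples one spin at a time from its conditional distribution given its neighbors. A minimal heat-bath (Glauber) sketch for the Ising measure P(σ) ∝ exp(β Σ σu σv), on an arbitrary adjacency list (zero external field; function names are illustrative):

```python
import math
import random

def glauber_step(spins, adj, beta, rng):
    """One heat-bath (Glauber) update for the zero-field Ising measure
    P(sigma) ∝ exp(beta * sum over edges of sigma_u * sigma_v):
    pick a uniform vertex and resample its spin from the conditional
    distribution given its neighbors."""
    v = rng.randrange(len(spins))
    field = sum(spins[u] for u in adj[v])            # local field at v
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
    spins[v] = 1 if rng.random() < p_plus else -1

def sample_ising(adj, beta, sweeps, seed=0):
    """Run `sweeps` passes of n single-site updates from a random start."""
    rng = random.Random(seed)
    n = len(adj)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    for _ in range(sweeps * n):
        glauber_step(spins, adj, beta, rng)
    return spins
```

The conditional probability follows from P(σv = +1 | rest) = e^(β·field) / (e^(β·field) + e^(−β·field)); the mixing-time results above concern how many such updates are needed before the chain's law is close to P.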

    More efficient queries in PCPs for NP and improved approximation hardness of maximum CSP

    Lars Engebretsen
Abstract Samorodnitsky and Trevisan [STOC 2000, pp. 191–199] proved that there exists, for every positive integer k, a PCP for NP with O(log n) randomness, query complexity 2k + k², free bit complexity 2k, completeness 1 − ε, and soundness 2^(−k²) + ε. In this article, we devise a new "outer verifier," based on the layered label cover problem recently introduced by Dinur et al. [STOC 2003, pp. 595–601], and combine it with a new "inner verifier" that uses the query bits more efficiently than earlier verifiers. Our resulting theorem is that there exists, for every integer f ≥ 2, every positive integer t ≤ f(f − 1)/2, and every constant ε > 0, a PCP for NP with O(log n) randomness, query complexity f + t, free bit complexity f, completeness 1 − ε, and soundness 2^(−t) + ε. As a corollary, there exists, for every integer q ≥ 3 and every constant ε > 0, a q-query PCP for NP with amortized query complexity 1 + √(2/q) + ε. This improves upon the result of Samorodnitsky and Trevisan with respect to query efficiency, i.e., the relation between soundness and the number of queries. Although the improvement may seem moderate (the construction of Samorodnitsky and Trevisan has amortized query complexity 1 + 2/√q), we also show in this article that combining our outer verifier with any natural candidate for a corresponding inner verifier gives a PCP that is less query efficient than the one we obtain. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2008 [source]

    Hitting time of large subsets of the hypercube

Abstract We study the simple random walk on the n-dimensional hypercube, in particular its hitting times of large (possibly random) sets. We give simple conditions on these sets ensuring that the properly rescaled hitting time is asymptotically exponentially distributed, uniformly in the starting position of the walk. These conditions are then verified for percolation clouds with densities that are much smaller than (n log n)⁻¹. A main motivation behind this article is the study of the so-called aging phenomenon in the Random Energy Model (REM), the simplest model of a mean-field spin glass. Our results allow us to prove aging in the REM for all temperatures, thereby extending earlier results to their optimal temperature domain. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2008 [source]

    Phase transition of the minimum degree random multigraph process

    Mihyun Kang
Abstract We study the phase transition of the minimum degree multigraph process. We prove that for a constant hg ≈ 0.8607, with probability tending to 1 as n → ∞, the graph consists of small components on O(log n) vertices when the number of edges of the graph generated so far is smaller than hg·n, the largest component has order roughly n^(2/3) when the number of edges added is exactly hg·n, and the graph consists of one giant component on Θ(n) vertices and small components on O(log n) vertices when the number of edges added is larger than hg·n. © 2007 Wiley Periodicals, Inc. Random Struct. Alg., 2007 [source]

    Mixing in time and space for lattice spin systems: A combinatorial view

    Martin Dyer
The paper considers spin systems on the d-dimensional integer lattice ℤd with nearest-neighbor interactions. A sharp equivalence is proved between decay with distance of spin correlations (a spatial property of the equilibrium state) and rapid mixing of the Glauber dynamics (a temporal property of a Markov chain Monte Carlo algorithm). Specifically, we show that if the mixing time of the Glauber dynamics is O(n log n), then spin correlations decay exponentially fast with distance. We also prove the converse implication for monotone systems, and for general systems we prove that exponential decay of correlations implies O(n log n) mixing time of a dynamics that updates sufficiently large blocks (rather than single sites). While the above equivalence was already known to hold in various forms, we give proofs that are purely combinatorial and avoid the functional-analysis machinery employed in previous proofs. © 2004 Wiley Periodicals, Inc. Random Struct. Alg., 2004 [source]

    Notes on the variance of the number of maxima in three dimensions

    Anna Carlsund
Abstract We study the number of maxima on a d-dimensional cube. We give an exact expression for the variance in three dimensions, which is easy to express as a polynomial in log n and therefore can be compared with approximations. © 2003 Wiley Periodicals, Inc. Random Struct. Alg., 22: 440–447, 2003 [source]