Block Size (block + size)
Selected Abstracts

Anatomy of a Pennine peat slide, Northern England
EARTH SURFACE PROCESSES AND LANDFORMS, Issue 5 2003
Dr. Jeff Warburton

Abstract This paper describes and analyses the structure and deposits of a large UK peat slide, located at Hart Hope in the North Pennines, northern England. This particular failure is unusual in that it occurred in winter (February 1995) and shows excellent preservation of the sedimentary structures and morphology, both at the failure scar and downstream. The slide was triggered by heavy rain and rapid snowmelt along the line of an active peatland stream flush. Detailed mapping of the slide area and downstream deposits demonstrates that the slide was initiated as a blocky mass that degenerated into a debris flow. The slide pattern was complex, with areas of extending and compressive movement. A wave-like motion may have been set up in the failure. Within the slide site there was relatively little variability in block size (b axis); however, downstream the block sizes decrease rapidly. Stability analysis suggests the area at the head of the scar is most susceptible to failure. A 'secondary' slide area is thought to have been initiated only once the main failure had occurred. Estimates of the velocity of the flowing peat mass as it entered the main stream channel indicate a flow velocity of approximately 10 m s−1, which rapidly decreases downstream. A sediment budget for the peat slide estimates the failed peat mass to be 30 800 t. However, sediment delivery to the stream channel was relatively low. About 37% of the failed mass entered the stream channel and, despite moving initially as a debris flow, the amount of deposition along the stream course and on the downstream fan is small (only about 1%). The efficiency of fluvial systems in transporting the eroded peat is therefore high. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Considerations of the discontinuous deformation analysis on wave propagation problems
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 12 2009
Jiong Gu

Abstract In rock engineering, the damage criteria of a rock mass under dynamic loads are generally governed by threshold values of wave amplitudes, such as the peak particle velocity and the peak particle acceleration. Predicting wave attenuation across a fractured rock mass is therefore important for assessing the stability and damage of rock masses under dynamic loads. This paper investigates the application of the discontinuous deformation analysis (DDA) to modeling wave propagation problems in rock masses. Parametric studies are carried out to gain insight into the factors influencing the accuracy of wave propagation, in terms of the block size, the boundary condition and the incident wave frequency. The reflected and transmitted waves from the interface between two materials are also numerically simulated. To study the tensile failure induced by the reflected wave, spalling phenomena are modeled under various loading frequencies. The numerical results show that the DDA is capable of modeling wave propagation in jointed rock mass with good accuracy. Copyright © 2009 John Wiley & Sons, Ltd. [source]
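A useful sanity check for the two-material simulations described above is the closed-form one-dimensional result: for a plane wave normally incident on a welded interface, the reflection and transmission coefficients depend only on the acoustic impedances Z = ρc of the two media. A minimal sketch of that benchmark follows; the material properties are illustrative assumptions, not values from the paper.

```python
# Reflection/transmission of a normally incident plane wave at a welded
# interface between two elastic media (1-D benchmark for DDA interface runs).
# Coefficients are for particle-velocity amplitude; Z = rho * c.

def interface_coefficients(rho1, c1, rho2, c2):
    z1, z2 = rho1 * c1, rho2 * c2
    r = (z1 - z2) / (z1 + z2)   # reflected / incident amplitude
    t = 2.0 * z1 / (z1 + z2)    # transmitted / incident amplitude
    return r, t

# Hypothetical properties: stiff rock (medium 1) into softer rock (medium 2).
r, t = interface_coefficients(rho1=2700.0, c1=5000.0, rho2=2200.0, c2=2500.0)
print(f"R = {r:+.3f}, T = {t:+.3f}")
# Energy-flux identity r**2 + (z2/z1) * t**2 == 1 holds for any impedance pair,
# which gives a quick check on any simulated amplitude pair.
```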
A development of the discontinuous deformation analysis for rock fall analysis
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 10 2005
Jian-Hong Wu

Abstract Discontinuous deformation analysis (DDA), a discrete numerical analysis method, is used to simulate the behaviour of falling rock by applying a linear displacement function in the computations. However, when a block rotates, this linear function causes a change in block size called the free expansion phenomenon. This free expansion in turn causes contact identification problems when rotating blocks are close to each other. To solve this problem of misjudgment and to obtain a more precise simulation of the falling rock, a new method called the Post-Contact Adjustment Method has been developed and applied to the program. The basic procedure of this new method can be divided into three stages: using the linear displacement function to generate the global matrix, introducing the non-linear displacement function to the contact identification, and applying it to update the co-ordinates of block vertices. This new method can be easily applied to the original DDA program, demonstrating better contact identification and size conservation results for falling rock problems than the original program. Copyright © 2005 John Wiley & Sons, Ltd. [source]
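The free-expansion artefact is easy to reproduce outside DDA. The linear displacement function approximates a rotation through angle θ by the map (x, y) → (x − θy, y + θx) about the block centroid, which stretches every vertex vector by √(1 + θ²), so the block's area grows by the factor (1 + θ²) at every time step, whereas an exact rotation conserves size. A small sketch of the effect (illustrative only, not the paper's Post-Contact Adjustment Method):

```python
import math

def polygon_area(pts):
    """Shoelace formula for a simple polygon."""
    return 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1])))

def rotate_linear(pts, theta, cx=0.0, cy=0.0):
    """First-order (DDA-style) rotation: sin(theta) ~ theta, cos(theta) ~ 1."""
    return [(cx + (x - cx) - theta * (y - cy),
             cy + theta * (x - cx) + (y - cy)) for x, y in pts]

def rotate_exact(pts, theta, cx=0.0, cy=0.0):
    c, s = math.cos(theta), math.sin(theta)
    return [(cx + c * (x - cx) - s * (y - cy),
             cy + s * (x - cx) + c * (y - cy)) for x, y in pts]

square = [(-1, -1), (1, -1), (1, 1), (-1, 1)]     # block of area 4
theta = 0.1                                        # rotation per time step (rad)
print(polygon_area(rotate_exact(square, theta)))   # 4.0  (size conserved)
print(polygon_area(rotate_linear(square, theta)))  # 4.04 (free expansion, 1 + theta**2)
```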
Determination of block size in poly(ethylene oxide)-b-polystyrene block copolymers by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry
JOURNAL OF POLYMER SCIENCE (IN TWO SECTIONS), Issue 13 2009
Marion Girod

Abstract Characterization of block size in poly(ethylene oxide)-b-poly(styrene) (PEO-b-PS) block copolymers could be achieved by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) after scission of the macromolecules into their constituent blocks. The hydrolytic cleavage was demonstrated to occur specifically at the targeted ester function in the junction group, yielding two homopolymers corresponding to the constitutive initial blocks. This approach allows well-established MALDI protocols to be used for a complete copolymer characterization while circumventing the difficulties inherent in ionizing amphiphilic macromolecules. Although the labile end-group in the PS homopolymer was modified by the MALDI process, the PS block size could be determined from the MS data, since the polymer chains were shown to remain intact during ionization. This methodology was validated for a PEO-b-PS sample series, with two PEO blocks of number-average molecular weight (Mn) 2000 and 5000 g mol−1 and Mn(PS) ranging from 4000 to 21 000 g mol−1. The weight-average molecular weight (Mw), and thus the polydispersity index, could also be obtained for each segment, and the values were consistent with those from size exclusion chromatography. The approach is particularly valuable for amphiphilic copolymers, for which Mn values determined by liquid-state nuclear magnetic resonance may be affected by micelle formation. © 2009 Wiley Periodicals, Inc. J Polym Sci Part A: Polym Chem 47: 3380–3390, 2009 [source]

Block merging for off-line compression
JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 1 2007
Raymond Wan

To bound memory consumption, most compression systems provide a facility that controls the amount of data that may be processed at once, usually as a block size, but sometimes as a direct megabyte limit. In this work we consider the Re-Pair mechanism of Larsson and Moffat (2000), which processes large messages as disjoint blocks to limit memory consumption. We show that the blocks emitted by Re-Pair can be postprocessed to yield further savings, and describe techniques that allow files of 500 MB or more to be compressed in a holistic manner using less than that much main memory. The block merging process we describe has the additional advantage of allowing new text to be appended to the end of the compressed file. [source]
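Re-Pair itself is compact enough to sketch: it repeatedly replaces the most frequent adjacent pair of symbols with a fresh symbol, recording each pairing as a grammar rule, until no pair occurs twice; decompression expands the rules recursively. The version below is a naive quadratic-time illustration and omits the constant-time pair-tracking structures of Larsson and Moffat:

```python
from collections import Counter

def repair(seq):
    """Naive Re-Pair: repeatedly replace the most frequent adjacent pair
    with a new symbol. Returns (compressed sequence, grammar rules)."""
    seq, rules, next_sym = list(seq), {}, 256   # new symbols start above byte values
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:                            # no pair worth replacing
            break
        rules[next_sym] = pair
        out, i = [], 0
        while i < len(seq):                     # left-to-right, non-overlapping
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(next_sym)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq, next_sym = out, next_sym + 1
    return seq, rules

def expand(sym, rules):
    """Recursively expand a symbol back to its original bytes."""
    if sym not in rules:
        return [sym]
    a, b = rules[sym]
    return expand(a, rules) + expand(b, rules)

data = list(b"abracadabra abracadabra")
comp, rules = repair(data)
assert [x for s in comp for x in expand(s, rules)] == data
```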
Linkage disequilibrium in the North American Holstein population
ANIMAL GENETICS, Issue 3 2009
E.-S. Kim

Summary Linkage disequilibrium was estimated using 7119 single nucleotide polymorphism markers across the genome and 200 animals from the North American Holstein cattle population. The analysis of maternally inherited haplotypes revealed strong linkage disequilibrium (r² > 0.8) in genomic regions of ~50 kb or less. While linkage disequilibrium decays as a function of genomic distance, genomic regions within genes showed greater linkage disequilibrium, and greater variation in linkage disequilibrium, compared with intergenic regions. Identification of haplotype blocks could characterize the most common haplotypes. Although the maximum haplotype block size was over 1 Mb, the mean block size was 26–113 kb by various definitions, which was larger than that observed in humans (~10 kb). The effective population size of the dairy cattle population was estimated from linkage disequilibrium between single nucleotide polymorphism marker pairs in various haplotype ranges, and a rapid reduction in effective population size in recent generations was inferred. This result implies a loss of genetic diversity because of the high rate of inbreeding and high selection intensity in dairy cattle. The pattern observed in this study indicates that linkage disequilibrium in the current dairy cattle population could be exploited to refine mapping resolution, while the changes in effective population size over past generations imply the need for plans to maintain polymorphism in the Holstein population. [source]

Quantifying the Magnitude of Baseline Covariate Imbalances Resulting from Selection Bias in Randomized Clinical Trials
BIOMETRICAL JOURNAL, Issue 2 2005
Vance W. Berger

Abstract Selection bias is most common in observational studies, when patients select their own treatments or treatments are assigned based on patient characteristics, such as disease severity. This first-order selection bias, as we call it, is eliminated by randomization, but there is residual selection bias that may occur even in randomized trials, when, subconsciously or otherwise, an investigator uses advance knowledge of upcoming treatment allocations as the basis for deciding whom to enroll. For example, patients more likely to respond may be preferentially enrolled when the active treatment is due to be allocated, and patients less likely to respond may be enrolled when the control group is due to be allocated. If the upcoming allocations can be observed in their entirety, then we call the resulting selection bias second-order selection bias. Allocation concealment minimizes the ability to observe upcoming allocations, yet upcoming allocations may still be predicted (imperfectly), or even determined with certainty, if at least some of the previous allocations are known and if restrictions (such as randomized blocks) were placed on the randomization. This mechanism, based on prediction rather than observation of upcoming allocations, is third-order selection bias. It is controlled by perfectly successful masking, but without perfect masking it is not controlled even by the combination of advance randomization and allocation concealment. Our purpose is to quantify the magnitude of baseline imbalance that can result from third-order selection bias when the randomized block procedure is used. The smaller the block size, the more accurately one can predict future treatment assignments in the same block as known previous assignments, so this magnitude depends on the block size, as well as on the level of certainty about upcoming allocations required to bias the patient selection. We find that a binary covariate can, on average, be up to 50% unbalanced by third-order selection bias. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
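The third-order mechanism is easy to simulate. Assume the strongest case: permuted blocks, an investigator who has observed every previous allocation, and the natural guessing rule that bets on whichever arm has more slots left in the current block. The sketch below steers likely responders (X = 1) toward predicted treatment slots and measures the resulting covariate imbalance; with blocks of size two it reproduces the paper's 50% figure. The enrolment rule is our own stylized model of the behaviour described, not code from the paper.

```python
import random

def simulate_trial(n_blocks=2000, block_size=4, seed=1):
    """Permuted-block randomization with a biased investigator who knows all
    previous allocations: enroll a responder (X = 1) when the next slot is
    more likely treatment, a non-responder (X = 0) when it is more likely
    control, and a random patient when the two are equally likely."""
    rng = random.Random(seed)
    x_treat, x_ctrl = [], []
    for _ in range(n_blocks):
        block = ['T'] * (block_size // 2) + ['C'] * (block_size // 2)
        rng.shuffle(block)
        for i, alloc in enumerate(block):
            remaining = block[i:]                 # composition is deducible from
            t_left = remaining.count('T')         # block size + past assignments
            c_left = len(remaining) - t_left
            if t_left > c_left:
                x = 1                             # treatment predicted
            elif c_left > t_left:
                x = 0                             # control predicted
            else:
                x = rng.randint(0, 1)             # tie: no information
            (x_treat if alloc == 'T' else x_ctrl).append(x)
    return sum(x_treat) / len(x_treat) - sum(x_ctrl) / len(x_ctrl)

print(simulate_trial(block_size=2))  # ~0.50: binary covariate 50% unbalanced
print(simulate_trial(block_size=4))  # ~0.42: larger blocks help, but only partly
```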
Achieving a near-optimum erasure correction performance with low-complexity LDPC codes
INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 5-6 2010
Gianluigi Liva

Abstract Low-density parity-check (LDPC) codes are shown to tightly approach the performance of idealized maximum distance separable (MDS) codes over memoryless erasure channels, under maximum likelihood (ML) decoding. This is possible down to low error rates, even for small and moderate block sizes. The decoding complexity of ML decoding is kept low thanks to a class of decoding algorithms that exploit the sparseness of the parity-check matrix to reduce the complexity of Gaussian elimination. ML decoding of LDPC codes is reviewed first. A performance comparison among various classes of LDPC codes is then carried out, including a comparison with fixed-rate Raptor codes for the same parameters. The results confirm that a judicious LDPC code design allows a near-optimum performance over the erasure channel, with very low error floors. Furthermore, it is shown that LDPC and Raptor codes, under ML decoding, provide almost identical performance in terms of decoding failure probability vs. overhead. Copyright © 2010 John Wiley & Sons, Ltd. [source]
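Under erasures, ML decoding reduces to linear algebra: collect the parity-check columns of the erased positions into a matrix A, form the syndrome s of the known bits, and solve A·x = s over GF(2); decoding fails exactly when those columns are linearly dependent. A compact sketch using dense Gaussian elimination follows (the algorithms the paper reviews exploit the sparseness of H to cut the cost of this step; the (7,4) Hamming example is our own):

```python
import numpy as np

def ml_erasure_decode(H, received):
    """ML erasure decoding: recover erased codeword bits by Gaussian
    elimination over GF(2). `received` holds 0/1 for known bits and None
    for erasures. Returns the full word, or None when the erased columns
    of H are rank deficient (ML decoding failure)."""
    H = np.asarray(H, dtype=np.uint8)
    erased = [j for j, b in enumerate(received) if b is None]
    known = np.array([0 if b is None else b for b in received], dtype=np.uint8)
    A = H[:, erased].copy()          # parity-check columns of the erased bits
    s = (H @ known) % 2              # syndrome contributed by the known bits
    m, n = A.shape
    row = 0
    for col in range(n):             # one pivot per erased position
        hits = np.nonzero(A[row:, col])[0]
        if hits.size == 0:
            return None              # free variable: several codewords fit
        p = row + hits[0]
        A[[row, p]], s[[row, p]] = A[[p, row]], s[[p, row]]
        for r in range(m):           # eliminate the column everywhere else
            if r != row and A[r, col]:
                A[r] ^= A[row]
                s[r] ^= s[row]
        row += 1
    out = known.copy()
    out[erased] = s[:n]              # full reduction leaves x_e in the syndrome
    return out

# Toy example: (7,4) Hamming code used as an erasure code; erase 3 bits.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
rx = [0, None, 1, None, 0, 1, None]   # valid codeword 0110011 with erasures
print(ml_erasure_decode(H, rx))       # -> [0 1 1 0 0 1 1]
```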
Semi-random LDPC codes for CDMA communication over non-linear band-limited satellite channels
INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 4 2006
Mohamed Adnan Landolsi

Abstract This paper considers the application of low-density parity-check (LDPC) error correcting codes to code division multiple access (CDMA) systems over satellite links. The adapted LDPC codes are selected from a special class of semi-random (SR) constructions characterized by low encoder complexity, and their performance is optimized by removing short cycles from the code bipartite graphs. Relative performance comparisons with turbo product codes (TPC) at rate 1/2 and short-to-moderate block sizes show some advantage for SR-LDPC, both in terms of bit error rate and complexity requirements. CDMA systems using these SR-LDPC codes and operating over non-linear, band-limited satellite links are analysed, and their performance is investigated for a number of signal models and code parameters. The numerical results show that SR-LDPC codes can offer good capacity improvements in terms of the number of users supportable at a given bit error performance. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Comprehensive 2-D chromatography of random and block methacrylate copolymers
JOURNAL OF SEPARATION SCIENCE, JSS, Issue 10 2010
Monique van Hulst

Abstract A comprehensive 2-D separation method was developed for the characterization of methacrylate copolymers. In both dimensions, conditions were employed that give a critical separation for the homopolymer of one of the monomers in the copolymer, and exclusion behaviour for the other. The 2-D separation was realized by using a normal-phase column in one dimension and a reversed-phase column in the other, and by precisely tuning the compositions of the two mobile phases employed. In the normal-phase dimension mixtures of THF and n-hexane or n-heptane were used as mobile phase, and in the reversed-phase dimension mixtures of ACN and THF. Moreover, stationary-phase particles had to be selected for both columns that gave an exclusion window appropriate for the molecular size of the sample polymers to be characterized. The 2-D critical chromatography principle was tested with a polystyrene (PS)-polymethylmethacrylate (PMMA) block copolymer and with block and random polybutylmethacrylate (PBMA)-PMMA copolymers. Ideally, the retention time for a copolymer in both dimensions of this system would depend on the size of only one of the blocks, or on the contribution of only one of the monomers to the size of a random copolymer. However, the elution of the PS-PMMA block copolymer was found to depend on the size of both blocks, even when the corresponding homopolymer of one of the monomers showed critical elution behaviour. The method therefore could not be calibrated for block sizes using homopolymer standards alone. Still, it was shown that the method can be used to determine differences between samples (PS-PMMA and PBMA-PMMA) with respect to total molecular size or block sizes separately, or to average size and chemical composition for random copolymers. Block and random PBMA-PMMA copolymers showed distinctly different patterns in the 2-D plots obtained with 2-D critical chromatography. This difference was shown to be related to the different procedures followed in the polymerization process, and to the different molecular distributions resulting from these. [source]