Multiple Versions


Selected Abstracts


A Polylinker Approach to Reductive Loop Swaps in Modular Polyketide Synthases

CHEMBIOCHEM, Issue 16 2008
Dr. Laurenz Kellenberger
Abstract: Multiple versions of the DEBS 1-TE gene, which encodes a truncated bimodular polyketide synthase (PKS) derived from the erythromycin-producing PKS, were created by replacing the DNA encoding the ketoreductase (KR) domain in the second extension module by either of two synthetic oligonucleotide linkers. This made available a total of nine unique restriction sites for engineering. The DNA for donor "reductive loops", which are sets of contiguous domains comprising either KR alone, or KR and dehydratase (DH), or KR, DH and enoylreductase (ER) domains, was cloned from selected modules of five natural PKS multienzymes and spliced into module 2 of DEBS 1-TE using alternative polylinker sites. The resulting hybrid PKSs were tested for triketide production in vivo. Most of the hybrid multienzymes were active, vindicating the treatment of the reductive loop as a single structural unit, but yields were dependent on the restriction sites used. Further, different donor reductive loops worked optimally with different splice sites. For those reductive loops comprising DH, ER and KR domains, premature TE-catalysed release of partially reduced intermediates was sometimes seen, which provided further insight into the overall stereochemistry of reduction in those modules. Analysis of loops containing KR only, which should generate stereocentres at both C-2 and C-3, revealed that the 3-hydroxy configuration (but not the 2-methyl configuration) could be altered by appropriate choice of a donor loop. The successful swapping of reductive loops provides an interesting parallel to a recently suggested pathway for the natural evolution of modular PKSs by recombination. [source]
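
The central claim, that the reductive loop behaves as a single swappable structural unit, can be pictured with a small data-structure sketch. This is a purely illustrative abstraction, not a protocol: the module layout, the function name, and the printed result are assumptions made for the example, using the domain names from the abstract.

    # Illustrative sketch: a PKS extension module as an ordered list of domains,
    # with the contiguous run of reductive domains (KR, or DH+KR, or DH+ER+KR)
    # swapped out as one unit. The layout and names are hypothetical.

    DEBS1_TE_MODULE2 = ["KS", "AT", "KR", "ACP"]  # module 2 of the truncated PKS

    REDUCTIVE_DOMAINS = {"KR", "DH", "ER"}

    def swap_reductive_loop(module, donor_loop):
        """Replace the contiguous run of reductive domains with a donor loop."""
        start = next(i for i, d in enumerate(module) if d in REDUCTIVE_DOMAINS)
        end = start
        while end < len(module) and module[end] in REDUCTIVE_DOMAINS:
            end += 1
        return module[:start] + list(donor_loop) + module[end:]

    # e.g. splicing in a fully reducing donor loop:
    hybrid = swap_reductive_loop(DEBS1_TE_MODULE2, ["DH", "ER", "KR"])
    print(hybrid)  # ['KS', 'AT', 'DH', 'ER', 'KR', 'ACP']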


On holographic transform compression of images

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 5 2000
Alfred M. Bruckstein
Abstract: Lossy transform compression of images is successful and widespread. The JPEG standard uses the discrete cosine transform on blocks of the image and a bit allocation process that takes advantage of the uneven energy distribution in the transform domain. For most images, 10:1 compression ratios can be achieved with no visible degradation. However, suppose that multiple versions of the compressed image exist in a distributed environment such as the Internet, and several of them could be made available upon request. The classical approach would provide no improvement in image quality if more than one version of the compressed image became available. In this paper, we propose a method, based on multiple description scalar quantization, that yields decompressed image quality that improves with the number of compressed versions available. © 2001 John Wiley & Sons, Inc. Int J Imaging Syst Technol, 11, 292–314, 2000 [source]
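
The key property, that decoded quality improves with the number of descriptions received, can be sketched with two staggered scalar quantizers. This is a simplified stand-in for the paper's multiple description scalar quantization, not its actual design; the function names and the averaging decoder are illustrative assumptions.

    import numpy as np

    def encode_two_descriptions(x, step):
        # Two uniform quantizers offset by half a step; each description alone
        # yields a coarse reconstruction, both together halve the effective step.
        d1 = np.round(x / step)
        d2 = np.round(x / step - 0.5)
        return d1, d2

    def decode(d1=None, d2=None, step=1.0):
        if d1 is not None and d2 is not None:
            return (d1 * step + (d2 + 0.5) * step) / 2.0  # finer joint estimate
        if d1 is not None:
            return d1 * step
        return (d2 + 0.5) * step

    x = np.array([0.3, 1.7, -2.2])
    d1, d2 = encode_two_descriptions(x, step=1.0)
    print(decode(d1=d1, step=1.0))         # coarse: one version available
    print(decode(d1=d1, d2=d2, step=1.0))  # finer: both versions available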


Transcoding media for bandwidth constrained mobile devices

INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 2 2005
Kevin Curran
Bandwidth is an important consideration when dealing with streaming media. More bandwidth is required for complex data such as video than for a simple audio file. When delivering streaming media, sufficient bandwidth is required to achieve an acceptable level of performance. If the information streamed exceeds the bandwidth capacity of the client, the result will be 'choppy' and incomplete, with possible loss of transmission. Transcoding typically refers to the adaptation of streaming content. Typical transcoding scenarios exploit content negotiation to choose between different formats and obtain the best combination of requested quality and available resources. It is possible to transcode media to a lesser quality or size upon encountering adverse bandwidth conditions, without the need to encode multiple versions of the same file at differing quality levels. This study investigates the capability of transcoding for coping with restrictions in client devices. In addition, the properties of transcoded media files are examined and evaluated to determine their applicability for streaming in relation to a range of broad device types capable of receiving streaming media. Copyright © 2005 John Wiley & Sons, Ltd. [source]
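
A minimal sketch of the adaptation step, assuming a server-side transcoder such as ffmpeg is available: given a measurement of the client's bandwidth, pick a rung from a bitrate ladder and transcode on demand rather than pre-encoding multiple versions. The ladder values and the helper name are illustrative assumptions, not the study's configuration.

    import subprocess

    # Illustrative ladder: (min client kbps, video bitrate, audio bitrate, frame size)
    LADDER = [
        (1000, "800k", "128k", "640x480"),
        (400,  "300k", "64k",  "320x240"),
        (0,    "100k", "32k",  "176x144"),
    ]

    def transcode_for_client(src, dst, client_kbps):
        """Pick the highest rung the client's bandwidth can sustain and transcode."""
        for min_kbps, vb, ab, size in LADDER:
            if client_kbps >= min_kbps:
                cmd = ["ffmpeg", "-i", src, "-b:v", vb, "-b:a", ab, "-s", size, dst]
                subprocess.run(cmd, check=True)
                return vb, size

    # e.g. a client reporting ~500 kbps gets the 300k/320x240 variant:
    # transcode_for_client("talk.mp4", "talk_low.mp4", client_kbps=500)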


A SIBTEST Approach to Testing DIF Hypotheses Using Experimentally Designed Test Items

JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 4 2000
Daniel M. Bolt
This paper considers a modification of the DIF procedure SIBTEST for investigating the causes of differential item functioning (DIF). One way in which factors believed to be responsible for DIF can be investigated is by systematically manipulating them across multiple versions of an item using a randomized DIF study (Schmitt, Holland, & Dorans, 1993). In this paper, it is shown that the additivity of the index used for testing DIF in SIBTEST motivates a new extension of the method for statistically testing the effects of DIF factors. Because an important consideration is whether or not a studied DIF factor is consistent in its effects across items, a methodology for testing item × factor interactions is also presented. Using data from the mathematical sections of the Scholastic Assessment Test (SAT), the effects of two potential DIF factors, item format (multiple-choice versus open-ended) and problem type (abstract versus concrete), are investigated for gender DIF. Results suggest a small but statistically significant and consistent effect of item format (favoring males for multiple-choice items) across items, and a larger but less consistent effect due to problem type. [source]
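
A hedged numerical sketch of how additivity permits a factor test: given the SIBTEST DIF index (beta-hat) and its standard error from separate runs on item versions with and without the factor, the factor effect is a difference of means whose variance follows from independence of the runs. The function and input format below are assumptions for illustration, not the paper's implementation.

    import numpy as np

    def factor_effect(betas_with, ses_with, betas_without, ses_without):
        """Contrast mean DIF between item versions with and without the factor.

        Exploits additivity of the SIBTEST beta index: the effect estimate is a
        difference of means, with variance the sum of scaled squared SEs.
        """
        diff = np.mean(betas_with) - np.mean(betas_without)
        var = (np.sum(np.square(ses_with)) / len(ses_with) ** 2
               + np.sum(np.square(ses_without)) / len(ses_without) ** 2)
        z = diff / np.sqrt(var)  # approx. standard normal under no factor effect
        return diff, z

    # e.g. three multiple-choice versions vs. three open-ended versions
    # (all numbers hypothetical):
    print(factor_effect([0.06, 0.05, 0.07], [0.02, 0.02, 0.02],
                        [0.01, 0.02, 0.00], [0.02, 0.02, 0.02]))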


A case study in repeated maintenance

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 2 2001
Shuanglin Wang
Abstract: RTP is a widely used commercial real-time product that has been maintained over a period of 13 years. We have analyzed multiple versions of RTP, which is written in C and Assembler. We measured increases in dependencies within the code between successive versions and performed statistical analyses on the data. There was no significant difference between the maintenance of Assembler files and C files. Also, there was no significant difference between the versions written by the original developers and those written by maintenance programmers not involved in the original development. The differences between individual programmers were very highly significant. Our interpretation of these results is that the skill of the individual programmer is an important factor in ensuring that a software product remains maintainable over its lifetime and that software engineering education and training are therefore of major importance. Copyright © 2001 John Wiley & Sons, Ltd. [source]
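
To make the style of analysis concrete, here is a minimal sketch of the kind of two-sample comparison such a study might run on per-file dependency growth between successive versions. The data values and the choice of Welch's t-test are assumptions for illustration; the paper's actual measures and tests are not reproduced here.

    from scipy import stats

    # Hypothetical per-file increases in dependency counts between versions,
    # grouped by implementation language (values are made up).
    asm_deltas = [3, 5, 2, 4, 6, 3]
    c_deltas   = [4, 4, 3, 5, 5, 2]

    t, p = stats.ttest_ind(asm_deltas, c_deltas, equal_var=False)
    print(f"t = {t:.2f}, p = {p:.3f}")  # large p => no significant difference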


Methods for identifying versioned and plagiarized documents

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 3 2003
Timothy C. Hoad
The widespread use of on-line publishing of text promotes storage of multiple versions of documents and mirroring of documents in multiple locations, and greatly simplifies the task of plagiarizing the work of others. We evaluate two families of methods for searching a collection to find documents that are coderivative, that is, versions or plagiarisms of each other. The first, the ranking family, uses information retrieval techniques; extending this family, we propose the identity measure, which is specifically designed for identification of coderivative documents. The second, the fingerprinting family, uses hashing to generate a compact document description, which can then be compared to the fingerprints of the documents in the collection. We introduce a new method for evaluating the effectiveness of these techniques and demonstrate it in practice. Using experiments on two collections, we demonstrate that the identity measure and the best fingerprinting technique are both able to accurately identify coderivative documents. However, fingerprinting parameters must be carefully chosen, and even then the identity measure is clearly superior. [source]
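
A minimal sketch of the fingerprinting family: hash all k-word shingles of a document, keep a selected subset as the compact fingerprint, and compare fingerprints by set overlap. The shingle size, the MD5 hash, and the "0 mod p" selection heuristic are illustrative parameter choices, not the paper's specific technique.

    import hashlib

    def fingerprint(text, k=8, modulus=4):
        """Hash all k-word shingles; keep hashes divisible by `modulus` as a
        compact document fingerprint (a simple selection heuristic)."""
        words = text.lower().split()
        hashes = set()
        for i in range(len(words) - k + 1):
            shingle = " ".join(words[i:i + k])
            h = int(hashlib.md5(shingle.encode()).hexdigest(), 16)
            if h % modulus == 0:
                hashes.add(h)
        return hashes

    def resemblance(fp_a, fp_b):
        """Jaccard overlap of fingerprints; high values suggest coderivation."""
        return len(fp_a & fp_b) / max(1, len(fp_a | fp_b))

As the abstract notes, results are sensitive to such parameters: too large a modulus or shingle size misses overlap between genuinely coderivative documents, too small a value inflates the fingerprint and matches unrelated text.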


Multiple-Imputation-Based Residuals and Diagnostic Plots for Joint Models of Longitudinal and Survival Outcomes

BIOMETRICS, Issue 1 2010
Dimitris Rizopoulos
Summary: The majority of the statistical literature for the joint modeling of longitudinal and time-to-event data has focused on the development of models that aim at capturing specific aspects of the motivating case studies. However, little attention has been given to the development of diagnostic and model-assessment tools. The main difficulty in using standard model diagnostics in joint models is the nonrandom dropout in the longitudinal outcome caused by the occurrence of events. In particular, the reference distribution of statistics, such as the residuals, in missing data settings is not directly available and complex calculations are required to derive it. In this article, we propose a multiple-imputation-based approach for creating multiple versions of the completed data set under the assumed joint model. Residuals and diagnostic plots for the complete data model can then be calculated based on these imputed data sets. Our proposals are exemplified using two real data sets. [source]
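
The imputation loop can be sketched generically: draw M completions of the longitudinal outcome under the assumed joint model, compute ordinary complete-data residuals for each copy, and pool or plot them across imputations. The two callables below stand in for model-specific machinery and are assumptions of this sketch, not the article's code.

    import numpy as np

    def mi_residuals(y_obs, observed_mask, draw_completion, fit_and_predict, M=10):
        """Multiple-imputation residuals for a joint model.

        draw_completion(y_obs, observed_mask) -> one completed outcome vector,
            drawn under the assumed joint model (hypothetical interface)
        fit_and_predict(y_full) -> fitted values for the completed vector
        """
        residual_sets = []
        for _ in range(M):
            y_full = draw_completion(y_obs, observed_mask)
            residual_sets.append(y_full - fit_and_predict(y_full))
        return np.stack(residual_sets)  # M x n; pool or plot across imputations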