Minimal Set (minimal + set)

Selected Abstracts


Gene signatures of pulmonary metastases of renal cell carcinoma reflect the disease-free interval and the number of metastases per patient

INTERNATIONAL JOURNAL OF CANCER, Issue 2 2009
Daniela Wuttig
Abstract Our understanding of metastatic spread is limited and molecular mechanisms causing particular characteristics of metastasis are largely unknown. Herein, transcriptome-wide expression profiles of a unique cohort of 20 laser-resected pulmonary metastases (Mets) of 18 patients with clear-cell renal cell carcinoma (RCC) were analyzed to identify expression patterns associated with two important prognostic factors in RCC: the disease-free interval (DFI) after nephrectomy and the number of Mets per patient. Differentially expressed genes were identified by comparing early (DFI ≤9 months) and late (DFI ≥5 years) Mets, and Mets derived from patients with few (≤8) and multiple (≥16) Mets. Early and late Mets could be separated by the expression of genes involved in metastasis-associated processes, such as angiogenesis, cell migration and adhesion (e.g., PECAM1, KDR). Samples from patients with multiple Mets showed an elevated expression of genes associated with cell division and the cell cycle (e.g., PBK, BIRC5, PTTG1), which indicates that a high number of Mets might result from an increased growth potential. Minimal sets of genes for the prediction of the DFI and the number of Mets per patient were identified. Microarray results were confirmed by quantitative PCR on nine further pulmonary Mets of RCC. In summary, we showed that subgroups of Mets are distinguishable based on their expression profiles, which reflect the DFI and the number of Mets of a patient. To what extent the identified molecular factors contribute to the development of these characteristics of metastatic spread needs to be analyzed in further studies. © 2009 UICC [source]
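The abstract does not state how the minimal predictive gene sets were derived. As a rough, hedged sketch of one standard way to build such a set (rank genes by a two-sample test between the early and late groups, then keep a gene only if it improves leave-one-out accuracy), consider the following; the expression matrix, labels, classifier and selection rule are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative sketch: derive a small "minimal set" of predictive genes by
# ranking genes with a two-sample t-test (e.g. early vs. late metastases) and
# greedily adding genes while leave-one-out accuracy improves.
# The data layout, labels and classifier choice are assumptions for illustration only.
import numpy as np
from scipy import stats

def loo_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-centroid classifier."""
    n = len(y)
    correct = 0
    for i in range(n):
        mask = np.arange(n) != i
        Xtr, ytr = X[mask], y[mask]
        centroids = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += (pred == y[i])
    return correct / n

def minimal_gene_set(expr, labels, max_genes=20):
    """expr: samples x genes matrix; labels: 0/1 group membership per sample."""
    t, _ = stats.ttest_ind(expr[labels == 0], expr[labels == 1], axis=0)
    ranked = np.argsort(-np.abs(t))          # most differential genes first
    selected, best_acc = [], 0.0
    for g in ranked[:max_genes]:
        candidate = selected + [g]
        acc = loo_accuracy(expr[:, candidate], labels)
        if acc > best_acc:                   # keep a gene only if it helps
            selected, best_acc = candidate, acc
    return selected, best_acc
```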


Is my antibody-staining specific?

EUROPEAN JOURNAL OF NEUROSCIENCE, Issue 12 2008
How to deal with pitfalls of immunohistochemistry
Abstract Immunohistochemistry is a sensitive and versatile method widely used to investigate the cyto- and chemoarchitecture of the brain. It is based on the high affinity and selectivity of antibodies for a single epitope. However, it is now recognized that the specificity of antibodies needs to be tested in control experiments to avoid false-positive results due to non-specific binding to tissue components or recognition of epitopes shared by several molecules. This 'Technical Spotlight' discusses other pitfalls, which are often overlooked, although they can strongly influence the outcome of immunohistochemical experiments. It also recapitulates the minimal set of information that should be provided in scientific publications to allow proper evaluation and replication of immunohistochemical experiments. In particular, tissue fixation and processing can have a strong impact on antigenicity by producing conformational changes to the epitopes, limiting their accessibility (epitope masking) or generating high non-specific background. These effects are illustrated for an immunoperoxidase staining experiment with three antibodies differing in susceptibility to fixation, using tissue from mice processed under identical conditions, except for slight variations in tissue fixation. In these examples, specific immunostaining can be abolished depending on fixation strength, or detected only after prolonged postfixation. As a consequence, the characterization of antibodies for immunohistochemistry should include their susceptibility to fixation and the determination of optimal conditions for their use. [source]


Kinetic Study of the Asymmetric Hydrogenation of Methyl Acetoacetate in the Presence of a Ruthenium Binaphthophosphepine Complex

ADVANCED SYNTHESIS & CATALYSIS (PREVIOUSLY: JOURNAL FUER PRAKTISCHE CHEMIE), Issue 1-2 2009
Eva Öchsner
Abstract The asymmetric hydrogenation of methyl acetoacetate (MAA) in methanol using dibromobis{(S)-4-phenyl-4,5-dihydro-3H-dinaphtho[2,1-c:1′,2′-e]phosphepine}ruthenium was studied in detail. For the determination of the reaction network, data from kinetic experiments were compared to different possible reaction networks using the kinetic software Presto Kinetics. The simulation was optimised to describe the reaction accurately with a minimal set of process parameters and reaction equations. For the best model, the reaction orders, collision factors and activation energies of all reaction steps were determined. Additionally, the influence of reaction temperature and hydrogen pressure on the enantiomeric excess (ee) of the reaction was studied. It was found that high reaction temperatures and high hydrogen pressures result in increasing enantioselectivities. [source]
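As a hedged illustration of one ingredient of this kind of kinetic analysis, the sketch below extracts an activation energy and a collision (pre-exponential) factor from temperature-dependent rate constants via the linearised Arrhenius equation. The rate constants are hypothetical, and the sketch leaves out reaction orders and the full reaction-network fitting done with Presto Kinetics in the study.

```python
# Illustrative sketch: estimate the collision (pre-exponential) factor A and
# activation energy Ea from hypothetical rate constants at several temperatures
# via the linearised Arrhenius equation ln k = ln A - Ea/(R*T).
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1
T = np.array([303., 313., 323., 333.])           # K, hypothetical
k = np.array([2.1e-4, 4.6e-4, 9.5e-4, 1.9e-3])   # s^-1, hypothetical

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R          # J mol^-1
A = np.exp(intercept)    # same units as k
print(f"Ea = {Ea/1000:.1f} kJ/mol, A = {A:.3g} s^-1")
```

Reaction orders would be estimated analogously from the concentration and pressure dependence of the rate, and a full network fit optimises all steps simultaneously.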


Guidelines for assessment of bone microstructure in rodents using micro–computed tomography

JOURNAL OF BONE AND MINERAL RESEARCH, Issue 7 2010
Mary L Bouxsein
Abstract Use of high-resolution micro–computed tomography (µCT) imaging to assess trabecular and cortical bone morphology has grown immensely. There are several commercially available µCT systems, each with different approaches to image acquisition, evaluation, and reporting of outcomes. This lack of consistency makes it difficult to interpret reported results and to compare findings across different studies. This article addresses this critical need for standardized terminology and consistent reporting of parameters related to image acquisition and analysis, and key outcome assessments, particularly with respect to ex vivo analysis of rodent specimens. Thus the guidelines herein provide recommendations regarding (1) standardized terminology and units, (2) information to be included in describing the methods for a given experiment, and (3) a minimal set of outcome variables that should be reported. Whereas the specific research objective will determine the experimental design, these guidelines are intended to ensure accurate and consistent reporting of µCT-derived bone morphometry and density measurements. In particular, the methods section for papers that present µCT-based outcomes must include details of the following scan aspects: (1) image acquisition, including the scanning medium, X-ray tube potential, and voxel size, as well as clear descriptions of the size and location of the volume of interest and the method used to delineate trabecular and cortical bone regions, and (2) image processing, including the algorithms used for image filtration and the approach used for image segmentation. Morphometric analyses should be based on 3D algorithms that do not rely on assumptions about the underlying structure whenever possible. When reporting µCT results, the minimal set of variables that should be used to describe trabecular bone morphometry includes bone volume fraction and trabecular number, thickness, and separation. The minimal set of variables that should be used to describe cortical bone morphometry includes total cross-sectional area, cortical bone area, cortical bone area fraction, and cortical thickness. Other variables also may be appropriate depending on the research question and technical quality of the scan. Standard nomenclature, outlined in this article, should be followed for reporting of results. © 2010 American Society for Bone and Mineral Research [source]
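As a sketch of how the recommended minimal outcome set could be captured in software, the container below lists the trabecular and cortical variables named in the abstract, with the standard bone-morphometry abbreviations noted in comments; the class layout, field names and unit choices are illustrative assumptions rather than part of the guidelines.

```python
# Illustrative sketch: a container for the minimal set of µCT outcome variables
# named in the guidelines (trabecular: bone volume fraction, number, thickness,
# separation; cortical: total area, cortical area, area fraction, thickness).
# The class, field names and units are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class TrabecularMorphometry:
    bv_tv: float    # bone volume fraction (BV/TV)
    tb_n: float     # trabecular number (Tb.N), 1/mm
    tb_th: float    # trabecular thickness (Tb.Th), mm
    tb_sp: float    # trabecular separation (Tb.Sp), mm

@dataclass
class CorticalMorphometry:
    tt_ar: float        # total cross-sectional area (Tt.Ar), mm^2
    ct_ar: float        # cortical bone area (Ct.Ar), mm^2
    ct_ar_tt_ar: float  # cortical area fraction (Ct.Ar/Tt.Ar)
    ct_th: float        # average cortical thickness (Ct.Th), mm

@dataclass
class MicroCTReport:
    """Scan metadata the guidelines ask to report alongside the outcomes."""
    scanning_medium: str
    tube_potential_kvp: float
    voxel_size_um: float
    trabecular: TrabecularMorphometry
    cortical: CorticalMorphometry
```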


View planning and automated data acquisition for three-dimensional modeling of complex sites

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 11-12 2009
Paul S. Blaer
Constructing highly detailed three-dimensional (3-D) models of large complex sites using range scanners can be a time-consuming manual process. One of the main drawbacks is determining where to place the scanner to obtain complete coverage of a site. We have developed a system for automatic view planning called VuePlan. When combined with our mobile robot, AVENUE, we have a system that is capable of modeling large-scale environments with minimal human intervention throughout both the planning and acquisition phases. The system proceeds in two distinct stages. In the initial phase, the system is given a two-dimensional site footprint with which it plans a minimal set of sufficient and properly constrained covering views. We then use a 3-D laser scanner to take scans at each of these views. When this planning system is combined with our mobile robot, it automatically computes and executes a tour of these viewing locations and acquires scans at each of them with the robot's onboard laser scanner. These initial scans serve as an approximate 3-D model of the site. The planning software then enters a second phase in which it updates this model by using a voxel-based occupancy procedure to plan the next best view (NBV). This NBV is acquired, and further NBVs are sequentially computed and acquired until an accurate and complete 3-D model is obtained. A simulator tool that we developed has allowed us to test our entire view planning algorithm on simulated sites. We have also successfully used our two-phase system to construct precise 3-D models of real-world sites located in New York City: Uris Hall on the campus of Columbia University and Fort Jay on Governors Island. © 2009 Wiley Periodicals, Inc. [source]
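The first planning phase, choosing a small set of views that jointly cover a 2-D site footprint, is essentially a set-cover problem. The sketch below shows a plain greedy set-cover heuristic over precomputed visibility sets; the visibility computation and VuePlan's actual constraints (range, grazing angle, overlap between scans) are not reproduced here, and the function and variable names are assumptions.

```python
# Illustrative sketch: greedy selection of scanner viewpoints so that the
# chosen views jointly cover all boundary elements of a 2-D site footprint.
# `visibility` maps each candidate viewpoint to the set of boundary-element
# ids it can see; computing it (ray casting, range and grazing-angle
# constraints) is outside this sketch.
from typing import Dict, Hashable, List, Set

def plan_covering_views(visibility: Dict[Hashable, Set[int]],
                        targets: Set[int]) -> List[Hashable]:
    uncovered = set(targets)
    plan: List[Hashable] = []
    while uncovered:
        # pick the viewpoint that newly covers the most boundary elements
        best = max(visibility, key=lambda v: len(visibility[v] & uncovered))
        gain = visibility[best] & uncovered
        if not gain:        # remaining elements are not visible from any view
            break
        plan.append(best)
        uncovered -= gain
    return plan

# usage with toy data: three candidate views, six boundary elements
views = {"A": {1, 2, 3}, "B": {3, 4, 5}, "C": {5, 6}}
print(plan_covering_views(views, targets={1, 2, 3, 4, 5, 6}))  # ['A', 'B', 'C']
```

Greedy set cover is only one reasonable heuristic for this phase; the second phase described in the abstract then refines the resulting approximate model with next-best-view planning.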


Structure solution of the basic decagonal Al–Co–Ni phase by the atomic surfaces modelling method

ACTA CRYSTALLOGRAPHICA SECTION B, Issue 1 2002
Antonio Cervellino
The atomic surfaces modelling technique has been used to solve the structure of the basic Ni-rich Al–Co–Ni decagonal phase. Formula Al70.6Co6.7Ni22.7, space group , five-dimensional unit-cell parameters: d1 = d4 = 4.752 (3) Å, d2 = d3 = 3.360 (2) Å, d5 = 8.1710 (2) Å; α12 = α34 = 69.295°, α13 = α24 = 45°, α14 = 41.410°, α23 = αi5 = 90° (i = 1–4), V = 291.2 (7) Å5; Dx = 3.887 Mg m−3. Refinement based on |F|; 2767 unique reflections (|F| > 0), 749 parameters, R = 0.17, wR = 0.06. Describing the structure of quasicrystals embedded in n-dimensional superspace in principle takes advantage of n-dimensional periodicity to select the minimal set of degrees of freedom for the structure. The method of modelling of the atomic surfaces yielded the first fully detailed structure solution of this phase. Comparison with numerous former, less accurate models confirms several features already derived, but adds essential new insight into the structure and its complexity. The atoms fill the space forming recurrent structure motifs, which we will (generically) refer to as clusters. However, no unique cluster exists, although the differences are small. Each cluster shows a high degree of structural disorder. This gives rise to a large configurational entropy, as expected in a phase that is stable at high temperature. On the other hand, the cluster spatial arrangement is perfectly quasiperiodic. These considerations, corroborated by analysis of the structural relationship with neighbouring periodic phases, strongly suggest the existence of a non-local, long-range interaction term in the total energy which may be essential to the stability. [source]


Genetic parsimony: a factor in the evolution of complexity, order and emergence

BIOLOGICAL JOURNAL OF THE LINNEAN SOCIETY, Issue 2 2006
A. R. D. STEBBING
Two conjectures, drawn from Gregory Chaitin's Algorithmic Information Theory, are examined with respect to the relationship between an algorithm and its product; in particular his finding that, where an algorithm is minimal, its length provides a measure of the complexity of the product. Algorithmic complexity is considered from the perspective of the relationship between genotype and phenotype, which Chaitin suggests is analogous to other algorithm-product systems. The first conjecture is that the genome is a minimal set of algorithms for the phenotype. Evidence is presented for a factor, here termed 'genetic parsimony', which is thought to have helped minimize the growth of genome size during evolution. Species that depend on rapid replication, such as prokaryotes, which are generally r-selected, are more likely to have small genomes, while the K-strategists accumulate introns and have large genomes. The second conjecture is that genome size could provide a measure of organism complexity. A surrogate index for coding DNA is in agreement with an established phenotypic index (number of cell types) in exhibiting an evolutionary trend of increasing organism complexity over time. Evidence for genetic parsimony indicates that simplicity in coding has been selected, and is responsible for phenotypic order. It is proposed that order evolved because order in the phenotype can be encoded more economically than disorder. Thus order arises due to selection for genetic parsimony, as does the evolution of other 'emergent' properties. © 2006 The Linnean Society of London, Biological Journal of the Linnean Society, 2006, 88, 295–308. [source]


A Global Sensitivity Test for Evaluating Statistical Hypotheses with Nonidentifiable Models

BIOMETRICS, Issue 2 2010
D. Todem
Summary We consider the problem of evaluating a statistical hypothesis when some model characteristics are nonidentifiable from observed data. Such a scenario is common in meta-analysis for assessing publication bias and in longitudinal studies for evaluating a covariate effect when dropouts are likely to be nonignorable. One possible approach to this problem is to fix a minimal set of sensitivity parameters conditional upon which hypothesized parameters are identifiable. Here, we extend this idea and show how to evaluate the hypothesis of interest using an infimum statistic over the whole support of the sensitivity parameter. We characterize the limiting distribution of the statistic as a process in the sensitivity parameter, which involves a careful theoretical analysis of its behavior under model misspecification. In practice, we suggest a nonparametric bootstrap procedure to implement this infimum test as well as to construct confidence bands for simultaneous pointwise tests across all values of the sensitivity parameter, adjusting for multiple testing. The methodology's practical utility is illustrated in an analysis of a longitudinal psychiatric study. [source]
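A hedged sketch of the infimum-test idea follows: fit the model at each value of the sensitivity parameter on a grid approximating its support, form a Wald-type statistic for the hypothesis at each grid point, take the infimum, and calibrate it with a nonparametric bootstrap. The placeholder fit_model function, the recentred bootstrap and the grid are illustrative assumptions; the paper derives the limiting process and the simultaneous bands more carefully.

```python
# Illustrative sketch of an infimum test over a nonidentifiable sensitivity
# parameter. `fit_model(data, s)` is a user-supplied placeholder that returns
# (estimate, standard error) of the hypothesized parameter for a fixed
# sensitivity value s; `data` is assumed to be a NumPy array of observations.
import numpy as np

def wald_profile(data, grid, fit_model, center=None):
    """Wald statistics across the sensitivity-parameter grid.
    center[j], if given, shifts the null value at grid point j (bootstrap use)."""
    out = []
    for j, s in enumerate(grid):
        est, se = fit_model(data, s)
        null_value = 0.0 if center is None else center[j]
        out.append(((est - null_value) / se) ** 2)
    return np.array(out)

def infimum_test(data, grid, fit_model, n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    t_obs = wald_profile(data, grid, fit_model).min()      # infimum statistic
    # bootstrap null distribution: recenter each resampled statistic at the
    # original estimates so resamples mimic data generated under the null
    centers = np.array([fit_model(data, s)[0] for s in grid])
    n = len(data)
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        resample = data[rng.integers(0, n, size=n)]
        t_boot[b] = wald_profile(resample, grid, fit_model, center=centers).min()
    return t_obs, float(np.mean(t_boot >= t_obs))          # statistic, p-value
```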


Ergodicity for the Navier-Stokes equation with degenerate random forcing: Finite-dimensional approximation

COMMUNICATIONS ON PURE & APPLIED MATHEMATICS, Issue 11 2001
Weinan E
We study Galerkin truncations of the two-dimensional Navier-Stokes equation under degenerate, large-scale, stochastic forcing. We identify the minimal set of modes that has to be forced in order for the system to be ergodic. Our results rely heavily on the structure of the nonlinearity. © 2001 John Wiley & Sons, Inc. [source]
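For orientation, the setting can be written schematically in the standard vorticity form of the 2-D Navier-Stokes equation, with white-in-time forcing restricted to a finite set of Fourier modes and a spectral Galerkin truncation; the notation below is a generic sketch, not the paper's exact formulation.

```latex
% Schematic setting (generic notation; not the paper's exact formulation):
% 2-D Navier--Stokes in vorticity form, stochastic forcing acting only on a
% finite set Z_0 of Fourier modes, Galerkin truncation to modes |k| <= N.
\begin{align*}
  \partial_t \omega + (u \cdot \nabla)\,\omega
    &= \nu \Delta \omega + \sum_{k \in Z_0} \sigma_k\, e^{\mathrm{i} k \cdot x}\, \dot W_k(t),
    \qquad u = \mathcal{K} * \omega \quad \text{(Biot--Savart law)},\\
  \omega^{N}(x,t)
    &= \sum_{|k| \le N} \omega_k(t)\, e^{\mathrm{i} k \cdot x}.
\end{align*}
% The question is then which minimal forced set Z_0 makes the truncated
% (finite-dimensional) system ergodic.
```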


RelACCS-FP: A Structural Minimalist Approach to Fingerprint Design

CHEMICAL BIOLOGY & DRUG DESIGN, Issue 5 2008
Ye Hu
The design and evaluation of structural key-type fingerprints consisting of only 10–30 substructures isolated from randomly generated fragment populations of different classes of active compounds is reported. To identify minimal sets of fragments that carry substantial compound class-specific information, fragment frequency calculations are applied to guide fingerprint generation. These compound class-directed and extremely small structural fingerprints push the design of so-called mini-fingerprints to the limit and are the shortest bit-string fingerprints reported to date. For the application of these relative frequency-based, activity-class-characteristic substructure fingerprints, a bit density-dependent similarity metric is introduced that makes it possible to adjust similarity coefficients for individual compound classes and balance the recall of active compounds with database selection size. In similarity search trials, these small compound class-directed fingerprints enrich active compounds in relatively small database selection sets and approach or exceed the performance of widely used structural fingerprints of much larger size and higher complexity. [source]
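The abstract does not give the bit density-dependent metric itself, so the sketch below only illustrates the general ingredients: very short substructure fingerprints represented as bit vectors, Tanimoto similarity, and a class-specific acceptance threshold that is adjusted using the query fingerprint's bit density. The scaling rule and all names are assumptions, not the RelACCS-FP metric.

```python
# Illustrative sketch: very short structural-key fingerprints as bit vectors,
# Tanimoto similarity, and a class-specific acceptance threshold scaled by the
# query fingerprint's bit density. The scaling rule is an assumption used only
# to illustrate "bit density-dependent" similarity; it is not the RelACCS-FP metric.
from typing import Sequence

def tanimoto(a: Sequence[int], b: Sequence[int]) -> float:
    on_a, on_b = sum(a), sum(b)
    common = sum(x & y for x, y in zip(a, b))
    union = on_a + on_b - common
    return common / union if union else 0.0

def passes(query: Sequence[int], candidate: Sequence[int],
           base_threshold: float = 0.6, density_weight: float = 0.3) -> bool:
    density = sum(query) / len(query)        # fraction of bits set in the query
    threshold = base_threshold * (1.0 - density_weight * (1.0 - density))
    return tanimoto(query, candidate) >= threshold

# toy 12-bit fingerprints standing in for 10-30 class-specific substructures
query     = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
candidate = [1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0]
print(tanimoto(query, candidate), passes(query, candidate))
```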