Special Case


Selected Abstracts

A Perspective on Achieving Equality in Mathematics for Fourth Grade Girls: A Special Case

Christine G. Renne
How can and do teachers create equal access within everyday classroom lessons and establish opportunities for girls to participate fully? What contexts contribute to equity? In contrast to classrooms where boys receive more attention, encouragement, and content-area instruction, Ms. Jeffreys conducts whole class lessons in her fourth grade classroom where girls participate equally and successfully with boys during mathematics. To ascertain what contributes to the equal participation, I use interactional analysis to closely examine two mathematics lessons. Part of Ms. Jeffreys' success lies in altering normative classroom discourse and in the assertive context created and sustained by the math, science, and technology magnet school setting. However, another layer of complexity is introduced: to teach her students at their instructional level, Ms. Jeffreys groups her students by their ability to pass timed multiplication tests. By instituting a form of tracking, Ms. Jeffreys also legitimates girls as knowledgeable, both socially and academically, by their membership in the top math group. While policy guidelines exhort teachers to provide equal access to curriculum, actually accomplishing a first step of access to participation in the routine day-to-day classroom talk remains extremely difficult. [source]

Gaussian approximation of exponential type orbitals based on B functions

Didier Pinchon
Abstract This work gives new, highly accurate optimized Gaussian series expansions for the B functions used in molecular quantum mechanics. These functions are generally chosen because of their compact Fourier transform, following Shavitt. In this work, the inverse Laplace transform in the square root of the variable is used for Gauss quadrature. Two procedures for obtaining accurate Gaussian expansions have been compared for the required extended precision arithmetic. The first is based on Gaussian quadratures and the second on direct optimization. Both use the Maple computer algebra system. Numerical results are tabulated and compared with previous work. Special cases are found to agree before pushing the optimization technique further. The optimal Gaussian expansions of B functions obtained in this work are available for reference. © 2008 Wiley Periodicals, Inc. Int J Quantum Chem, 2009 [source]
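The core idea, representing a Slater-type exponential by a finite sum of Gaussians, can be sketched with a crude linear least-squares fit. The even-tempered exponents and radial grid below are illustrative assumptions, not the optimized expansions of the paper:

```python
import numpy as np

r = np.linspace(0.01, 8.0, 400)       # radial grid in a.u. (assumed)
sto = np.exp(-r)                      # 1s Slater-type orbital, zeta = 1

alphas = 0.05 * 3.0 ** np.arange(8)   # even-tempered Gaussian exponents (assumed)
A = np.exp(-np.outer(r**2, alphas))   # basis matrix, one Gaussian per column
c, *_ = np.linalg.lstsq(A, sto, rcond=None)  # linear expansion coefficients

max_err = np.max(np.abs(A @ c - sto))
print(max_err)                        # small, but limited by the cusp at r = 0
```

A nonlinear optimization of the exponents themselves, as in the paper, would reduce the residual well below what this fixed-exponent fit achieves.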

On the use of second-order descriptors to predict queueing behavior of MAPs

Allan T. Andersen
Abstract The contributions of this paper are the following: We derive a formula for the IDI (Index of Dispersion for Intervals) for the Markovian Arrival Process (MAP). We show that two-state MAPs with identical fundamental rate, IDI and IDC (Index of Dispersion for Counts) define interval stationary point processes that are stochastically equivalent; this is also true for the time stationary point processes they define. Special cases of the two-state MAP are frequently used as source models in the literature. The result shows that fitting the rate, IDC and IDI of a source completely determines the interval stationary and time stationary behavior of the two-state model. We give various illustrative numerical examples on the merits of predicting queueing behavior on the basis of first- and second-order descriptors, by considering the queueing behavior of MAPs with constant fundamental rate and IDC, and with constant fundamental rate and IDI, respectively. Disturbing results are presented on how different the queueing behavior can be with these descriptors fixed. Even MAPs with no correlations in the counting process, i.e., IDC(t) = 1, are shown to have very different queueing behavior. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 391–409, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10015 [source]
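As a point of reference for the second-order descriptors discussed here, the IDC is Var N(t)/E N(t) for the counting process N(t). A Monte Carlo estimate for a plain Poisson process, the memoryless special case where IDC(t) = 1 for all t, might look like this (rate, horizon, and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t, n_paths = 2.0, 10.0, 20000            # arrival rate, horizon, replications
counts = rng.poisson(lam * t, size=n_paths)   # N(t) for a Poisson process
idc = counts.var() / counts.mean()            # IDC(t) = Var N(t) / E N(t)
print(round(idc, 2))                          # close to 1 for Poisson arrivals
```

A bursty MAP would give IDC(t) well above 1 at large t, which is exactly the kind of second-order information the paper shows can still leave queueing behavior underdetermined.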

The Likelihood Ratio Test for the Rank of a Cointegration Submatrix

Paolo Paruolo
Abstract This paper proposes a likelihood ratio test for rank deficiency of a submatrix of the cointegrating matrix. Special cases of the test include invalid normalization in systems of cointegrating equations, the feasibility of permanent–transitory decompositions, and subhypotheses related to neutrality and long-run Granger noncausality. The proposed test has a chi-squared limit distribution and indicates the validity of the normalization with probability one in the limit, for valid normalizations. The asymptotic properties of several derived estimators of the rank are also discussed. It is found that a testing procedure that starts from the hypothesis of minimal rank is preferable. [source]


Nguyen Van Quang
Summary The condition of the strong law of large numbers is obtained for sequences of random elements in type p Banach spaces that are blockwise orthogonal. The current work extends a result of Chobanyan & Mandrekar (2000) [On Kolmogorov SLLN under rearrangements for orthogonal random variables in a B-space. J. Theoret. Probab. 13, 135–139]. Special cases of the main results are presented as corollaries, and illustrative examples are provided. [source]

Entwurf und überschlägige Berechnung von kreiszylindrischen Schalentragwerken (Design and Approximate Calculation of Circular Cylindrical Shell Structures)

Herbert Hotzler Dr.-Ing.
After a definition of shells and folded plates, the load-bearing behaviour of circular cylindrical shell structures is explained. For a shape-preserving deformation, in which the cross-section of a shell structure deflects only vertically under vertical loading and the normal stresses σx are distributed almost linearly over the depth of the cross-section, as in a beam, design guidance is given and simple formulas for calculating the stresses are developed. These formulas allow finite-element computer calculations to be assessed and checked. The special cases of temperature, one-sided loading and point loads are discussed, and buckling is explained in detail. The approximate calculation is demonstrated on an example and compared with the results of an exact calculation. [source]

Motional smearing of electrically recovered couplings measured from multipulse transients

Scott A. Riley
Abstract The measurement of residual dipolar and quadrupolar coupling constants in the liquid phase by using an electric field to destroy the isotropic nature of molecular tumbling is complicated by charge-induced turbulent motion. In many cases this motion is due to charge injection at electrode surfaces, an effect that leads to an apparent removal of electrically recovered anisotropic spectral splittings when measured from a spin-echo envelope modulation produced by a train of radio frequency (rf) pulses. To understand this averaging, the effect of quadrupolar couplings and enhanced molecular diffusion on free-induction, spin-echo, and Carr–Purcell signals is analytically determined in the special case of homogeneous rf pulses. Additional signal damping due to rf inhomogeneity and coupling constant heterogeneity is determined by numerically extending the kernel formalism introduced by Herzog and Hahn to understand spin diffusion in solids. Finally, the merit of the numerical approach is tested by comparison with analytical results for homogeneous rf pulses and experimental results for perdeuterated nitrobenzene involving inhomogeneous rf pulses and coupling heterogeneity. © 2001 John Wiley & Sons, Inc. Concepts Magn Reson 13: 171–189, 2001 [source]

Data partitioning-based parallel irregular reductions

Eladio Gutiérrez
Abstract Different parallelization methods for irregular reductions on shared memory multiprocessors have been proposed in the literature in recent years. We have classified all these methods and analyzed them in terms of a set of properties: data locality, memory overhead, exploited parallelism, and workload balancing. In this paper we propose several techniques to increase the amount of exploited parallelism and to introduce load balancing into an important class of these methods. Regarding parallelism, the proposed solution is based on the partial expansion of the reduction array. Load balancing is discussed in terms of two techniques. The first technique is a generic one, as it deals with any kind of load imbalance present in the problem domain. The second technique handles a special case of load imbalance which occurs whenever a large number of write operations are concentrated on small regions of the reduction arrays. Efficient implementations of the proposed optimizing solutions for a particular method are presented, experimentally tested on static and dynamic kernel codes, and compared with other parallel reduction methods. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Dr Harold Frederick Shipman: An enigma

John Gunn
Dr. Shipman was the worst known serial killer in British history, at least in terms of numbers of victims, and possibly the worst in world history, if politicians are excluded. He killed at least 215 patients and may have begun his murderous career at the age of 25, within a year of finishing his medical training. His case has had a profound impact on the practice of medicine in the United Kingdom. Was he a special case? What were the origins of this behaviour? Could the behaviour have been prevented? It is necessary to learn what we can from a few personal facts and largely circumstantial evidence. He withheld himself from any useful clinical investigation or treatment once he had been taken into custody. Could he have been treated at any stage? Copyright © 2010 John Wiley & Sons, Ltd. [source]

In vivo and in vitro analysis of the vasculogenic potential of avian proepicardial and epicardial cells

Juan A. Guadix
Abstract Coronary vessel formation is a special case in the context of embryonic vascular development. A major part of the coronary cellular precursors (endothelial, smooth muscle, and fibroblastic cells) derive from the proepicardium and the epicardium in what can be regarded as a late event of angioblastic and smooth muscle cell differentiation. Thus, coronary morphogenesis is dependent on the epithelial–mesenchymal transformation of the proepicardium and the epicardium. In this study, we present several novel observations about the process of coronary vasculogenesis in avian embryos, namely: (1) The proepicardium displays a high vasculogenic potential, both in vivo (as shown by heterotopic transplants) and in vitro, which is modulated by vascular endothelial growth factor (VEGF) and basic fibroblast growth factor signals; (2) Proepicardial and epicardial cells co-express receptors for platelet-derived growth factor-BB and VEGF; (3) Coronary angioblasts (found all through the epicardial, subepicardial, and compact myocardial layers) express the Wilms' tumor-associated transcription factor and the retinoic acid-synthesizing enzyme retinaldehyde dehydrogenase-2, two markers of the coelomic epithelium involved in coronary endothelium development. All these results contribute to the development of our knowledge on the vascular potential of proepicardial/epicardial cells, the existing interrelationships between the differentiating coronary cell lineages, and the molecular mechanisms involved in the regulation of coronary morphogenesis. Developmental Dynamics 235:1014–1026, 2006. © 2006 Wiley-Liss, Inc. [source]

Regional analysis of bedrock stream long profiles: evaluation of Hack's SL form, and formulation and assessment of an alternative (the DS form)

Geoff Goldrick
Abstract The equilibrium form of the fluvial long profile has been used to elucidate a wide range of aspects of landscape history, including tectonic activity in tectonic collision zones and in continental margin and other intraplate settings, as well as other base-level changes such as those due to sea-level fluctuations. The Hack SL form of the long profile, which describes a straight line on a log–normal plot of elevation (normal) versus distance (logarithmic), is the equilibrium long profile form that has been most widely used in such studies; slope–area analysis has also been used in recent years. We show that the SL form is a special case of a more general form of the equilibrium long profile (here called the DS form) that can be derived from the power relationship between stream discharge and downstream distance, and the dependence of stream incision on stream power. The DS form provides a better fit than the SL form to river long profiles in an intraplate setting in southeastern Australia experiencing low rates of denudation and mild surface uplift. We conclude that, if an a priori form of the long profile is to be used for investigations of regional landscape history, the DS form is preferable. In particular, the DS form in principle enables equilibrium steepening due to an increase in channel substrate lithological resistance (parallel shift in the DS plot) to be distinguished from disequilibrium steepening due to long profile rejuvenation (disordered outliers on the DS plot). Slope–area analysis and the slope–distance (DS) approach outlined here are complementary approaches, reflecting the close relationship between downstream distance and downstream catchment area. Copyright © 2006 John Wiley & Sons, Ltd. [source]
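Assuming the DS form amounts to a power law between local channel slope and downstream distance, S = k·D^(−λ), with the Hack SL form recovered as the special case λ = 1, the parameters can be estimated from a log–log regression of slope against distance. The values below are synthetic, purely to show the fitting step:

```python
import numpy as np

k, lam = 0.5, 0.7                  # hypothetical DS-form parameters
D = np.linspace(1.0, 100.0, 200)   # downstream distance
S = k * D ** (-lam)                # equilibrium channel slope

# fit log S = log k - lam * log D; an SL profile would return lam_hat == 1
slope, intercept = np.polyfit(np.log(D), np.log(S), 1)
lam_hat = -slope
print(round(lam_hat, 3))           # recovers 0.7
```

With real long-profile data, outliers above the fitted line on the DS plot would flag rejuvenated reaches, and parallel offsets would flag lithological steepening, as the abstract describes.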

Pounding of structures modelled as non-linear impacts of two oscillators

K. T. Chau
Abstract A new formulation is proposed to model pounding between two adjacent structures, with natural periods T1 and T2 and damping ratios ζ1 and ζ2 under harmonic earthquake excitation, as non-linear Hertzian impact between two single-degree-of-freedom oscillators. For the case of rigid impacts, a special case of our analytical solution has been given by Davis ('Pounding of buildings modelled by an impact oscillator', Earthquake Engineering and Structural Dynamics, 1992; 21:253–274) for an oscillator pounding on a stationary barrier. Our analytical predictions for rigid impacts agree qualitatively with our numerical simulations for non-rigid impacts. When the difference in natural periods between the two oscillators increases, the impact velocity also increases drastically. The impact velocity spectrum is, however, relatively insensitive to the standoff distance. The maximum relative impact velocity of the coupled system can occur at an excitation period Tn* which is either between those of the two oscillators or less than both of them, depending on the ratios T1/T2 and ζ1/ζ2. Although the pounding force between two oscillators has been primarily modelled by the Hertz contact law, parametric studies show that the maximum relative impact velocity is not very sensitive to changes in the contact parameters. Copyright © 2001 John Wiley & Sons, Ltd. [source]
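A minimal sketch of the Hertzian contact idea, reduced to a single mass striking a rigid stop across a standoff gap with contact force F = k_h·δ^(3/2), can be integrated directly. Mass, stiffness, gap, and approach velocity are toy values, not parameters from the paper:

```python
# single mass approaching a rigid stop across a standoff gap;
# Hertz contact law F = kh * delta**1.5 once the gap is closed (toy parameters)
m, kh, gap = 1.0, 1.0e4, 0.5
x, v, dt = 0.0, 1.0, 1.0e-4     # initial position, approach velocity, time step

for _ in range(20000):          # 2 s of semi-implicit (symplectic) Euler
    pen = max(x - gap, 0.0)     # penetration depth into the barrier
    v += -(kh / m) * pen ** 1.5 * dt
    x += v * dt

print(round(v, 1))              # elastic impact: speed preserved, sign reversed
```

Because the Hertz spring is elastic, the rebound speed equals the approach speed; the paper's insensitivity of impact velocity to the contact parameters is consistent with this energy argument.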

A niche for neutrality

Peter B. Adler
Abstract Ecologists now recognize that controversy over the relative importance of niches and neutrality cannot be resolved by analyzing species abundance patterns. Here, we use classical coexistence theory to reframe the debate in terms of stabilizing mechanisms (niches) and fitness equivalence (neutrality). The neutral model is a special case where stabilizing mechanisms are absent and species have equivalent fitness. Instead of asking whether niches or neutral processes structure communities, we advocate determining the degree to which observed diversity reflects strong stabilizing mechanisms overcoming large fitness differences or weak stabilization operating on species of similar fitness. To answer this question, we propose combining data on per capita growth rates with models to: (i) quantify the strength of stabilizing processes; (ii) quantify fitness inequality and compare it with stabilization; and (iii) manipulate frequency dependence in growth to test the consequences of stabilization and fitness equivalence for coexistence. [source]
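In the Lotka–Volterra setting often used to operationalize this framework, niche overlap ρ and a fitness ratio jointly determine coexistence: the species coexist when ρ < κ2/κ1 < 1/ρ, and the neutral model is the special case ρ = 1 with equal fitness. A sketch with hypothetical competition coefficients, using the standard definitions from modern coexistence theory:

```python
import numpy as np

# Lotka-Volterra competition coefficients (hypothetical community)
a11, a12, a21, a22 = 1.0, 0.6, 0.5, 1.0

rho = np.sqrt((a12 * a21) / (a11 * a22))        # niche overlap; 1 - rho = stabilization
fit_ratio = np.sqrt((a11 * a12) / (a22 * a21))  # fitness ratio kappa2 / kappa1
coexist = rho < fit_ratio < 1.0 / rho           # mutual invasibility condition
print(round(rho, 3), coexist)                   # 0.548 True
```

Here stabilization (1 − ρ ≈ 0.45) comfortably exceeds the modest fitness difference, the "weak stabilization operating on species of similar fitness" end of the spectrum the authors describe.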

A Bayesian model for estimating the effects of drug use when drug use may be under-reported

ADDICTION, Issue 11 2009
Garnett P. McMillan
ABSTRACT Aims We present a statistical model for evaluating the effects of substance use when substance use might be under-reported. The model is a special case of the Bayesian formulation of the 'classical' measurement error model, requiring that the analyst quantify prior beliefs about rates of under-reporting and the true prevalence of substance use in the study population. Design Prospective study. Setting A diversion program for youths on probation for drug-related crimes. Participants A total of 257 youths at risk for re-incarceration. Measurements The effects of true cocaine use on recidivism risks while accounting for possible under-reporting. Findings The proposed model showed a 60% lower mean time to re-incarceration among actual cocaine users. This effect size is about 75% larger than that estimated in the analysis that relies only on self-reported cocaine use. Sensitivity analyses comparing different prior beliefs about the prevalence of cocaine use and rates of under-reporting universally indicate larger effects than the analysis that assumes that everyone tells the truth about their drug use. Conclusion The proposed Bayesian model allows one to estimate the effect of actual drug use on study outcome measures. [source]
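One way to sketch the idea, not the authors' exact model, is a grid-based posterior for true prevalence when self-reports are thinned by an uncertain reporting probability. The counts and priors below are invented for illustration:

```python
import numpy as np
from math import lgamma

n, y = 257, 60                  # sample size and self-reported users (invented)

def log_binom(n, y, p):
    """Binomial log-likelihood of y 'yes' reports when P(report yes) = p."""
    return (lgamma(n + 1) - lgamma(y + 1) - lgamma(n - y + 1)
            + y * np.log(p) + (n - y) * np.log(1.0 - p))

s = np.linspace(0.40, 0.99, 60)[:, None]   # P(report use | true user)
p = np.linspace(0.01, 0.99, 99)[None, :]   # true prevalence of use

log_post = log_binom(n, y, p * s)          # reports thinned by under-reporting
log_post += 8 * np.log(s) + 3 * np.log(1.0 - s)  # prior s ~ Beta(9, 4), mean ~0.69
post = np.exp(log_post - log_post.max())
post /= post.sum()

mean_p = float((post.sum(axis=0) * p.ravel()).sum())  # posterior mean prevalence
print(round(mean_p, 2), round(y / n, 2))   # exceeds the naive self-report rate
```

As in the paper, allowing for under-reporting pulls the estimated prevalence (and hence any downstream effect of "true" use) above what the raw self-reports imply.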

Structure–activity relationships for acute and chronic toxicity of alcohol ether sulfates

Scott D. Dyer
Abstract Alcohol ether sulfates (AES) are anionic surfactants commonly used in consumer products. Commercial AES alkyl chain lengths range from C12 to C18, with ethoxylate (EO) units ranging from 1 to 5. Alkyl sulfate is a special case of AES with no EO units. Acute and chronic toxicity tests using Ceriodaphnia dubia via a novel flowthrough method were conducted with 18 AES compounds to derive SARs for effects assessment. In general, acute toxicity (48-h LC50) increased with increased alkyl carbon chain length and decreased with increased numbers of EO units. Parabolic structure–chronic (7-d) toxicity relationships were observed for endpoints such as the no-observed-effect concentration, lowest-observed-effect concentration, maximum acceptable toxicant concentration, EC20, and EC50. A linear relationship of the fractional negative-charged surface area (FNSA-3) with acute toxicity was also determined. FNSA-3 refers primarily to the polar head group of AES and secondarily to the alkyl chain. Seventy percent of the variance in the chronic data was addressed with a quadratic equation relating toxicity to alkyl chain length and EO units. Alternatively, the molecular descriptors FNSA-3 and S3P (³χp, the simple third-order path index) were also found to address most of the data nonlinearity. A chronic test conducted with a mixture of four AES components indicated additivity, supporting an effects assessment of AES as a mixture. [source]

Bayesian uncertainty assessment in multicompartment deterministic simulation models for environmental risk assessment

Samantha C. Bates
Abstract We use a special case of Bayesian melding to make inference from deterministic models while accounting for uncertainty in the inputs to the model. The method uses all available information, based on both data and expert knowledge, and extends current methods of 'uncertainty analysis' by updating models using available data. We extend the methodology for use with sequential multicompartment models. We present an application of these methods to deterministic models for concentration of polychlorinated biphenyl (PCB) in soil and vegetables. The results are posterior distributions of concentration in soil and vegetables which account for all available evidence and uncertainty. Model uncertainty is not considered. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Constrained multivariate trend analysis applied to water quality variables

D. M. Cooper
Abstract Constrained multivariate regression analysis is used to model trends and seasonal effects in time series measurements of water quality variables. The constraint used ensures that, when identifying trends, the scientifically important charge balance of model-fitted concentrations is maintained, while accounting for between-variable dependencies. The analysis is a special case of linear reduction of dimensionality which preserves the integrity of a subset of the original variables, while allowing the remainder to be identified as linear combinations of this subset. The technique is applied to water quality measurements made at the outflow from Loch Grannoch, an acid-sensitive loch in Scotland. A reduction in marine ion concentrations is observed in water samples collected four times a year over the period 1988–2000. This is identified with long-term variability in the marine component in rainfall. Separation of the non-marine component of the solute load shows a reduction in non-marine sulphate and calcium concentrations, and an increase in the non-marine sodium concentration. There is no significant change in either alkalinity or acid neutralizing capacity over the period. The reduction in non-marine sulphate is consistent with reductions in atmospheric inputs of sulphate. However, the reduction in sulphate has not been accompanied by a reduction in the acidity of water samples from Loch Grannoch, but by a reduction in calcium concentration and an apparent increase in organic acids, as evidenced by increased dissolved organic carbon concentrations, with possible increases in nitrate and non-marine sodium concentrations. Copyright © 2002 John Wiley & Sons, Ltd. [source]
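The mechanics of fitting a regression subject to a hard linear constraint (such as a charge balance on the fitted concentrations) can be sketched with a KKT system for equality-constrained least squares. The design matrix and constraint below are generic stand-ins, not the authors' water-quality model:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 4))              # design matrix (trend/seasonal terms)
x_true = np.array([1.0, -2.0, 0.5, 0.5])  # true coefficients satisfy the constraint
b = A @ x_true + 0.01 * rng.normal(size=30)

C = np.array([[0.0, 0.0, 1.0, -1.0]])     # constraint Cx = d: coefficients balance
d = np.array([0.0])

# KKT system: minimize ||Ax - b||^2 subject to Cx = d
n = A.shape[1]
K = np.block([[A.T @ A, C.T], [C, np.zeros((1, 1))]])
rhs = np.concatenate([A.T @ b, d])
x = np.linalg.solve(K, rhs)[:n]
print(np.allclose(C @ x, d))              # True: the balance holds exactly
```

An unconstrained fit would satisfy the balance only approximately; the Lagrange-multiplier block enforces it exactly in the fitted values, which is the point of the constrained formulation.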

Intended and unintended consequences of internal motivation to behave nonprejudiced: The case of benevolent discrimination

Jennifer Fehr
Internal motivation to behave nonprejudiced reduces prejudice. The present research looks at the impact of internal motivation in a special case of prejudiced behavior, namely benevolent discrimination. It was hypothesized that internal motivation does not reduce, but rather increases, benevolent discrimination as long as individuals are not aware of its negative consequences. This is because of the positive intention required to show benevolent discrimination. Once the negative consequences have been made salient, internal motivation will facilitate self-criticism of one's own benevolently discriminating behavior, which will be reflected in a more critical reappraisal of previous benevolently discriminating behavior. The predictions were supported in three studies. Study 1 analyzed the impact of internal motivation on benevolent discrimination. Studies 2 and 3 analyzed the effect of internal motivation on the critical reappraisal of one's own benevolently discriminating behavior. The implications for the regulation of benevolent discrimination in the broader context of social discrimination are discussed. Copyright © 2009 John Wiley & Sons, Ltd. [source]

European Parliament and Executive Federalism: Approaching a Parliament in a Semi-Parliamentary Democracy

Philipp Dann
This paper proposes an understanding of the European Parliament not along theories about what the EU should become, but about what it is and surely will continue to be: a very distinct federal structure. The European Parliament is a parliament in an executive federalism, with far-reaching consequences for its form and functions. After outlining the characteristics of this federal structure, these consequences will be demonstrated by analysing the European Parliament in contrast with two ideal types of parliaments: the working parliament, separated from the executive branch and centred around strong committees (like the US Congress), and the debating parliament, characterised by the fusion of parliamentary majority and government as well as by plenary debates (like the British House of Commons). Dwelling thus on a comparison with a legislature in a non-parliamentary federal system, like the US Congress, this paper argues that the European Parliament might best be understood as a special case of a working parliament. Finally, it will be proposed to consider the influence of executive federalism not only as fundamentally shaping the European Parliament but also as rendering the EU generally a semi-parliamentary democracy. [source]

A new space-vector transformation for four-conductor systems

A. Ferrero
Linear transformations are often employed in the study of three-phase systems, since they simplify the equations describing the system behaviour. Among them, the Fortescue, Clarke and Park transformations are very widely employed. In particular, the last one leads to the formulation of the Space-Vector theory, which is currently employed in the fields of AC machine theory, power definitions and active filtering. The greatest limitation in the use of these transformations is that they are restricted to systems with only three conductors. However, four-conductor systems are often present in the common practice of electric systems, and the application of the Space-Vector theory to such systems is not as straightforward as in the case of three-conductor systems: a zero-sequence quantity must be considered separately. This paper proposes a linear transformation that extends the properties of the Space-Vector theory to four-conductor systems, and includes the Park transformation as a special case. The mathematical derivation of this new transformation is reported, and its application is discussed by means of some examples. [source]
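The building block being generalized here can be illustrated with the power-invariant Clarke transform, whose third row carries the zero-sequence component that a four-conductor treatment must retain rather than discard. For a balanced three-phase set the zero-sequence vanishes, which is why three-conductor theory can ignore it:

```python
import numpy as np

# power-invariant Clarke transform; row 3 is the zero-sequence component
C = np.sqrt(2.0 / 3.0) * np.array([
    [1.0, -0.5, -0.5],
    [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2],
    [1 / np.sqrt(2), 1 / np.sqrt(2), 1 / np.sqrt(2)],
])

t = np.linspace(0.0, 0.02, 50, endpoint=False)   # one period at 50 Hz (assumed)
w = 2 * np.pi * 50.0
i_abc = np.array([np.cos(w * t),
                  np.cos(w * t - 2 * np.pi / 3),
                  np.cos(w * t + 2 * np.pi / 3)])  # balanced three-phase currents

i_ab0 = C @ i_abc
print(np.allclose(i_ab0[2], 0.0))   # True: no zero-sequence when balanced
```

With an unbalanced set (or a fourth, neutral conductor carrying current) the third row becomes nonzero, which is exactly the quantity the proposed transformation folds into the space-vector framework.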

CGU-frame-based representations and their connection with Reed–Solomon and DCT/DST coding schemes

Fatma Abdelkefi
We investigate the use of overcomplete frame representations to correct errors occurring over burst-based transmission channels or channels leading to isolated errors. We show that when the overcomplete signal representation is based on a class of frames called cyclic geometrically uniform (CGU) finite frames, the family of frames containing finite harmonic frames (in both the real and the complex setting), this representation becomes equivalent to a Reed–Solomon (RS) coding scheme. Hence, introducing an RS decoding procedure at the receiver removes the errors introduced by the transmission channel and consequently results in a quasi-perfect reconstructed signal. The advantage of this approach is to exploit the RS coding scheme without using it explicitly at the transmitter, which leads to a robust and low-complexity transmission. Furthermore, we prove that discrete cosine transform (DCT) coding is a special case of CGU-frame-based representations, and this property also holds for the discrete sine transform (DST) coding scheme. Simulation results are presented to confirm our claims. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Performance analysis of a generic system model for uncoded IDMA using serial and parallel interference cancellation

Oliver Nagy
This paper shows how to accurately describe a fully synchronised interleave division multiple access (IDMA) scheme without channel coding by a matrix model. This model allows the derivation of the optimal detector and provides additional insights into the IDMA principle, and we show that the matrices are structured and sparse. We use BER and EXIT charts to study the performance of parallel and serial interference cancellation schemes and demonstrate that the latter converges faster and is independent of the scrambling code. Any bit interleaved DS-CDMA system can be viewed as a special case of IDMA, and an IDMA receiver can therefore be used to detect DS-CDMA signals. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Performance of collaborative codes in CSMA/CD environment

F. Gebali
A new medium access control scheme is proposed for implementing collaborative codes in a system using the carrier sense multiple access with collision detection protocol (CC-CSMA/CD). We also propose a new backoff algorithm which is simple to implement and to analyse. A discrete-time Markov chain analytical model is developed for CC-CSMA/CD. The resulting model describes the regular CSMA/CD as a special case. Protocol performance measures such as throughput, packet acceptance probability, average packet delay and channel utilisation were studied. It is found that CC-CSMA/CD offers improvements over a system that uses CSMA/CD in terms of throughput, packet acceptance probability, delay and channel utilisation. Copyright © 2006 AEIT. [source]


EVOLUTION, Issue 2 2010
Nathan D. Jackson
Rivers can act as both islands of mesic refugia for terrestrial organisms during times of aridification and barriers to gene flow, though evidence for long-term isolation by rivers is mixed. Understanding the extent to which riverine barrier effects can be heightened for populations trapped in mesic refugia can help explain maintenance and generation of diversity in the face of Pleistocene climate change. Herein, we implement phylogenetic and population genetic approaches to investigate the phylogeographic structure and history of the ground skink, Scincella lateralis, using mtDNA and eight nuclear loci. We then test several predictions of a river–refugia model of diversification. We recover 14 well-resolved mtDNA lineages distributed east–west along the Gulf Coast with a subset of lineages extending northward. In contrast, ncDNA exhibits limited phylogenetic structure or congruence among loci. However, multilocus population structure is broadly congruent with mtDNA patterns and suggests that deep coalescence rather than differential gene flow is responsible for mtDNA–ncDNA discordance. The observed patterns suggest that most lineages originated from population vicariance due to riverine barriers strengthened during the Plio–Pleistocene by a climate-induced coastal distribution. Diversification due to rivers is likely a special case, contingent upon other environmental or biological factors that reinforce riverine barrier effects. [source]

Structural disorder in amyloid fibrils: its implication in dynamic interactions of proteins

FEBS JOURNAL, Issue 19 2009
P. Tompa
Proteins are occasionally converted from their normal soluble state to highly ordered fibrillar aggregates (amyloids), which give rise to pathological conditions that range from neurodegenerative disorders to systemic amyloidoses. Recent methodological advances in solid-state NMR and EPR spectroscopy have enabled determination of the 3D structure of several amyloids at residue-level resolution. The general picture that emerges is that amyloids constitute parallel β-sheets, in which individual polypeptide chains run roughly perpendicular to the major axis of the fibril and are stacked in-register. Thus, the unifying theme of amyloid formation is the structural transition from an initial globular or intrinsically disordered state to a highly ordered regular form. In this minireview, we show that this description is somewhat oversimplified, because part of the polypeptide chain in the amyloid remains intrinsically disordered and does not become part of the ordered core. As demonstrated through examples such as the amyloids of α-synuclein and the Aβ peptide and the yeast prions HET-s and Ure2p, these disordered segments are depleted in amino acids NQFYV and are enriched in DEKP. They are also significantly more charged and have a higher predicted disorder value than segments in the cross-β core. We suggest that structural disorder in amyloid is a special case of 'fuzziness', i.e. disorder in the bound state that may serve different functions, such as the accommodation of destabilizing residues and the mediation of secondary interactions between protofibrils. [source]

Free fermions violate the area law for entanglement entropy

R.C. Helling
Abstract We show that the entanglement entropy associated to a region grows faster than the area of its boundary surface. This is done by proving a special case of a conjecture due to Widom that yields a surprisingly simple expression for the leading behaviour of the entanglement entropy. [source]
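
As a hedged aside (the exact prefactor is the substance of the Widom conjecture and is not reproduced here), the scaling contrast the abstract describes is conventionally written as follows, for a region of linear size L in d spatial dimensions:

```latex
S(L) \sim c_1\, L^{d-1} \quad \text{(area law)}
\qquad\text{vs.}\qquad
S(L) \sim c_2\, L^{d-1} \ln L \quad \text{(free fermions, } L \to \infty\text{)}
```

The logarithmic factor is the "faster than the area" growth claimed in the abstract.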

From unambiguous quantum state discrimination to quantum state filtering

J.A. Bergou
Unambiguous discrimination among nonorthogonal but linearly independent quantum states is possible with a certain probability of success. Here, we consider a new variant of that problem. Instead of discriminating among all of the N different states, we now ask for less: we want to unambiguously assign the state to one of two complementary subsets of the set of N given non-orthogonal quantum states, each occurring with given a priori probabilities. We refer to the special case in which one subset contains only one state and the other contains the remaining N-1 states as unambiguous quantum state filtering. We present an optimal analytical solution for the special case of N=3, and discuss the optimal strategy to unambiguously distinguish |ψ1⟩ from the set {|ψ2⟩, |ψ3⟩}. For unambiguous filtering the subsets need not be linearly independent. We briefly discuss how to construct generalized interferometers (multiports) that provide a fully linear optical implementation of the optimal strategy. [source]
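
The two-state baseline behind all of this is the Ivanovic-Dieks-Peres bound: for two equiprobable pure states, the optimal failure probability of unambiguous discrimination equals their overlap. A minimal numerical sketch of that baseline (not the N=3 filtering solution derived in the paper):

```python
import numpy as np

def idp_failure_probability(psi1, psi2):
    """Optimal failure probability for unambiguously discriminating two
    equiprobable pure states: |<psi1|psi2>| (Ivanovic-Dieks-Peres bound).
    Inputs are state vectors, normalized here for convenience."""
    psi1 = np.asarray(psi1, dtype=complex)
    psi2 = np.asarray(psi2, dtype=complex)
    psi1 = psi1 / np.linalg.norm(psi1)
    psi2 = psi2 / np.linalg.norm(psi2)
    return abs(np.vdot(psi1, psi2))  # vdot conjugates its first argument

# Orthogonal states can always be discriminated: failure probability 0.
print(idp_failure_probability([1, 0], [0, 1]))
```

The closer the overlap is to 1, the more often the measurement must return the inconclusive outcome.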

Informative-Transmission Disequilibrium Test (i-TDT): combined linkage and association mapping that includes unaffected offspring as well as affected offspring

Chao-Yu Guo
Abstract To date, there is no test valid for the composite null hypothesis of no linkage or no association that utilizes transmission information from heterozygous parents to their unaffected offspring as well as the affected offspring from ascertained nuclear families. Since the unaffected siblings also provide information about linkage and association, we introduce a new strategy called the informative-transmission disequilibrium test (i-TDT), which uses transmission information from heterozygous parents to all of the affected and unaffected offspring in ascertained nuclear families and provides a valid chi-square test for both linkage and association. The i-TDT can be used in various study designs and can accommodate all types of independent nuclear families with at least one affected offspring. We show that the transmission/disequilibrium test (TDT) (Spielman et al. [1993] Am. J. Hum. Genet. 52:506-516) is a special case of the i-TDT, if the study sample contains only case-parent trios. If the sample contains only affected and unaffected offspring without parental genotypes, the i-TDT is equivalent to the sibship disequilibrium test (SDT) (Horvath and Laird [1998] Am. J. Hum. Genet. 63:1886-1897). In addition, the test statistic of the i-TDT is simple, explicit and can be implemented easily without intensive computing. Through computer simulations, we demonstrate that the power of the i-TDT can be higher in many circumstances compared to a method that uses affected offspring only. Applying the i-TDT to the Framingham Heart Study data, we found that the apolipoprotein E (APOE) gene is significantly linked and associated with cross-sectional measures and longitudinal changes in total cholesterol. Genet. Epidemiol. © 2006 Wiley-Liss, Inc. [source]
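
The abstract does not spell out the i-TDT statistic itself, but its cited special case, the Spielman TDT for case-parent trios, is a McNemar-type chi-square on transmission counts and can be sketched as follows (a minimal illustration with made-up counts, not the authors' implementation):

```python
def tdt_statistic(b, c):
    """Classic TDT chi-square (1 degree of freedom).
    b = transmissions of the test allele from heterozygous parents
        to affected offspring; c = non-transmissions.
    Under the null of no linkage/association, b and c are equally likely."""
    if b + c == 0:
        raise ValueError("no informative (heterozygous-parent) transmissions")
    return (b - c) ** 2 / (b + c)

# Hypothetical counts: 60 transmissions vs. 40 non-transmissions.
print(tdt_statistic(60, 40))  # 4.0, above the 3.84 critical value (chi-square, 1 df, 5%)
```

The i-TDT extends this idea by also scoring transmissions to unaffected offspring, which is what makes it valid for the composite null described above.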

Analysis of multilocus models of association

B. Devlin
Abstract It is increasingly recognized that multiple genetic variants, within the same or different genes, combine to affect liability for many common diseases. Indeed, the variants may interact among themselves and with environmental factors. Thus realistic genetic/statistical models can include an extremely large number of parameters, and it is by no means obvious how to find the variants contributing to liability. For models of multiple candidate genes and their interactions, we prove that statistical inference can be based on controlling the false discovery rate (FDR), which is defined as the expected number of false rejections divided by the number of rejections. Controlling the FDR automatically controls the overall error rate in the special case that all the null hypotheses are true. So do more standard methods such as Bonferroni correction. However, when some null hypotheses are false, the goals of Bonferroni and FDR differ, and FDR will have better power. Model selection procedures, such as forward stepwise regression, are often used to choose important predictors for complex models. By analysis of simulations of such models, we compare a computationally efficient form of forward stepwise regression against the FDR methods. We show that model selection includes numerous genetic variants having no impact on the trait, whereas FDR maintains a false-positive rate very close to the nominal rate. With good control over false positives and better power than Bonferroni, the FDR-based methods we introduce present a viable means of evaluating complex, multivariate genetic models. Naturally, as for any method seeking to explore complex genetic models, the power of the methods is limited by sample size and model complexity. Genet Epidemiol 25:36-47, 2003. © 2003 Wiley-Liss, Inc. [source]
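
FDR control of the kind the authors rely on is most commonly implemented with the Benjamini-Hochberg step-up procedure. A minimal sketch with made-up p-values (not the authors' code; their paper develops FDR methods specific to multilocus models):

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg step-up procedure.
    Returns the indices of hypotheses rejected at FDR level q:
    find the largest rank k with p_(k) <= q*k/m, then reject the
    k smallest p-values."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= q * rank / m:
            k = rank
    return sorted(order[:k])

# Hypothetical p-values for eight candidate variants:
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals, q=0.05))  # [0, 1]
```

Note the contrast with Bonferroni at the same level (reject only p <= 0.05/8 = 0.00625): here BH rejects two hypotheses where Bonferroni rejects one, which is the power advantage the abstract describes.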

A Family of Location Models for Multiple-Type Discrete Dispersion

Kevin M. Curtin
One of the defining objectives in location science is to maximize dispersion. Facilities can be dispersed for a wide variety of purposes, including attempts to optimize competitive market advantage, disperse negative impacts, and optimize security. With one exception, all of the extant dispersion models consider only one type of facility, and ignore problems where multiple types of facilities must be located. We provide examples where multiple-type dispersion is appropriate and based on this develop a general class of facility location problems that optimize multiple-type dispersion. This family of models expands on the previously formulated definitions of dispersion for single types of facilities, by allowing the interactions among different types of facilities to determine the extent to which they will be spatially dispersed. We provide a set of integer-linear programming formulations for the principal models of this class and suggest a methodology for intelligent constraint elimination. We also present results of solving a range of multiple-type dispersion problems optimally and demonstrate that only the smallest versions of such problems can be solved in a reasonable amount of computer time using general-purpose optimization software. We conclude that the family of multiple-type dispersion models provides a more comprehensive, flexible, and realistic framework for locating facilities where weighted distances should be maximized, when compared with the special case of locating only a single type of facility. [source]