Equivalently



Selected Abstracts


Degrees of d.c.e. reals

MLQ - MATHEMATICAL LOGIC QUARTERLY, Issue 4-5 2004
Rod Downey
Abstract A real α is called a c.e. real if it is the halting probability of a prefix-free Turing machine. Equivalently, α is c.e. if it is left computable in the sense that L(α) = {q ∈ ℚ : q < α} is a computably enumerable set. The natural field formed by the c.e. reals turns out to be the field formed by the collection of the d.c.e. reals, which are of the form α − β, where α and β are c.e. reals. While c.e. reals can only be found in the c.e. degrees, Zheng has proven that there are Δ⁰₂ degrees that are not even n-c.e. for any n and yet contain d.c.e. reals, where a degree is n-c.e. if it contains an n-c.e. set. In this paper we will prove that every ω-c.e. degree contains a d.c.e. real, but there are ω + 1-c.e. degrees and, hence Δ⁰₂ degrees, containing no d.c.e. real. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


On ε-biased generators in NC0

RANDOM STRUCTURES AND ALGORITHMS, Issue 1 2006
Elchanan Mossel
Cryan and Miltersen (Proceedings of the 26th Mathematical Foundations of Computer Science, 2001, pp. 272–284) recently considered the question of whether there can be a pseudorandom generator in NC0, that is, a pseudorandom generator that maps n-bit strings to m-bit strings such that every bit of the output depends on a constant number k of bits of the seed. They show that for k = 3, if m ≥ 4n + 1, there is a distinguisher; in fact, they show that in this case it is possible to break the generator with a linear test, that is, there is a subset of bits of the output whose XOR has a noticeable bias. They leave the question open for k ≥ 4. In fact, they ask whether every NC0 generator can be broken by a statistical test that simply XORs some bits of the input. Equivalently, is it the case that no NC0 generator can sample an ε-biased space with negligible ε? We give a generator for k = 5 that maps n bits into cn bits, so that every bit of the output depends on 5 bits of the seed, and the XOR of every subset of the bits of the output has bias 2^{−Ω(n)}. For large values of k, we construct generators that map n bits to n^{Ω(√k)} bits such that every XOR of outputs has bias 2^{−n^{Ω(1/√k)}}. We also present a polynomial-time distinguisher for k = 4, m ≥ 24n having constant distinguishing probability. For large values of k we show that a linear distinguisher with a constant distinguishing probability exists once m ≥ Ω(2^k n^{⌈k/2⌉}). Finally, we consider a variant of the problem where each of the output bits is a degree k polynomial in the inputs. We show there exists a degree k = 2 pseudorandom generator for which the XOR of every subset of the outputs has bias 2^{−Ω(n)} and which maps n bits to Ω(n²) bits. © 2005 Wiley Periodicals, Inc. Random Struct. Alg., 2006 [source]
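Not from the paper itself, but the notion of a linear test and its bias can be made concrete by brute force on a toy "NC0" map (all names and the generator are illustrative): for a subset S of output bits, the bias is |E_seed[(−1)^(⊕_{i∈S} out_i)]|. An output bit that is an XOR of seed bits is perfectly balanced, while a single AND gate already has bias 1/2.

```python
from itertools import product

def xor_test_bias(gen, n, subset):
    """Bias |E[(-1)^XOR]| of the linear test that XORs the output
    bits indexed by `subset`, averaged over all 2^n seeds."""
    total = 0
    for seed in product((0, 1), repeat=n):
        out = gen(seed)
        parity = 0
        for i in subset:
            parity ^= out[i]
        total += 1 - 2 * parity          # (-1)^parity
    return abs(total) / 2 ** n

def toy_gen(seed):
    """Toy generator: output bit 0 reads 3 seed bits, bit 1 reads 2."""
    return [seed[0] ^ seed[1] ^ seed[2],  # balanced XOR: bias 0
            seed[0] & seed[1]]            # AND gate: bias 1/2

print(xor_test_bias(toy_gen, 6, [0]))  # 0.0
print(xor_test_bias(toy_gen, 6, [1]))  # 0.5
```

The same exhaustive check, run over every subset of outputs, is exactly what "being ε-biased" quantifies over; for real parameter sizes one of course needs the structural arguments of the paper rather than enumeration.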


Effects of endothelin-1 on portal-systemic collaterals of common bile duct-ligated cirrhotic rats

EUROPEAN JOURNAL OF CLINICAL INVESTIGATION, Issue 4 2004
C.-C. Chan
Abstract Background/Aims: Endothelin-1 (ET-1) may induce intrahepatic vasoconstriction and consequently increase portal pressure. Endothelin-1 has been shown to exert a direct vasoconstrictive effect on the collateral vessels in partially portal vein-ligated rats with a high degree of portal-systemic shunting. This study investigated the collateral vascular responses to ET-1, the receptors mediating them, and the regulation of ET-1 action by nitric oxide and prostaglandin in cirrhotic rats with a relatively low degree of portal-systemic shunting. Methods: The portal-systemic collaterals of common bile duct-ligated (BDL) cirrhotic rats were tested by in situ perfusion. The concentration–response curves of collaterals to graded concentrations of ET-1 (10⁻¹⁰–10⁻⁷ M) with or without BQ-123 (ETA receptor antagonist, 2 × 10⁻⁶ M), BQ-788 (ETB receptor antagonist, 10⁻⁷ M) or both were recorded. In addition, the collateral responses to ET-1 with preincubation of Nω-nitro-L-arginine (NNA, 10⁻⁴ M), indomethacin (INDO, 10⁻⁵ M) or their combination were assessed. Results: Endothelin-1 significantly increased the perfusion pressures of portal-systemic collaterals. The ET-1-induced constrictive effects were inhibited by BQ-123 or BQ-123 plus BQ-788, but not by BQ-788 alone. The inhibitory effect was greater in the combination group. Pretreatment with NNA or NNA plus INDO equivalently enhanced the response to ET-1, while pretreatment with INDO alone exerted no effect. Conclusion: Endothelin-1 has a direct vasoconstrictive effect on the collaterals of BDL cirrhotic rats, mainly mediated by the ETA receptor. Endogenous nitric oxide may play an important role in modulating the effects of ET-1 in the portal-systemic collaterals of BDL cirrhotic rats. [source]


Influences of the Process Chain on the Fatigue Behavior of Samples with Tension Screw Geometry

ADVANCED ENGINEERING MATERIALS, Issue 4 2010
Marcus Klein
To analyze the influence of the material batch, the structure of the manufacturing process chain, and the process parameters, four different material batches of the quenched and tempered steel SAE 4140 were used to manufacture samples with tension screw geometry. Five different manufacturing process chains, consisting of the process steps heat treatment, turning, and grinding, were applied. After selected process steps, light and SEM micrographs were taken and fatigue experiments were performed. The process itself as well as the process parameters influence the properties of the surface layers and the fatigue behavior in a characteristic manner. For example, the variation of the feed rate and cutting speed in the hard-turning process leads to significantly different mechanical properties of the surface layers and residual stress states, which could be correlated with the fatigue behavior. The cyclic deformation behavior of the investigated components can be benchmarked equivalently with stress–strain hysteresis as well as high-precision temperature and electrical resistance measurements. The temperature and electrical resistance measurements are suitable for component applications and provide a wealth of additional information about the fatigue behavior. The temperature changes of the failed areas of the samples with tension screw geometry were significantly higher, so that a reliable identification of endangered areas is possible. A new test procedure, developed at the Institute of Materials Science and Engineering of the University of Kaiserslautern, with inserted load-free states during constant amplitude loading, provides the opportunity to detect proceeding fatigue damage in components during inspections. [source]


Reducing fish losses to cormorants using artificial fish refuges: an experimental study

FISHERIES MANAGEMENT & ECOLOGY, Issue 3 2008
I. RUSSELL
Abstract The hypothesis that the introduction of artificial refuges might provide protection for fish and reduce the level of cormorant predation was tested in two paired-pond, cross-over trials during the winters of 2003 and 2004, using a 'refuge' pond and an adjacent, equivalently stocked 'control' pond. On average, there were 77% fewer cormorant visits to the refuge pond than to the control pond. There was also a 67% fall in the mean mass of fish consumed per cormorant visit and 79% less fish mass lost in the refuge pond. The results are discussed in the context of interactions between cormorants and fish and the potential use of the tool in fisheries management. [source]


Circle hooks, 'J' hooks and drop-back time: a hook performance study of the south Florida recreational live-bait fishery for sailfish, Istiophorus platypterus

FISHERIES MANAGEMENT & ECOLOGY, Issue 2 2007
E. D. PRINCE
Abstract This study evaluates the performance of two types of non-offset circle hooks (traditional and non-traditional) and a similar-sized 'J' hook commonly used in the south Florida recreational live-bait fishery for Atlantic sailfish, Istiophorus platypterus (Shaw). A total of 766 sailfish were caught off south Florida (Jupiter to Key West, FL, USA) to assess hook performance and drop-back time, which is the interval between the fish's initial strike and exertion of pressure by the fisher to engage the hook. Four drop-back intervals were examined (0–5, 6–10, 11–15 and >15 s), and hook performance was assessed in terms of proportions of successful catch, undesirable hook locations, bleeding events and undesirable release condition associated with physical hook damage and trauma. In terms of hook performance, the traditionally-shaped circle hook had the greatest conservation benefit for survival after release. In addition, this was the only hook type tested that performed well during each drop-back interval for all performance metrics. Conversely, 'J' hooks resulted in higher proportions of undesirable hook locations (as much as twofold), bleeding and fish released in undesirable condition, particularly during long drop-back intervals. Non-traditional circle hooks had performance results intermediate to the other hook types, but also had the worst performance relative to undesirable release condition during the first two drop-back intervals. Choice of hook type and drop-back interval can significantly change hook wounding, and different models of non-offset circle hooks should not be assumed to perform equivalently. [source]


String theory: exact solutions, marginal deformations and hyperbolic spaces

FORTSCHRITTE DER PHYSIK/PROGRESS OF PHYSICS, Issue 2 2007
D. Orlando
Abstract This thesis is almost entirely devoted to studying string theory backgrounds characterized by simple geometrical and integrability properties. The archetype of this type of system is given by Wess–Zumino–Witten models, describing string propagation in a group manifold or, equivalently, a class of conformal field theories with current algebras. We study the moduli space of such models by using truly marginal deformations. Particular emphasis is placed on asymmetric deformations that, together with the CFT description, enjoy a very nice spacetime interpretation in terms of the underlying Lie algebra. Then we take a slight detour so as to deal with off-shell systems. Using a renormalization-group approach we describe the relaxation towards the symmetrical equilibrium situation. In the final chapter we consider backgrounds with Ramond–Ramond fields and in particular we analyze direct products of constant-curvature spaces and find solutions with hyperbolic spaces. [source]


A cryptic lysis gene near the start of the Qβ replicase gene in the +1 frame

GENES TO CELLS, Issue 10 2004
Tohru Nishihara
The maturation/lysis (A2) protein encoded by the group B single-stranded RNA bacteriophage Qβ mediates lysis of host Escherichia coli cells. We found that a frameshift mutation in the replicase (β-subunit) gene of Qβ cDNA causes cell lysis. The mutant has a single base deletion 73 nucleotides (nt) 3′ from the start of the replicase gene, with consequent translation termination at a stop codon 129–131 nt further 3′. The 43-amino acid C-terminal part of the 67-amino acid product, encoded by what in WT (wild-type) is the +1 frame, is rich in basic amino acids. This 67-aa protein can mediate cell lysis, whose characteristics indicate that the protein may cause lysis by a different mechanism, and via a different target, than that caused by the A2 maturation/lysis protein. Synthesis of a counterpart of the newly discovered lysis product in wild-type phage infection would require a hypothetical ribosomal frameshifting event. The lysis gene of group A RNA phages is also short, 75 codons in MS2, and partially overlaps the first part of their equivalently located replicase gene, raising significant evolutionary implications for the present finding. [source]


Expression of ribosome modulation factor (RMF) in Escherichia coli requires ppGpp

GENES TO CELLS, Issue 8 2001
Kaori Izutsu
Background During the transition from the logarithmic to the stationary phase, 70S ribosomes are dimerized into the 100S form, which has no translational activity. Ribosome Modulation Factor (RMF) is induced during the stationary phase and binds to the 50S ribosomal subunit, which directs the dimerization of 70S ribosomes. Unlike many other genes induced in the stationary phase, rmf transcription is independent of σS. To identify the factors that regulate the growth phase-dependent induction of rmf, mutant strains deficient in global regulators were examined for lacZ expression directed by the rmf promoter. Results Among the mutants defective in global regulators, only ppGpp deficiency (the relA-spoT double mutant) drastically reduced the level of rmf transcription, to less than 10% of that seen in the wild-type. Neither RMF nor 100S ribosomes were detected in this mutant in the stationary phase. rmf transcription correlated well with cellular ppGpp levels during amino acid starvation, IPTG induction of Ptrc-relA455 and in other mutants with artificially increased ppGpp levels. Although the growth rate also correlated inversely with both ppGpp levels and rmf transcription, the observation that the growth rates of the ppGpp-deficient and wild-type strains varied equivalently when grown on different media indicates that the link between rmf transcription and ppGpp levels is not a function of the growth rate. Conclusions ppGpp appears to positively regulate rmf transcription, at least in vivo. Thus, RMF provides a novel negative translational control by facilitating the formation of inactive ribosome dimers (100S) under the stringent circumstances of the stationary phase. [source]


Iterative generalized cross-validation for fusing heteroscedastic data of inverse ill-posed problems

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2009
Peiliang Xu
SUMMARY The method of generalized cross-validation (GCV) has been widely used to determine the regularization parameter, because the criterion minimizes the average predicted residuals of measured data and depends solely on data. The data-driven advantage is valid only if the variance–covariance matrix of the data can be represented as the product of a given positive definite matrix and a scalar unknown noise variance. In practice, important geophysical inverse ill-posed problems have often been solved by combining different types of data. The stochastic model of measurements in this case contains a number of different unknown variance components. Although the weighting factors, or equivalently the variance components, have been shown to significantly affect joint inversion results of geophysical ill-posed problems, they have been either assumed to be known or empirically chosen. No solid statistical foundation is available yet to correctly determine the weighting factors of different types of data in joint geophysical inversion. We extend the GCV method to accommodate both the regularization parameter and the variance components. The extended version of GCV essentially consists of two steps, one to estimate the variance components by fixing the regularization parameter and the other to determine the regularization parameter by using the GCV method and by fixing the variance components. We simulate two examples: a purely mathematical integral equation of the first kind modified from the first example of Phillips (1962) and a typical geophysical example of downward continuation to recover the gravity anomalies on the surface of the Earth from satellite measurements. Based on the two simulated examples, we extensively compare the iterative GCV method with existing methods; the comparisons show that the method works well to correctly recover the unknown variance components and determine the regularization parameter.
In other words, our method lets the data speak for themselves, decide the correct weighting factors of different types of geophysical data, and determine the regularization parameter. In addition, we derive an unbiased estimator of the noise variance by correcting the biases of the regularized residuals. A simplified formula to save computation time is also given. The two new estimators of the noise variance are compared with six existing methods through numerical simulations. The simulation results show that the two new estimators perform as well as Wahba's estimator for highly ill-posed problems and outperform the existing methods for moderately ill-posed problems. [source]
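The single-variance GCV criterion that the authors extend can be sketched in a few lines. This is a generic Tikhonov/ridge illustration under assumptions of ours (one noise variance, identity regularization matrix), not the authors' iterative variance-component scheme: GCV picks the λ minimizing n‖(I − H(λ))y‖² / tr(I − H(λ))², where H(λ) = X(XᵀX + λI)⁻¹Xᵀ is the influence matrix.

```python
import numpy as np

def gcv_score(X, y, lam):
    """Ordinary GCV score for Tikhonov-regularized least squares."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)  # influence matrix
    resid = y - H @ y
    return n * (resid @ resid) / np.trace(np.eye(n) - H) ** 2

def gcv_choose(X, y, lambdas):
    """Pick the regularization parameter with the smallest GCV score."""
    return min(lambdas, key=lambda lam: gcv_score(X, y, lam))

# synthetic well-conditioned demo problem
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))
beta = np.zeros(8); beta[:2] = 3.0
y = X @ beta + 0.5 * rng.standard_normal(50)
lam = gcv_choose(X, y, [10.0 ** k for k in range(-6, 4)])
```

The extended method of the paper wraps a loop around this: estimate the variance components with λ fixed, rescale the data weights, then re-minimize the GCV score, iterating to convergence.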


Energy Group optimization for forward and inverse problems in nuclear engineering: application to downwell-logging problems

GEOPHYSICAL PROSPECTING, Issue 2 2006
Elsa Aristodemou
ABSTRACT Simulating radiation transport of neutral particles (neutrons and γ-ray photons) within subsurface formations has been an area of research in the nuclear well-logging community since the 1960s, with many researchers exploiting existing computational tools already available within the nuclear reactor community. Deterministic codes became a popular tool, with the radiation transport equation being solved using a discretization of phase-space of the problem (energy, angle, space and time). The energy discretization in such codes is based on the multigroup approximation, or equivalently the discrete finite-difference energy approximation. One of the uncertainties, therefore, of simulating radiation transport problems has become the multigroup energy structure. The nuclear reactor community has tackled the problem by optimizing existing nuclear cross-sectional libraries using a variety of group-collapsing codes, whilst the nuclear well-logging community has relied, until now, on libraries used in the nuclear reactor community. However, although the utilization of such libraries has been extremely useful in the past, it has also become clear that a larger number of energy groups were available than was necessary for the well-logging problems. It was obvious, therefore, that a multigroup energy structure specific to the needs of the nuclear well-logging community needed to be established. This would have the benefit of reducing computational time (the ultimate aim of this work) for both the stochastic and deterministic calculations, since computational time increases with the number of energy groups. We therefore present in this study two methodologies that enable the optimization of any multigroup neutron–γ energy structure. Although we test our theoretical approaches on nuclear well-logging synthetic data, the methodologies can be applied to other radiation transport problems that use the multigroup energy approximation.
The first approach considers the effect of collapsing the neutron groups by solving the forward transport problem directly using the deterministic code EVENT, and obtaining neutron and γ-ray fluxes deterministically for the different group-collapsing options. The best collapsing option is chosen as the one which minimizes the effect on the γ-ray spectrum. During this methodology, parallel processing is implemented to reduce computational times. The second approach uses the uncollapsed output from neural network simulations in order to estimate the new, collapsed fluxes for the different collapsing cases. Subsequently, an inversion technique is used which calculates the properties of the subsurface, based on the collapsed fluxes. The best collapsing option is chosen as the one that predicts the subsurface properties with a minimal error. The fundamental difference between the two methodologies relates to their effect on the generated γ-rays. The first methodology takes the generation of γ-rays fully into account by solving the transport equation directly. The second methodology assumes that the reduction of the neutron groups has no effect on the γ-ray fluxes. It does, however, utilize an inversion scheme to predict the subsurface properties reliably, and it looks at the effect of collapsing the neutron groups on these predictions. Although the second procedure is favoured because of (a) the speed with which a solution can be obtained and (b) the application of an inversion scheme, its results need to be validated against a physically more stringent methodology. A comparison of the two methodologies is therefore given. [source]
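The baseline operation being optimized here, merging fine energy groups into broad ones, is conventionally done by flux-weighting the group cross-sections: σ_G = Σ_{g∈G} σ_g φ_g / Σ_{g∈G} φ_g. A minimal sketch, with made-up group structure and numbers (not the paper's data or its optimization step):

```python
def collapse_groups(sigma, flux, broad_groups):
    """Flux-weighted collapse of fine-group cross-sections `sigma`
    (with scalar fluxes `flux`) into broad groups; each entry of
    `broad_groups` lists the fine-group indices it absorbs."""
    collapsed = []
    for fine in broad_groups:
        phi = sum(flux[g] for g in fine)
        collapsed.append(sum(sigma[g] * flux[g] for g in fine) / phi)
    return collapsed

# 4 fine groups collapsed into 2 broad groups
sigma = [2.0, 4.0, 6.0, 8.0]
flux = [1.0, 1.0, 2.0, 2.0]
print(collapse_groups(sigma, flux, [[0, 1], [2, 3]]))  # [3.0, 7.0]
```

The paper's two methodologies can then be read as different ways of scoring candidate `broad_groups` partitions: by the perturbation of the γ-ray spectrum in the forward solve, or by the error of the inverted subsurface properties.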


A new traffic model for backbone networks and its application to performance analysis

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 6 2008
Ming Yu
Abstract In this paper, we present a new traffic model constructed from a random number of shifting level processes (SLPs) aggregated over time, in which the lengths of the active periods of the SLPs follow a Pareto or truncated Pareto distribution. For both cases, the model has been proved to be asymptotically second-order self-similar. However, based on extensive traffic data we collected from a backbone network, we find that the active periods of the constituting SLPs are better approximated by a truncated Pareto distribution than by the Pareto distribution assumed in existing traffic model constructions. The queueing problem of a single server fed with traffic described by the model is equivalently converted to a problem with traffic described by Norros' model. For the tail probability of the queue length distribution, an approximate expression and an upper bound have been found in terms of large deviation estimates; these are mathematically more tractable than existing results. The effectiveness of the traffic model and the performance results is demonstrated by our simulations and experimental studies on a backbone network. Copyright © 2007 John Wiley & Sons, Ltd. [source]
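The truncated Pareto active periods the authors fit can be sampled by inverting the truncated CDF F(x) = (1 − (L/x)^α) / (1 − (L/H)^α) on [L, H]. A small sketch with illustrative parameter values (not the paper's fitted ones):

```python
import random

def trunc_pareto_sample(alpha, lo, hi, rng):
    """Inverse-CDF draw from a Pareto(alpha, lo) truncated at hi."""
    u = rng.random()
    c = 1.0 - (lo / hi) ** alpha      # probability mass kept below hi
    return lo * (1.0 - u * c) ** (-1.0 / alpha)

rng = random.Random(42)
# active-period lengths for the aggregated on/off (SLP) sources
periods = [trunc_pareto_sample(1.2, 1.0, 1000.0, rng) for _ in range(10_000)]
```

Aggregating many on/off sources with such heavy-tailed active periods is what produces the asymptotic second-order self-similarity the model is built on; the truncation at H is what distinguishes the fitted backbone data from the pure Pareto case.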


A quasi-planar incident wave excitation for time-domain scattering analysis of periodic structures

INTERNATIONAL JOURNAL OF NUMERICAL MODELLING: ELECTRONIC NETWORKS, DEVICES AND FIELDS, Issue 5 2006
David Degerfeldt
Abstract We present a quasi-planar incident wave excitation for time-domain scattering analysis of periodic structures. It uses a particular superposition of plane waves that yields an incident wave with the same periodicity as the periodic structure itself. The duration of the incident wave is controlled by means of its frequency spectrum or, equivalently, the angular spread in its constituting plane waves. Accuracy and convergence properties of the method are demonstrated by scattering computations for a planar dielectric half-space. Equipped with the proposed source, a time-domain solver based on linear elements yields an error of roughly 1% for a resolution of 20 points per wavelength and second-order convergence is achieved for smooth scatterers. Computations of the scattering characteristics for a sinusoidal surface and a random rough surface show similar performance. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Broken symmetry approach and chemical susceptibility of carbon nanotubes

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 8 2010
Elena F. Sheka
Abstract Constituting the part of the odd electrons that is excluded from covalent bonding, effectively unpaired electrons arise from the singlet instability of the single-determinant broken spin-symmetry unrestricted Hartree–Fock (UBS HF) SCF solution. The correct determination of the total number of effectively unpaired electrons ND, and of its fraction NDA on each atom A, is well provided by the UBS HF solution. The NDA value is offered as a quantifier of atomic chemical susceptibility (or, equivalently, reactivity), thus highlighting the targets that are the most favorable for addition reactions of any type. The approach is illustrated for two families involving fragments of armchair (n, n) and zigzag (m, 0) single-walled nanotubes differing in length and end structure. Short and long tubes, as well as tubes with capped ends and open ends, in the latter case both hydrogen-terminated and empty, are considered. Algorithms for the quantitative description of tubes of any length are suggested. © 2009 Wiley Periodicals, Inc. Int J Quantum Chem, 2010 [source]


New perspectives on the fundamental theorem of density functional theory

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 15 2008
Xiao-Yin Pan
Abstract The fundamental theorem of time-independent/time-dependent density functional theory due to Hohenberg–Kohn (HK)/Runge–Gross (RG) proves the bijectivity between the density ρ(r)/ρ(rt) and the Hamiltonian Ĥ/Ĥ(t) to within a constant C/function C(t), and wave function Ψ/Ψ(t). The theorems are each proved for scalar external potential energy operators. By a unitary or, equivalently, a gauge transformation that preserves the density, we generalize the realm of validity of each theorem to Hamiltonians that additionally include the momentum operator and a curl-free vector potential energy operator defined in terms of a gauge function α(R)/α(Rt). The original HK/RG theorems then each constitute a special case of this generalization. Thereby, a fourfold hierarchy of such theorems is established. As a consequence of the generalization, the wave function Ψ/Ψ(t) is shown to be a functional of both the density ρ(r)/ρ(rt), which is a gauge-invariant property, and a gauge function α(R)/α(Rt). The functional dependence on the gauge function ensures that, as required by quantum mechanics, the wave function written as a functional is gauge variant. The hierarchy and the dependence of the wave function functional on the gauge function thus enhance the significance of the phase factor in density functional theory in a manner similar to that of quantum mechanics. Various additional perspectives on the theorem are arrived at. These understandings also address past critiques of time-dependent theory. © 2008 Wiley Periodicals, Inc. Int J Quantum Chem, 2008 [source]


Quasilocal defects in regular planar networks: Categorization for molecular cones

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 4-5 2003
D. J. Klein
Abstract Graphical networks are cast into structural equivalence classes, with special focus on ones related to two-dimensional regular translationally symmetric nets (or lattices). A quasilocal defect in a regular net is defined as consisting of a finite subnet surrounded outside this region by an infinitely extended network of which arbitrary, simply connected regions are isomorphic to those of the regular undefected net. The global equivalence classes for such quasilocal defects are identified by a "circum-matching" characteristic. One or more such classes are identified as a "turn" number q, or equivalently as a discrete "combinatorial curvature" κ, which associates closely to the geometric Gaussian curvature of "physically reasonable" embeddings of the net in Euclidean space. Then for positive κ, geometric cones result; for κ = 0, the network is flat overall; and for negative κ, fluted or crenelated cones result. As κ or q varies through its discrete range, the number of defect classes varies between 1 and ∞ and repeats with a period depending on the parent regular net. For the square-planar net, the numbers of defect classes at succeeding turn numbers (q) starting at q = 0 are ∞, 2, 3, 2, repeating with a period of 4. For the hexagonal and triangular nets, the numbers of classes at succeeding q starting at q = 0 are ∞, 1, 2, 2, 2, 1, repeating with a period of 6. A further refinement of the classes of quasilocal defects breaks these classes up into "irrotational" subclasses, as are relevant for multiwall cones. The subclasses are identified via a "quasispin" characteristic, which is conveniently manipulatable for the categorization of multiwall cones. Besides the development of the overall comprehensive topo-combinatoric categorization scheme for quasilocal defects, some consequences are briefly indicated, and combining rules for the characteristics of pairs of such defects are briefly considered. © 2003 Wiley Periodicals, Inc. Int J Quantum Chem, 2003 [source]
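For the hexagonal net the turn number has a familiar geometric reading (the standard carbon-nanocone relation, not a construction specific to this paper): a defect of turn number q removes q wedges of 60° from the flat sheet, and rolling the remainder into a cone gives an apex half-angle of arcsin(1 − q/6).

```python
import math

def apex_half_angle(q, period=6):
    """Apex half-angle (radians) of the cone obtained by deleting q
    wedges of (360/period) degrees from a flat net; for the hexagonal
    net period = 6, so sin(half_angle) = 1 - q/6."""
    return math.asin(1.0 - q / period)

for q in range(6):
    print(q, round(math.degrees(apex_half_angle(q)), 2))
```

q = 0 reproduces the flat sheet (half-angle 90°), and increasing q gives progressively sharper cones, matching the positive-κ branch described in the abstract.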


Necessary and sufficient conditions for solving consensus problems of double-integrator dynamics via sampled control

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 15 2010
Huiyang Liu
Abstract In this paper, consensus problems for double-integrator dynamics under sampled control are investigated. The sampled control protocol is derived from a continuous-time linear consensus protocol by periodic sampling and a zero-order hold. With the obtained sampled control protocol, the continuous-time multi-agent system is equivalently transformed into a linear discrete-time system. Necessary and sufficient conditions are given to guarantee that all the agents asymptotically travel with zero relative positions and common velocities. Furthermore, the consensus problem with the continuous-time consensus protocol is re-analyzed. A necessary and sufficient condition is also obtained, which is consistent with the special case as the sampling period tends to zero. The effectiveness of these algorithms is demonstrated through simulations. Copyright © 2009 John Wiley & Sons, Ltd. [source]
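A toy version of the setup can be simulated directly. The protocol form and gains below are illustrative assumptions, not the paper's conditions: each agent applies u = −Lx − γLv computed at sampling instants and held (ZOH), and between samples the double integrator is integrated exactly under the constant input.

```python
import numpy as np

def sampled_consensus(L, x0, v0, T=0.1, gamma=1.0, steps=2000):
    """Double-integrator agents under a sampled consensus protocol:
    u = -L x - gamma L v is computed every T seconds and held (ZOH);
    the double integrator is discretized exactly for constant u."""
    x, v = np.asarray(x0, float).copy(), np.asarray(v0, float).copy()
    for _ in range(steps):
        u = -L @ x - gamma * (L @ v)
        x = x + T * v + 0.5 * T * T * u   # exact position update under ZOH
        v = v + T * u                     # exact velocity update under ZOH
    return x, v

# complete graph on 3 agents (Laplacian), zero-mean initial velocities
L = np.array([[2.0, -1.0, -1.0], [-1.0, 2.0, -1.0], [-1.0, -1.0, 2.0]])
x, v = sampled_consensus(L, [1.0, -2.0, 4.0], [0.3, -0.1, -0.2])
```

With this sampling period the discrete closed loop is stable: positions agree, velocities vanish, and (since 1ᵀL = 0) the average position is preserved exactly. For larger T the same loop can go unstable, which is exactly why the paper's necessary and sufficient conditions involve the sampling period.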


Robust quadratic performance for time-delayed uncertain linear systems

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 2 2003
Fen Wu
Abstract In this paper, the analysis and control synthesis problems are studied for a general class of uncertain linear systems with variable time delay. It is assumed that the structured time-varying parametric uncertainties enter the system state-space description in a linear fractional fashion. The generic quadratic performance metric encompasses many types of dynamic system performance measures. In the context of delay-independent stability, it is shown that the analysis and state-feedback synthesis problems for such time-delayed uncertain systems can be formulated equivalently as linear matrix inequality (LMI) optimization problems using the mechanism of full block multipliers. However, the solvability condition for the output-feedback problem is given as a bilinear matrix inequality (BMI), which leads to a non-convex optimization problem. A numerical example is provided to demonstrate the advantages of the newly proposed control synthesis conditions for time-delayed uncertain systems over existing approaches. Copyright © 2002 John Wiley & Sons, Ltd. [source]


An intelligent control concept for formation flying satellites

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 2-3 2002
S. R. Vadali
Abstract This paper deals with the determination of initial conditions and the design of fuel-balancing orbit control laws for a formation of satellites. Hill's equations describe the linearized dynamics of relative motion between two satellites. They admit bounded relative orbit solutions as special cases. Predictably, these bounded solutions break down in the presence of nonlinearities and perturbations. A method for determining the initial conditions that result in quasi-periodic relative orbits over the short term, in the presence of the J2 perturbation, is presented. The control acceleration, or equivalently the fuel, required to cancel the perturbation on a satellite depends upon its orbital inclination with respect to that of the reference satellite. An intelligent control concept that exploits the physics of the relative motion dynamics is presented. Analysis shows that this concept minimizes the total fuel consumption of the formation and maintains equal, average fuel consumption for each satellite. The concept is implemented using a novel, disturbance accommodating control design process. Numerical simulations and analytical results are in excellent agreement with each other. Copyright © 2002 John Wiley & Sons, Ltd. [source]
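The bounded special solutions mentioned above can be checked numerically. Hill's (Clohessy–Wiltshire) equations read ẍ = 3n²x + 2nẏ, ÿ = −2nẋ, z̈ = −n²z, and the standard drift-free choice is ẏ₀ = −2n x₀. The sketch below (mean motion and initial state are illustrative values, not from the paper) propagates one orbital period with RK4 and recovers a closed relative orbit.

```python
import math

def cw_derivs(s, n):
    """Hill/Clohessy-Wiltshire relative-motion dynamics about a circular
    reference orbit with mean motion n: x radial, y along-track, z cross-track."""
    x, y, z, vx, vy, vz = s
    return (vx, vy, vz,
            3.0 * n * n * x + 2.0 * n * vy,
            -2.0 * n * vx,
            -n * n * z)

def rk4_step(s, n, dt):
    k1 = cw_derivs(s, n)
    k2 = cw_derivs([a + 0.5 * dt * b for a, b in zip(s, k1)], n)
    k3 = cw_derivs([a + 0.5 * dt * b for a, b in zip(s, k2)], n)
    k4 = cw_derivs([a + dt * b for a, b in zip(s, k3)], n)
    return [a + dt / 6.0 * (p + 2 * q + 2 * r + w)
            for a, p, q, r, w in zip(s, k1, k2, k3, k4)]

n = 0.001                        # rad/s, illustrative LEO-ish mean motion
s = [1.0, 0.0, 0.0, 0.0, -2.0 * n, 0.0]   # drift-free: vy0 = -2 n x0
dt = (2 * math.pi / n) / 2000    # one orbital period in 2000 steps
for _ in range(2000):
    s = rk4_step(s, n, dt)
```

Perturbing ẏ₀ away from −2n x₀ introduces the secular along-track drift whose cancellation (under J2 and nonlinearities) is what the paper's initial-condition selection and fuel-balancing control address.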


Low-complexity unambiguous acquisition methods for BOC-modulated CDMA signals

INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 6 2008
Elena Simona Lohan
Abstract The new M-code signals of GPS and the signals proposed for the future Galileo systems are of split-spectrum type, where the pseudorandom (PRN) code is multiplied with rectangular sub-carriers in one or several stages. Sine and cosine binary-offset-carrier (BOC) modulations are examples of modulations which split the signal spectrum and create ambiguities in the envelope of the autocorrelation function (ACF) of the modulated signals. Thus, the acquisition of split-spectrum signals, based on the ambiguous ACF, poses some challenges, which might be overcome at the expense of higher complexity (e.g. by decreasing the step in searching the timing hypotheses). Recently, two techniques that deal with the ambiguities of the ACF have been proposed, and they were referred to as 'sideband (SB) techniques' (by Betz, Fishman et al.) or 'BPSK-like' techniques (by Martin, Heiries et al.), since they use SB correlation channels and the obtained ACF looks similar to the ACF of a BPSK-modulated PRN code. These techniques allow the use of a higher search step compared with the ambiguous ACF situation. However, both these techniques use SB-selection filters and modified reference PRN codes at the receivers, which affect the implementational complexity. Moreover, the 'BPSK-like' techniques have been so far studied for even BOC-modulation orders (i.e. integer ratio between the sub-carrier frequency and the chip rate) and they fail to work for odd BOC-modulation orders (or equivalently for split-spectrum signals with significant zero-frequency content). We propose here three reduced-complexity methods that remove the ambiguities of the ACF of the split-spectrum signals and work for both even and odd BOC-modulation orders. Two of the proposed methods are extensions of the previously mentioned techniques, and the third one is introduced by the authors and called the unsuppressed adjacent lobes (UAL) technique.
We justify the choice of the parameters of the proposed methods via theoretical analysis, and we compare the alternative methods in terms of complexity and performance. Copyright © 2008 John Wiley & Sons, Ltd. [source]
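The ambiguity that motivates all of these techniques can be seen directly in the autocorrelation function: a sine-BOC(1,1) subcarrier introduces a deep negative side lobe a half chip away from the main peak, which a plain BPSK-modulated code does not have. The sketch below (hypothetical parameters: a random 1023-chip ±1 code standing in for a real PRN sequence, 10 samples per chip) illustrates that ambiguity numerically; it illustrates the problem, not the proposed acquisition methods.

```python
import random

random.seed(1)
N_CHIPS, OS = 1023, 10            # code length, samples per chip
HALF = OS // 2                    # half-chip lag in samples

# Random +/-1 code (stand-in for a real GPS/Galileo PRN code)
code = [random.choice((-1, 1)) for _ in range(N_CHIPS)]

# BPSK waveform: the code value is held constant over each chip
bpsk = [code[n // OS] for n in range(N_CHIPS * OS)]

# Sine-BOC(1,1): each chip is multiplied by a square subcarrier
# (+1 over the first half of the chip, -1 over the second half)
boc = [code[n // OS] * (1 if n % OS < HALF else -1) for n in range(N_CHIPS * OS)]

def circ_acf(s, lag):
    """Normalized circular autocorrelation at an integer sample lag."""
    M = len(s)
    return sum(s[n] * s[(n + lag) % M] for n in range(M)) / M

acf_bpsk_half = circ_acf(bpsk, HALF)   # ~ +0.5: smooth triangular ACF
acf_boc_half = circ_acf(boc, HALF)     # ~ -0.5: ambiguous BOC side lobe
```

For the BPSK code the ACF decays monotonically from the main peak, so a half-chip search step is safe; the BOC waveform instead shows a large negative lobe at the half-chip offset, which is exactly the ambiguity that forces a finer search step or a sideband/'BPSK-like' pre-processing stage.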


Bayesian Hypothesis Testing: a Reference Approach

INTERNATIONAL STATISTICAL REVIEW, Issue 3 2002
José M. Bernardo
Summary For any probability model M = {p(x|θ, λ), θ ∈ Θ, λ ∈ Λ} assumed to describe the probabilistic behaviour of data x ∈ X, it is argued that testing whether or not the available data are compatible with the hypothesis H0 = {θ = θ0} is best considered as a formal decision problem on whether to use (a0), or not to use (a1), the simpler probability model (or null model) M0 = {p(x|θ0, λ), λ ∈ Λ}, where the loss difference L(a0, θ, λ) − L(a1, θ, λ) is proportional to the amount of information δ(θ0, θ, λ) which would be lost if the simplified model M0 were used as a proxy for the assumed model M. For any prior distribution π(θ, λ), the appropriate normative solution is obtained by rejecting the null model M0 whenever the corresponding posterior expectation ∫∫ δ(θ0, θ, λ) π(θ, λ|x) dθ dλ is sufficiently large. Specification of a subjective prior is always difficult, and often polemical, in scientific communication. Information theory may be used to specify a prior, the reference prior, which only depends on the assumed model M, and mathematically describes a situation where no prior information is available about the quantity of interest. The reference posterior expectation, d(θ0, x) = ∫ δ π(δ|x) dδ, of the amount of information δ(θ0, θ, λ) which could be lost if the null model were used, provides an attractive nonnegative test function, the intrinsic statistic, which is invariant under reparametrization. The intrinsic statistic d(θ0, x) is measured in units of information, and it is easily calibrated (for any sample size and any dimensionality) in terms of some average log-likelihood ratios. The corresponding Bayes decision rule, the Bayesian reference criterion (BRC), indicates that the null model M0 should only be rejected if the posterior expected loss of information from using the simplified model M0 is too large or, equivalently, if the associated expected average log-likelihood ratio is large enough. 
The BRC criterion provides a general reference Bayesian solution to hypothesis testing which does not assume a probability mass concentrated on M0 and, hence, it is immune to Lindley's paradox. The theory is illustrated within the context of multivariate normal data, where it is shown to avoid Rao's paradox on the inconsistency between univariate and multivariate frequentist hypothesis testing. Résumé (translated from the French): For a probability model M = {p(x|θ, λ), θ ∈ Θ, λ ∈ Λ} assumed to describe the probabilistic behaviour of data x ∈ X, we argue that testing whether the data are compatible with a hypothesis H0 = {θ = θ0} should be treated as a decision problem concerning the use of the model M0 = {p(x|θ0, λ), λ ∈ Λ}, with a loss function that measures the amount of information that may be lost if the simplified model M0 is used as an approximation to the true model M. The expected loss, computed with respect to a suitable reference prior, provides a relevant test statistic, the intrinsic statistic d(θ0, x), which is invariant under reparametrization. The intrinsic statistic d(θ0, x) is measured in units of information, and its calibration, which is independent of the sample size and of the dimension of the parameter, does not depend on its sampling distribution. The corresponding Bayes rule, the Bayesian reference criterion (BRC), indicates that H0 should be rejected only if the expected posterior loss of information from using the simplified model M0 is too large. The BRC criterion provides a general and objective Bayesian solution for precise hypothesis testing which does not require a Dirac mass concentrated on M0 and consequently escapes Lindley's paradox. The theory is illustrated in the context of multivariate normal variables, where it is shown to avoid Rao's paradox on the inconsistency between univariate and multivariate tests. [source]
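In the simplest case of a normal mean with known variance, the intrinsic statistic has a closed form: with z = √n(x̄ − θ0)/σ, the reference posterior expectation of the information loss reduces to d(θ0, x) = (1 + z²)/2. The sketch below works through that special case (following Bernardo's normal-mean example; the numbers are made up for illustration).

```python
import math

def intrinsic_statistic(xbar, theta0, sigma, n):
    """d(theta0, x) = (1 + z^2)/2 for a normal mean with known sigma.

    z is the classical standardized distance sqrt(n)*(xbar - theta0)/sigma;
    d is measured in natural units of information (nats).
    """
    z = math.sqrt(n) * (xbar - theta0) / sigma
    return 0.5 * (1.0 + z * z)

# Perfect agreement (z = 0) still costs 1/2 nat: the null is never
# "confirmed", only found compatible with the data.
d0 = intrinsic_statistic(xbar=0.0, theta0=0.0, sigma=1.0, n=25)

# A 3-sigma discrepancy gives d = (1 + 9)/2 = 5 nats, which on the BRC
# calibration already counts as strong evidence against H0.
d3 = intrinsic_statistic(xbar=0.6, theta0=0.0, sigma=1.0, n=25)
```

Note how the statistic depends on the data only through the familiar z-value, which is what makes its calibration in terms of average log-likelihood ratios possible for any sample size.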


A Study on Singaporeans' Perceptions of Sexual Harassment From a Cross-Cultural Perspective,

JOURNAL OF APPLIED SOCIAL PSYCHOLOGY, Issue 4 2005
Shu Li
This paper addresses the question of whether culture and language in Singapore affect the interpretation of sexual harassment; that is, whether speakers from different language and ethnic backgrounds will interpret the discourse domain of sexual harassment differently. Three studies constitute this research. The first study investigates whether certain cues relating to sexual harassment are judged equivalently across the ethnic groups. The second study examines how verbal space is conceptualized and governed by the different languages used by the different ethnic groups. The third study explores whether English, as a medium of communication, is a low-context language. Results show that different ethnic groups perceived the cues differently; that ethnicity affects the interpretation of a single English phrase; and that English as used by Singaporeans is a high-context language, which complicates the understanding of victims' coping responses. [source]


Effect of substrate size on immunoinhibition of amylase activity,

JOURNAL OF CLINICAL LABORATORY ANALYSIS, Issue 2 2001
Ilka Warshawsky
Abstract Immunoinhibition assays are hypothesized to work by antibodies blocking substrate access to enzyme active sites. To test this hypothesis, the inhibition of amylase isoenzymes by monoclonal and polyclonal antisera was assessed using substrates of varying sizes: chromogenic substrates 3, 5, or 7 glucose units in length, novel synthetic macromolecular substrates, and starch. The synthetic macromolecular substrates consisted of small oligosaccharide substrates linked to an inert polymer that conferred a large size on the substrate molecules, as determined by gel filtration chromatography. As substrate size increased, amylase activity could be inhibited equivalently by antibody concentrations that were 10-fold lower. Progressively less polyclonal serum was required to inhibit amylase activity as substrate length increased from 3 to 5 to 7 glucose units and as size was further increased by linkage to a polymer. Different effects of substrate size were observed with two monoclonal antibodies. One monoclonal antibody blocked amylase activity independently of substrate size, while the other had little inhibitory effect except when starch was used as the substrate. We conclude that the use of larger substrates can expand the repertoire of inhibitory epitopes on enzymes and convert a noninhibitory antibody into an inhibitory one. J. Clin. Lab. Anal. 15:64–70, 2001. [source]


Trading crossings for handles and crosscaps

JOURNAL OF GRAPH THEORY, Issue 4 2001
Dan Archdeacon
Abstract Let c_k = cr_k(G) denote the minimum number of edge crossings when a graph G is drawn on an orientable surface of genus k. The (orientable) crossing sequence c0, c1, c2, … encodes the trade-off between adding handles and decreasing crossings. We focus on sequences of the type c0 > c1 > c2 = 0; equivalently, we study the planar and toroidal crossing numbers of doubly-toroidal graphs. For every ε > 0 we construct graphs whose orientable crossing sequence satisfies c1/c0 > 5/6 − ε. In other words, we construct graphs where the addition of one handle can save roughly 1/6th of the crossings, but the addition of a second handle can save five times more crossings. We similarly define the non-orientable crossing sequence c̃0, c̃1, c̃2, … for drawings on non-orientable surfaces. We show that for every c̃0 > c̃1 > 0 there exists a graph with non-orientable crossing sequence c̃0, c̃1, 0. We conjecture that every strictly-decreasing sequence of non-negative integers can be both an orientable crossing sequence and a non-orientable crossing sequence (with different graphs). © 2001 John Wiley & Sons, Inc. J Graph Theory 38: 230–243, 2001 [source]
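A minimal concrete instance of a crossing sequence (a standard textbook example, not one of the constructions from the paper): K5 is non-planar with planar crossing number 1, but it embeds on the torus, so a single handle already removes its only crossing:

```latex
% Orientable crossing sequence of K_5:
%   c_0 = \mathrm{cr}_0(K_5) = 1 \quad (\text{one crossing is forced in the sphere}),
%   c_1 = \mathrm{cr}_1(K_5) = 0 \quad (K_5 \text{ embeds on the torus}),
% so the sequence is
c_0, c_1, c_2, \dots \;=\; 1, 0, 0, \dots
```

The paper's constructions are interesting precisely because, unlike this example, the first handle helps only a little while the second finishes the job.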


Obstruction sets for outer-cylindrical graphs

JOURNAL OF GRAPH THEORY, Issue 1 2001
Dan Archdeacon
Abstract A graph is outer-cylindrical if it embeds in the sphere so that there are two distinct faces whose boundaries together contain all the vertices. The class of outer-cylindrical graphs is closed under minors. We give the complete set of 38 minor-minimal non-outer-cylindrical graphs or, equivalently, an excluded-minor characterization of outer-cylindrical graphs. We also give the obstruction sets under the related topological ordering and YΔ-ordering. © 2001 John Wiley & Sons, Inc. J Graph Theory 38: 42–64, 2001 [source]
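The shape of such a characterization is familiar from the simpler one-face case (a classical result, cited here only as an analogy): a graph is outerplanar, i.e. embeds in the plane with every vertex on the outer face, exactly when it avoids two minors. The result above plays the same role for the two-face, outer-cylindrical analogue, with 38 obstructions instead of 2:

```latex
G \text{ is outerplanar} \iff G \text{ has neither a } K_4 \text{ nor a } K_{2,3} \text{ minor}.
```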


Why Has U.S. Inflation Become Harder to Forecast?

JOURNAL OF MONEY, CREDIT AND BANKING, Issue 2007
JAMES H. STOCK
Keywords: Phillips curve; trend-cycle model; moving average; great moderation. We examine whether the U.S. rate of price inflation has become harder to forecast and, to the extent that it has, what changes in the inflation process have made it so. The main finding is that the univariate inflation process is well described by an unobserved-component trend-cycle model with stochastic volatility or, equivalently, an integrated moving average process with time-varying parameters. This model explains a variety of recent univariate inflation forecasting puzzles and begins to explain some multivariate inflation forecasting puzzles as well. [source]
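The equivalence invoked here is, with constant parameters, the classical one between a trend-cycle (local level) model and an IMA(1,1): if y_t = τ_t + ε_t with random-walk trend τ_t = τ_{t−1} + η_t, then Δy_t is MA(1) with lag-1 autocorrelation −1/(q + 2), where q = σ_η²/σ_ε² is the signal-to-noise ratio. The stdlib-only sketch below (made-up parameter values) checks this by simulation; the paper's model additionally lets the variances drift over time (stochastic volatility), which is what makes the IMA parameters time-varying.

```python
import math
import random

random.seed(42)
sigma_eta, sigma_eps = 1.0, 1.0        # trend and cycle innovation s.d.
q = (sigma_eta / sigma_eps) ** 2       # signal-to-noise ratio

# Implied IMA(1,1) coefficient: Delta y_t = u_t + theta*u_{t-1}, where theta
# is the root of rho_1 = theta/(1 + theta^2) = -1/(q + 2) lying in (-1, 0).
theta = (math.sqrt((q + 2.0) ** 2 - 4.0) - (q + 2.0)) / 2.0
rho1_theory = theta / (1.0 + theta * theta)

# Simulate the local level model and estimate the lag-1 autocorrelation
# of the first differences Delta y_t.
n = 200_000
tau, y = 0.0, []
for _ in range(n):
    tau += random.gauss(0.0, sigma_eta)        # random-walk trend
    y.append(tau + random.gauss(0.0, sigma_eps))  # observation = trend + cycle
dy = [y[t] - y[t - 1] for t in range(1, n)]
mean = sum(dy) / len(dy)
gamma0 = sum((d - mean) ** 2 for d in dy) / len(dy)
gamma1 = sum((dy[t] - mean) * (dy[t + 1] - mean) for t in range(len(dy) - 1)) / len(dy)
rho1_sim = gamma1 / gamma0
```

With q = 1 the theory gives ρ1 = −1/3, and the simulated autocorrelation of Δy_t lands close to it, illustrating why the two representations forecast identically when the parameters are constant.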


Estrogen produced in cultured hippocampal neurons is a functional regulator of a GABAergic machinery

JOURNAL OF NEUROSCIENCE RESEARCH, Issue 8 2006
Takamitsu Ikeda
Abstract Accumulating evidence suggests that estrogen is produced locally by neurons in the brain. We observed that a 48-hr treatment with the estrogen receptor antagonists ICI 182780 and tamoxifen decreased the level of glutamate decarboxylase (GAD)-65, a rate-limiting γ-aminobutyric acid (GABA)-synthesizing enzyme, in a dissociated hippocampal neuronal culture. Aromatase is an essential enzyme for estrogen biosynthesis. Treatment with an aromatase inhibitor decreased the GAD-65 level, indicating that estrogen biogenesis functions to maintain the level of this enzyme for GABAergic neurotransmission. Furthermore, insofar as the effect of ICI 182780 was observed equivalently in the presence of either brain-derived neurotrophic factor (BDNF) or the BDNF-receptor inhibitor K252a, estrogen probably regulates the GAD level independently of BDNF. Thus, estrogen produced by neurons is considered to be an intrinsic regulatory factor for neuronal networks that maintain GABAergic neurotransmission. © 2006 Wiley-Liss, Inc. [source]


Bayesian models for relative archaeological chronology building

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 4 2000
Caitlin E. Buck
For many years, archaeologists have postulated that the numbers of various artefact types found within excavated features should give insight into their relative dates of deposition even when stratigraphic information is not present. A typical data set used in such studies can be reported as a cross-classification table (often called an abundance matrix or, equivalently, a contingency table) of excavated features against artefact types. Each entry of the table represents the number of a particular artefact type found in a particular archaeological feature. Methodologies for attempting to identify temporal sequences on the basis of such data are commonly referred to as seriation techniques. Several different procedures for seriation, including both parametric and non-parametric statistics, have been used in an attempt to reconstruct relative chronological orders on the basis of such contingency tables. We develop some possible model-based approaches that might be used to aid in relative archaeological chronology building. We use the recently developed Markov chain Monte Carlo method based on Langevin diffusions to fit some of the models proposed. Predictive Bayesian model choice techniques are then employed to ascertain which of the models that we develop are most plausible. We analyse two data sets taken from the literature on archaeological seriation. [source]
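A simple non-Bayesian baseline for the same task is reciprocal averaging (one-dimensional correspondence analysis): iteratively score each feature by the average score of the artefact types it contains, and each type by the average score of the features containing it, then order features by their converged scores. The stdlib-only sketch below (made-up abundance matrix) recovers a shuffled band-structured sequence up to reversal; the model-based MCMC approach of the paper goes beyond such point orderings by quantifying the uncertainty in the chronology.

```python
def seriate(table, iters=200):
    """Order the rows of an abundance matrix by reciprocal averaging."""
    nrows, ncols = len(table), len(table[0])
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(nrows)) for j in range(ncols)]
    r = [float(i) for i in range(nrows)]          # arbitrary non-constant start
    for _ in range(iters):
        # Column scores: count-weighted averages of the current row scores.
        c = [sum(table[i][j] * r[i] for i in range(nrows)) / col_tot[j]
             for j in range(ncols)]
        # Row scores: count-weighted averages of the column scores.
        r = [sum(table[i][j] * c[j] for j in range(ncols)) / row_tot[i]
             for i in range(nrows)]
        # Standardize to avoid collapsing onto the trivial constant solution.
        mu = sum(r) / nrows
        sd = (sum((x - mu) ** 2 for x in r) / nrows) ** 0.5
        r = [(x - mu) / sd for x in r]
    return sorted(range(nrows), key=lambda i: r[i])

# Ideal band-structured (chronologically ordered) table with rows shuffled:
table = [
    [0, 3, 5, 3, 0],   # ideal position 2
    [5, 3, 0, 0, 0],   # ideal position 0
    [0, 0, 0, 3, 5],   # ideal position 4
    [2, 4, 2, 0, 0],   # ideal position 1
    [0, 0, 2, 4, 2],   # ideal position 3
]
order = seriate(table)   # row indices, earliest to latest (or reversed)
```

The recovered order is [1, 3, 0, 4, 2] or its reverse: seriation can never tell which end of the sequence is "early", which is one reason external (e.g. stratigraphic or absolute-dating) information still matters.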


Geography of morphological differentiation in Asellus aquaticus (Crustacea: Isopoda: Asellidae)

JOURNAL OF ZOOLOGICAL SYSTEMATICS AND EVOLUTIONARY RESEARCH, Issue 2 2009
S. Prevor
Abstract We used detailed morphometry and multivariate statistics to establish a general, large-scale racial differentiation in Asellus aquaticus (L.) sensu Racovitza. We ascertained that in surface populations a set of 11 morphometric characters might equivalently be represented by the size of the pleopod respiratory area alone. The analyses resulted in a distinct distribution pattern, with the large-respiratory-area populations disposed mainly along the Dinaric karst between southern Slovenia and western Macedonia and surrounded by the medium-respiratory-area morph, spatially irregularly substituted by the small-area morph. This pattern is in contradiction with the distribution pattern of molecularly defined clades (as shown by Verovnik et al. 2005). We could find no ecological, hydrographical or palaeogeographical explanations for such a distribution pattern either. The only hypothetical explanation would be the preservation of the large respiratory area as a plesiomorphic character in the comparatively sheltered karst habitats, while throughout the more easily accessible parts of the species range it was replaced by the 'modern' smaller area size. While a diminution of the respiratory area functionally means an increased sclerotization (hardening) of the pleopod IV–V exopodites, the endopodites of pleopods III–V remain less sclerotized and probably remain functional in respiration and osmoregulation. Zusammenfassung (translated from the German): The global racial differentiation of Asellus aquaticus (L.) sensu Racovitza was investigated using detailed morphometry and multivariate statistics. It turned out that the entire set of 11 morphometric characters can be replaced by the single character 'size of the pleopod respiratory area'. 
The analyses yielded a clear pattern in which populations with large respiratory areas are distributed mainly in the Dinaric karst between southern Slovenia and western Macedonia and are surrounded by morphs with medium-sized respiratory areas, which in turn are spatially scattered and replaced by morphs with small respiratory areas. This pattern contradicts the distribution of molecularly determined groups (Verovnik et al. 2005). We could find no ecological, hydrographical or palaeogeographical explanation for it. The only hypothetical explanation might be the preservation of the large respiratory areas as a plesiomorphic character in comparatively isolated karst regions, while in more easily colonized regions they were replaced by the 'modern' smaller respiratory areas. It must be emphasized that a reduction of the respiratory area is associated with sclerotization of the exopodites of pleopods IV–V, while the endopodites of pleopods III–V retain their weak sclerotization and thus probably remain active in respiration and osmoregulation. [source]


Eliminating spurious lipid sidebands in 1H MRS of breast lesions

MAGNETIC RESONANCE IN MEDICINE, Issue 2 2002
Patrick J. Bolan
Abstract Detecting metabolites in breast lesions by in vivo 1H MR spectroscopy can be difficult due to the abundance of mobile lipids in the breast, which can produce spurious sidebands that interfere with the metabolite signals. Two-dimensional J-resolved spectroscopy has been demonstrated in the brain as a means to eliminate these artifacts from a large water signal; coherent sidebands are resolved at their natural frequencies, leaving the noncoupled metabolite resonances in the zero-frequency trace of the 2D spectrum. This work demonstrates that the zero-frequency trace, or equivalently the average of spectra acquired with different echo times, can be used to separate noncoupled metabolite signals from the lipid-induced sidebands. The technique is demonstrated with simulations, phantom studies, and several breast lesions. Compared to the conventional approach using a single echo time, echo-time averaging provides increased sensitivity for the study of small and irregularly shaped lesions. Magn Reson Med 48:215–222, 2002. © 2002 Wiley-Liss, Inc. [source]
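The cancellation behind echo-time averaging can be sketched in a few lines: a noncoupled metabolite resonance has a phase that does not depend on the echo time, while a modulation sideband picks up a phase proportional to TE, so averaging over echo times spanning a full modulation cycle leaves the metabolite term untouched and nulls the sideband. The toy model below (made-up frequencies, stdlib complex arithmetic only) demonstrates the identity; it is a cartoon of the mechanism, not a simulation of the actual pulse sequence.

```python
import cmath

OMEGA_MET = 2.0            # metabolite frequency (arbitrary units), TE-independent phase
OMEGA_SB = 5.0             # sideband frequency
OMEGA_MOD = 2 * cmath.pi   # sideband phase advance per unit of echo time
N_TE = 16                  # echo times uniformly covering one modulation cycle

def fid(t, te):
    """One time point of the signal at echo time te: metabolite + sideband."""
    metabolite = cmath.exp(1j * OMEGA_MET * t)
    sideband = cmath.exp(1j * (OMEGA_SB * t + OMEGA_MOD * te))
    return metabolite + sideband

t = 0.37                                  # any fixed acquisition time point
tes = [k / N_TE for k in range(N_TE)]     # TEs spanning one full cycle
avg = sum(fid(t, te) for te in tes) / N_TE

# After averaging, only the metabolite term survives...
residual = abs(avg - cmath.exp(1j * OMEGA_MET * t))
# ...whereas any single-TE acquisition still carries the full sideband.
single = abs(fid(t, tes[0]) - cmath.exp(1j * OMEGA_MET * t))
```

The sum of the sideband phasors over one full cycle is exactly zero, which is the time-domain counterpart of the sidebands being "resolved away" from the zero-frequency trace of the 2D J-resolved spectrum.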