Linear Transformation (linear + transformation)
Selected Abstracts

A new space-vector transformation for four-conductor systems
EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 3 2000. A. Ferrero.

Linear transformations are often employed in the study of three-phase systems, since they simplify the equations describing the system behaviour. Among them, the Fortescue, Clarke and Park transformations are very widely employed. In particular, the last one leads to the formulation of the Space-Vector theory, which is currently employed in the fields of AC machine theory, power definitions and active filtering. The greatest limitation of these transformations is that they are restricted to systems with only three conductors. However, four-conductor systems are often present in the common practice of electric systems, and the application of the Space-Vector theory to such systems is not as straightforward as in the case of three-conductor systems: a zero-sequence quantity must be considered separately. This paper proposes a linear transformation that extends the properties of the Space-Vector theory to four-conductor systems and includes the Park transformation as a special case. The mathematical derivation of this new transformation is reported, and its application is discussed by means of some examples.
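The Ferrero abstract builds on the classical Clarke and Park transformations for three-conductor systems; its four-conductor extension is not reproduced here. As background, the Python/NumPy sketch below shows the standard amplitude-invariant Clarke transform followed by a Park rotation. The function names and the 50 Hz balanced example are illustrative assumptions, not material from the paper.

```python
import numpy as np

def clarke(i_a, i_b, i_c):
    """Amplitude-invariant Clarke transform: abc -> alpha/beta/zero components."""
    T = (2.0 / 3.0) * np.array([
        [1.0, -0.5,            -0.5],
        [0.0,  np.sqrt(3) / 2, -np.sqrt(3) / 2],
        [0.5,  0.5,             0.5],
    ])
    return T @ np.array([i_a, i_b, i_c])

def park(alpha, beta, theta):
    """Park transform: rotate the alpha/beta space vector into the dq frame at angle theta."""
    d =  alpha * np.cos(theta) + beta * np.sin(theta)
    q = -alpha * np.sin(theta) + beta * np.cos(theta)
    return d, q

# A balanced three-phase set maps to a constant-magnitude rotating space vector.
t = 0.0
theta = 2 * np.pi * 50 * t            # electrical angle of a 50 Hz system at time t
i_a = np.cos(theta)
i_b = np.cos(theta - 2 * np.pi / 3)
i_c = np.cos(theta + 2 * np.pi / 3)
alpha, beta, zero = clarke(i_a, i_b, i_c)
print(park(alpha, beta, theta))       # -> approximately (1.0, 0.0) for the balanced case
```

For a balanced set the zero-sequence term vanishes, which is exactly the quantity that must be handled separately when a fourth conductor is present.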
The Substance of Sexual Difference: Change and Persistence in Representations of the Body in Eighteenth-Century England
GENDER & HISTORY, Issue 2 2002. Karen Harvey.

The claims of Thomas Laqueur for a shift from a one-sex to a two-sex model of sexual difference are incorporated into many recent histories of gender in England between 1650 and 1850. Yet the Laqueurian narrative is not supported by discussions of the substance of sexual difference in eighteenth-century erotic books. This article argues that different models of sexual difference were not mutually exclusive and did not change in linear fashion, but that the themes of sameness and difference were strategically deployed in the same period. Thus, there was an enduring synchronic diversity which undermines claims for linear transformation.

A covariance-adaptive approach for regularized inversion in linear models
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2007. Christopher Kotsakis.

The optimal inversion of a linear model under the presence of additive random noise in the input data is a typical problem in many geodetic and geophysical applications. Various methods have been developed and applied for the solution of this problem, ranging from the classic principle of least-squares (LS) estimation to more complex inversion techniques such as Tikhonov-Phillips regularization, truncated singular value decomposition, generalized ridge regression, numerical iterative methods (Landweber, conjugate gradient) and others. In this paper, a new type of optimal parameter estimator for the inversion of a linear model is presented. The proposed methodology is based on a linear transformation of the classic LS estimator and it satisfies two basic criteria. First, it provides a solution for the model parameters that is optimally fitted (in an average quadratic sense) to the classic LS parameter solution. Second, it complies with an external user-dependent constraint that specifies a priori the error covariance (CV) matrix of the estimated model parameters. The formulation of this constrained estimator offers a unified framework for the description of many regularization techniques that are systematically used in geodetic inverse problems, particularly those methods that correspond to an eigenvalue filtering of the ill-conditioned normal matrix in the underlying linear model. The significance of our study lies in the fact that it adds an alternative perspective on the statistical properties and the regularization mechanism of many inversion techniques commonly used in geodesy and geophysics, by interpreting them as a family of 'CV-adaptive' parameter estimators that obey a common optimality criterion and differ only in the pre-selected form of their error CV matrix under a fixed model design.
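The Kotsakis abstract names Tikhonov-Phillips regularization as one member of the family of estimators it unifies. The NumPy sketch below contrasts plain least squares with generic Tikhonov damping on an ill-conditioned design matrix; it is not the paper's covariance-adaptive estimator, and the matrix, noise level and damping parameter are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# An ill-conditioned design matrix: two nearly collinear columns.
A = np.column_stack([
    np.linspace(0, 1, 50),
    np.linspace(0, 1, 50) + 1e-4 * rng.standard_normal(50),
])
x_true = np.array([2.0, -1.0])
y = A @ x_true + 0.01 * rng.standard_normal(50)

# Ordinary least squares: solve the normal equations (A^T A) x = A^T y.
x_ls = np.linalg.solve(A.T @ A, A.T @ y)

# Tikhonov regularization: (A^T A + lambda * I) x = A^T y, which damps
# (filters) the small eigenvalues of the normal matrix and stabilizes the solution.
lam = 1e-3
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ y)

print("condition number of A^T A:", np.linalg.cond(A.T @ A))
print("LS estimate:      ", x_ls)
print("Tikhonov estimate:", x_tik)
```

The regularized estimate is biased but has a much smaller error covariance; the paper's contribution is to treat that error covariance as the quantity the user specifies directly.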
Identification and adaptive control of some stochastic distributed parameter systems
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 6 2001. B. Pasik-Duncan.

An important class of controlled linear stochastic distributed parameter systems is that with boundary or point control. A survey of some existing adaptive control problems, with their solutions, for the boundary or point control of partially known linear stochastic distributed parameter systems is presented. The distributed parameter system is described by an analytic semigroup with cylindrical white noise and a control that occurs only on the boundary or at discrete points. The unknown parameters in the model appear affinely in both the infinitesimal generator of the semigroup and the linear transformation of the control. The noise in the system is a cylindrical white Gaussian noise. Strong consistency is verified for a family of least-squares estimates of the unknown parameters. For a quadratic cost functional of the state and the control, the certainty equivalence control is self-optimizing; that is, the family of average costs converges to the optimal ergodic cost. Copyright © 2001 John Wiley & Sons, Ltd.

Normal form representation of control systems
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 5 2002. Daizhan Cheng.

This paper investigates the normal form representation of control systems. First, as numerical tools, we develop an algorithm for normal form expression and the matrix representation of the Lie derivative of a linear vector field over homogeneous vector fields. The concept of normal form is modified. Necessary and sufficient conditions for a linear transformation to maintain the Brunovsky canonical form are obtained. It is then shown that the shift term can always be linearized up to any degree. Based on this fact, a linearization procedure is proposed and the related algorithms are presented. Least-squares linear approximations are proposed for non-linearizable systems. Finally, the method is applied to the ball and beam example. The effort is focused on the numerical and computer realization of the linearization process. Copyright © 2002 John Wiley & Sons, Ltd.

Scaling and Testing Multiplicative Combinations in the Expectancy-Value Model of Attitudes
JOURNAL OF APPLIED SOCIAL PSYCHOLOGY, Issue 9 2008. Icek Ajzen.

This article examines the multiplicative combination of belief strength by outcome evaluation in the expectancy-value model of attitudes. Because a linear transformation of a belief strength measure results in a nonlinear transformation of its product with outcome evaluation, use of unipolar or bipolar scoring must be empirically justified. Also, the claim that the Belief × Evaluation product fails to explain significant variance in attitudes is found to be baseless. In regression analyses, the main effect of belief strength takes account of the outcome's valence, and the main effect of outcome evaluation incorporates the outcome's perceived likelihood. Simulated data showed that multiplication adds substantially to the prediction of attitudes only when belief and evaluation measures cover the full range of potential scores.
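Ajzen's point about unipolar versus bipolar scoring can be made concrete with a few lines of arithmetic. The NumPy sketch below uses two hypothetical respondents and made-up ratings (not data from the article) to show that adding a constant to the belief-strength scale, although a linear transformation of the item, can reverse the ordering of the summed Belief × Evaluation products.

```python
import numpy as np

# Illustrative only: two hypothetical respondents, two beliefs each.
# Rows = respondents, columns = beliefs; evaluations are scored bipolar (-3..+3).
beliefs_bipolar = np.array([[+3, -3],
                            [+1, +1]])
evaluations     = np.array([[+3, +3],
                            [+1, +1]])

# Recoding belief strength from bipolar (-3..+3) to unipolar (1..7) is the
# linear shift b -> b + 4, but the summed products are not a linear
# transformation of one another across respondents.
beliefs_unipolar = beliefs_bipolar + 4

attitude_bipolar  = (beliefs_bipolar  * evaluations).sum(axis=1)   # [ 0,  2]
attitude_unipolar = (beliefs_unipolar * evaluations).sum(axis=1)   # [24, 10]

print(attitude_bipolar)    # respondent 2 holds the more favourable attitude
print(attitude_unipolar)   # respondent 1 now scores higher: the ordering flips
```

Because the two codings can rank respondents differently, the choice between them has to be settled empirically rather than by convention, which is the abstract's argument.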
Recentering and Realigning the SAT Score Distributions: How and Why
JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 1 2002. Neil J. Dorans.

The process employed to produce the conversions that take scores from the original SAT scales to recentered scales, in which reference group scores are centered near the midpoint of the score-reporting range, is laid out. For the purposes of this article, SAT Verbal and SAT Mathematical scores were placed on recentered scales, which have reporting ranges of 920 to 980, means of 950, and standard deviations of 11. (The 920-to-980 scale is used in this article to highlight the distinction between it and the old 200-to-800 scale. In actuality, recentered scores were reported on a 200-to-800 scale.) Recentering was accomplished via a linear transformation of normally distributed scores that were obtained from a continuized, smoothed frequency distribution of original SAT scores that were originally on augmented two-digit scales (i.e., discrete scores rounded to either 0 or 5 in the third decimal place). These discrete scores were obtained for all students in the 1990 Reference Group using 35 different editions of the SAT spanning October 1988 to June 1990. The performance of this 1990 Reference Group on the original and recentered scales is described. The effects of recentering on scores of individuals and the 1990 Reference Group are also examined. Finally, recentering did not occur solely on the basis of its technical merit. Issues associated with converting recentering from a possibility into a reality are discussed.

Uncovering a Latent Multinomial: Analysis of Mark-Recapture Data with Misidentification
BIOMETRICS, Issue 1 2010. William A. Link.

Natural tags based on DNA fingerprints or natural features of animals are now becoming very widely used in wildlife population biology. However, classic capture-recapture models do not allow for misidentification of animals, which is a potentially very serious problem with natural tags. Statistical analysis of misidentification processes is extremely difficult using traditional likelihood methods but is easily handled using Bayesian methods. We present a general framework for Bayesian analysis of categorical data arising from a latent multinomial distribution. Although our work is motivated by a specific model for misidentification in closed-population capture-recapture analyses, with crucial assumptions which may not always be appropriate, the methods we develop extend naturally to a variety of other models with similar structure. Suppose that observed frequencies f are a known linear transformation f = A′x of a latent multinomial variable x with cell probability vector π = π(θ). Given that the full conditional distributions [θ | x] can be sampled, implementation of Gibbs sampling requires only that we can sample from the full conditional distribution [x | f, θ], which is made possible by knowledge of the null space of A′. We illustrate the approach using two data sets with individual misidentification, one simulated, the other summarizing recapture data for salamanders based on natural marks.

Optimality for the linear quadratic non-Gaussian problem via the asymmetric Kalman filter
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 1 2004. Rosario Romera.

In the linear non-Gaussian case, the classical solution of the linear quadratic Gaussian (LQG) control problem is known to provide the best solution in the class of linear transformations of the plant output if optimality refers to classical least-squares minimization criteria. In this paper, the adaptive linear quadratic control problem is solved with optimality based on an asymmetric least-squares approach, which includes least-squares criteria as a special case. Our main result gives explicit solutions for this optimal quadratic control problem for partially observable dynamic linear systems with asymmetric observation errors. The main difficulty is to find the optimal state estimate. For this purpose, an asymmetric version of the Kalman filter based on asymmetric least-squares estimation is used. We illustrate the applicability of our approach with numerical results. Copyright © 2004 John Wiley & Sons, Ltd.
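The Romera abstract replaces the symmetric least-squares criterion behind the classical Kalman filter with an asymmetric one. That asymmetric filter is not reproduced here; the sketch below is only the standard scalar Kalman filter for a random-walk state, the baseline the paper modifies, with noise variances chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar random-walk state observed in additive noise (illustrative values).
q, r = 0.01, 0.25            # process and observation noise variances
x_true, x_hat, p = 0.0, 0.0, 1.0

for t in range(50):
    # Simulate the system.
    x_true += rng.normal(scale=np.sqrt(q))
    y = x_true + rng.normal(scale=np.sqrt(r))

    # Predict step: random-walk dynamics, so the mean carries over and the variance grows.
    p += q

    # Update step: standard (symmetric least-squares) Kalman gain and correction.
    k = p / (p + r)
    x_hat += k * (y - x_hat)
    p *= (1.0 - k)

print(f"final estimate {x_hat:.3f}, true state {x_true:.3f}, posterior variance {p:.4f}")
```

The asymmetric variant in the paper changes the quadratic penalty on the innovation y - x_hat so that positive and negative errors are weighted differently, which alters the gain computation shown above.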
Powered partial least squares discriminant analysis
JOURNAL OF CHEMOMETRICS, Issue 1 2009. Kristian Hovde Liland.

From the fundamental parts of PLS-DA, Fisher's canonical discriminant analysis (FCDA) and Powered PLS (PPLS), we develop the concept of powered PLS for classification problems (PPLS-DA). By taking advantage of a sequence of data-reducing linear transformations (consistent with the computation of ordinary PLS-DA components), PPLS-DA computes each component from the transformed data by maximization of a parameterized Rayleigh quotient associated with FCDA. Models found by the powered PLS methodology can help reveal the relevance of particular predictors and often require fewer and simpler components than their ordinary PLS counterparts. From the possibility of imposing restrictions on the powers available for optimization, we obtain an explorative approach to predictive modeling not available to the traditional PLS methods. Copyright © 2008 John Wiley & Sons, Ltd.

The Nut in Screw Theory
JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 8 2003. Michael Griffis.

This study in projective geometry reveals that the principle of duality applies to the screw. Here, the screw is demonstrated to be an element of a projective three-dimensional space (P3), right alongside the line. Dual elements for the screw and line are also revealed (the nut and spline). Reciprocity is demonstrated for a pair of screws, and incidence is demonstrated for a screw and its dual element. Reciprocity and incidence are invariant for projective transformations of P3, but only incidence is invariant for the more general linear transformations of screws. This latter transformation is analogous to a projective transformation of a projective five-dimensional space (P5), which is shown to induce a contact transformation of the original P3, where some points lying on a Kummer surface are directly mapped. © 2003 Wiley Periodicals, Inc.

Fluctuations in isometric muscle force can be described by one linear projection of low-frequency components of motor unit discharge rates
THE JOURNAL OF PHYSIOLOGY, Issue 24 2009. Francesco Negro.

The aim of the study was to investigate the relation between linear transformations of motor unit discharge rates and muscle force. Intramuscular (wire electrodes) and high-density surface EMG (13 × 5 electrode grid) signals were recorded from the abductor digiti minimi muscle of eight healthy men during 60 s contractions at 5%, 7.5% and 10% of the maximal force. Spike trains of a total of 222 motor units were identified from the EMG recordings with decomposition algorithms. Principal component analysis of the smoothed motor unit discharge rates indicated that one component (the first common component, FCC) described 44.2 ± 7.5% of the total variability of the smoothed discharge rates when computed over the entire contraction interval and 64.3 ± 10.2% of the variability when computed over 5 s intervals. When the FCC was computed from four or more motor units per contraction, it correlated with the force produced by the muscle (62.7 ± 10.1%) to a greater degree (P < 0.001) than the smoothed discharge rates of individual motor units (41.4 ± 7.8%). The correlation between the FCC and the force signal increased up to 71.8 ± 13.1% when the duration and the shape of the smoothing window for the discharge rates were similar to the average motor unit twitch force. Moreover, the coefficients of variation (CoV) for the force and for the FCC signal were correlated in all subjects (R2 range = 0.14-0.56; P < 0.05), whereas the CoV for force was correlated to the interspike interval variability in only one subject (R2 = 0.12; P < 0.05). Similar results were further obtained from measures on the tibialis anterior muscle of an additional eight subjects during contractions at forces up to 20% of the maximal force (e.g. the FCC explained 59.8 ± 11.0% of the variability of the smoothed discharge rates). In conclusion, one signal captures most of the underlying variability of the low-frequency components of motor unit discharge rates and explains a large part of the fluctuations in the motor output during isometric contractions.
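As a rough illustration of the analysis the Negro abstract describes, the sketch below builds synthetic motor unit discharge rates driven by a shared low-frequency signal, smooths them, extracts the first principal component (the "first common component"), and correlates it with a simulated force. All signals, window lengths and rates are invented for illustration and are not values or data from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, duration, n_units = 1000, 10.0, 8          # Hz, seconds, motor units (illustrative)
n = int(fs * duration)

# Common low-frequency drive shared by all units, plus independent noise per unit.
t = np.arange(n) / fs
common_drive = 0.5 * np.sin(2 * np.pi * 0.5 * t)
rates = 10 + 2 * common_drive[None, :] + 0.5 * rng.standard_normal((n_units, n))

# Smooth each discharge-rate signal with a 400 ms Hann window
# (a stand-in for a window shaped like an average twitch).
win = np.hanning(int(0.4 * fs))
win /= win.sum()
smoothed = np.array([np.convolve(r, win, mode="same") for r in rates])

# First principal component of the smoothed rates = the "first common component" (FCC).
centered = smoothed - smoothed.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
fcc = vt[0]                                     # time course of the first component
explained = s[0] ** 2 / (s ** 2).sum()

# Correlate the FCC with a simulated force that follows the common drive.
force = 1.0 + common_drive + 0.05 * rng.standard_normal(n)
r_fcc_force = np.corrcoef(fcc, force)[0, 1]
print(f"variance explained by FCC: {explained:.2f}, |corr(FCC, force)|: {abs(r_fcc_force):.2f}")
```

The sign of a principal component is arbitrary, hence the absolute value of the correlation; with a stronger common drive or longer smoothing window the FCC explains more of the variance, mirroring the dependence on the smoothing window reported in the abstract.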