One Dimension (one + dimension)


Kinds of One Dimension

  • only one dimension


  • Selected Abstracts


    Ethnic Labels and Ethnic Identity as Predictors of Drug Use among Middle School Students in the Southwest

    JOURNAL OF RESEARCH ON ADOLESCENCE, Issue 1 2001
    Flavio Francisco Marsiglia
    This article explores differences in the self-reported drug use and exposure to drugs of an ethnically diverse group of 408 seventh-grade students from a large city in the southwest. We contrast the explanatory power of ethnic labels (African American, non-Hispanic White, Mexican American, and mixed ethnicity) and two dimensions of ethnic identity in predicting drug use. One dimension focuses on perceived ethnically consistent behavior, speech, and looks, while the other gauges a sense of ethnic pride. Ethnic labels were found to be somewhat useful in identifying differences in drug use, but the two ethnic identity measures, by themselves, did not generally help to explain differences in drug use. In conjunction, however, ethnic labels and ethnic identity measures explained far more of the differences in drug use than either did alone. The findings indicate that the two dimensions of ethnic identity predict drug outcomes in opposite ways, and these relations are different for minority students and non-Hispanic White students. Generally, African American, Mexican American, and mixed-ethnicity students with a strong sense of ethnic pride reported less drug use and exposure, while ethnically proud White students reported more. Ethnic minority students who viewed their behavior, speech, and looks as consistent with their ethnic group reported more drug use and exposure, while their White counterparts reported less. These findings are discussed, and recommendations for future research are provided. [source]


    An integrated model for statistical and vision monitoring in manufacturing transitions

    QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 6 2003
    Harriet Black Nembhard
    Abstract Manufacturing transitions have been increasing due to higher pressures for product variety. One dimension of this variety is color. A major quality control challenge is to regulate the color by capturing data on color in real-time during the operation and to use it to assess the opportunities for good parts. Control charting, when applied to a stable state process, is an effective monitoring tool to continuously check for process shifts or upsets. However, the presence of transition events can impede the normal performance of a traditional control chart. In this paper, we present an integrated model for statistical and vision monitoring using a tracking signal to determine the start of the transition and a confirmation signal to ensure that any process oscillation has concluded. We also developed an automated color analysis and forecasting system (ACAFS) that we can adjust and calibrate to implement this methodology in different production processes. We use a color transition process in plastic extrusion to illustrate a transition event and demonstrate our proposed methodology. Copyright © 2003 John Wiley & Sons, Ltd. [source]


    Does Plant Variety Protection Contribute to Crop Productivity?

    THE JOURNAL OF WORLD INTELLECTUAL PROPERTY, Issue 2 2009
    Lessons for Developing Countries from US Wheat Breeding
    The application of intellectual property rights (IP) in developing countries remains highly controversial, particularly as regards applications to food/agriculture and pharmaceuticals, which have direct ramifications for large numbers of people. One dimension complicating a reasoned dialogue on the public benefits of IP, particularly when many developing countries are implementing the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) as mandated by membership in the World Trade Organization, is a dearth of information on their actual operation and effects. In this study, we address one particular aspect of the limited documentation on the effects of IP systems: the effect of plant variety protection (PVP) on the genetic productivity potential of varieties. Specifically, we examine wheat varieties in Washington State, United States, which are produced by both public and private sector breeders. Results from the study show that implementation of PVP attracted private investment in open-pollinated crops such as wheat in the United States and provided greater numbers of high-yielding varieties of these crops from both the public and private sectors. These results may provide some insights for policy makers from developing countries on the effects of IP for plants as their TRIPS commitments are being implemented. [source]


    Learning How and Learning What: Effects of Tacit and Codified Knowledge on Performance Improvement Following Technology Adoption

    DECISION SCIENCES, Issue 2 2003
    Amy C. Edmondson
    ABSTRACT This paper examines effects of tacit and codified knowledge on performance improvement as organizations gain experience with a new technology. We draw from knowledge management and learning curve research to predict improvement rate heterogeneity across organizations. We first note that the same technology can present opportunities for improvement along more than one dimension, such as efficiency and breadth of use. We compare improvement for two dimensions: one in which the acquisition of codified knowledge leads to improvement and another in which improvement requires tacit knowledge. We hypothesize that improvement rates across organizations will be more heterogeneous for dimensions of performance that rely on tacit knowledge than for those that rely on codified knowledge (H1), and that group membership stability predicts improvement rates for dimensions relying on tacit knowledge (H2). We further hypothesize that when performance relies on codified knowledge, later adopters should improve more quickly than earlier adopters (H3). All three hypotheses are supported in a study of 15 hospitals learning to use a new surgical technology. Implications for theory and practice are discussed. [source]


    Neuropsychological effects of hyperbaric oxygen therapy in cerebral palsy

    DEVELOPMENTAL MEDICINE & CHILD NEUROLOGY, Issue 7 2002
    Paule Hardy
    We conducted a double-blind placebo study to investigate the claim that hyperbaric oxygen treatment (HBO2) improves the cognitive status of children with cerebral palsy (CP). Of 111 children diagnosed with CP (aged 4 to 12 years), only 75 were suitable for neuropsychological testing, assessing attention, working memory, processing speed, and psychosocial functioning. The children received 40 sessions of HBO2 or sham treatment over a 2-month period. Children in the active treatment group were exposed for 1 hour to 100% oxygen at 1.75 atmospheres absolute (ATA), whereas those in the sham group received only air at 1.3 ATA. Children in both groups showed better self-control and significant improvements in auditory attention and visual working memory compared with the baseline. However, no statistical difference was found between the two treatments. Furthermore, the sham group improved significantly on eight dimensions of the Conners' Parent Rating Scale, whereas the active treatment group improved only on one dimension. Most of these positive changes persisted for 3 months. No improvements were observed in either group for verbal span, visual attention, or processing speed. [source]


    Value Maximisation, Stakeholder Theory, and the Corporate Objective Function

    EUROPEAN FINANCIAL MANAGEMENT, Issue 3 2001
    Michael Jensen
    This paper examines the role of the corporate objective function in corporate productivity and efficiency, social welfare, and the accountability of managers and directors. I argue that since it is logically impossible to maximise in more than one dimension, purposeful behaviour requires a single valued objective function. Two hundred years of work in economics and finance implies that in the absence of externalities and monopoly (and when all goods are priced), social welfare is maximised when each firm in an economy maximises its total market value. Total value is not just the value of the equity but also includes the market values of all other financial claims including debt, preferred stock, and warrants. In sharp contrast, stakeholder theory argues that managers should make decisions so as to take account of the interests of all stakeholders in a firm (including not only financial claimants, but also employees, customers, communities, governmental officials and, under some interpretations, the environment, terrorists and blackmailers). Because the advocates of stakeholder theory refuse to specify how to make the necessary tradeoffs among these competing interests, they leave managers with a theory that makes it impossible for them to make purposeful decisions. With no way to keep score, stakeholder theory makes managers unaccountable for their actions. It seems clear that such a theory can be attractive to the self-interest of managers and directors. Creating value takes more than acceptance of value maximisation as the organisational objective. As a statement of corporate purpose or vision, value maximisation is not likely to tap into the energy and enthusiasm of employees and managers to create value. Seen in this light, change in long-term market value becomes the scorecard that managers, directors, and others use to assess success or failure of the organisation. The choice of value maximisation as the corporate scorecard must be complemented by a corporate vision, strategy and tactics that unite participants in the organisation in its struggle for dominance in its competitive arena. A firm cannot maximise value if it ignores the interest of its stakeholders. I offer a proposal to clarify what I believe is the proper relation between value maximisation and stakeholder theory. I call it enlightened value maximisation, and it is identical to what I call enlightened stakeholder theory. Enlightened value maximisation utilises much of the structure of stakeholder theory but accepts maximisation of the long run value of the firm as the criterion for making the requisite tradeoffs among its stakeholders. Managers, directors, strategists, and management scientists can benefit from enlightened stakeholder theory. Enlightened stakeholder theory specifies long-term value maximisation or value seeking as the firm's objective and therefore solves the problems that arise from the multiple objectives that accompany traditional stakeholder theory. I also discuss the Balanced Scorecard, the managerial equivalent of stakeholder theory. The same conclusions hold. Balanced Scorecard theory is flawed because it presents managers with a scorecard which gives no score; that is, no single-valued measure of how they have performed. Thus managers evaluated with such a system (which can easily have two dozen measures and provides no information on the tradeoffs between them) have no way to make principled or purposeful decisions.
The solution is to define a true (single dimensional) score for measuring performance for the organisation or division (and it must be consistent with the organisation's strategy). Given this, we then encourage managers to use measures of the drivers of performance to understand better how to maximise their score. As long as their score is defined properly (and for lower levels in the organisation it will generally not be value), this will enhance their contribution to the firm. [source]


    Analysing soil variation in two dimensions with the discrete wavelet transform

    EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 4 2004
    R. M. Lark
    Summary Complex spatial variation in soil can be analysed by wavelets into contributions at several scales or resolutions. The first applications were to data recorded at regular intervals in one dimension, i.e. on transects. The theory extends readily to two dimensions, but the application to small sets of gridded data such as one is likely to have from a soil survey requires special adaptation. This paper describes the extension of wavelet theory to two dimensions. The adaptation of the wavelet filters near the limits of a region that was successful in one dimension proved unsuitable in two dimensions. We therefore had to pad the data out symmetrically beyond the limits to minimize edge effects. With the above modifications and Daubechies's wavelet with two vanishing moments the analysis is applied to soil thickness, slope gradient, and direct solar beam radiation at the land surface recorded at 100-m intervals on a 60 × 101 square grid in south-west England. The analysis revealed contributions to the variance at several scales and for different directions and correlations between the variables that were not evident in maps of the original data. In particular, it showed how the thickness of the soil increasingly matches the geological structure with increasing dilation of the wavelet, this relationship being local to the strongly aligned outcrops. The analysis reveals a similar pattern in slope gradient, and a negative correlation with soil thickness, most clearly evident at the coarser scales. The solar beam radiation integrates slope gradient and azimuth, and the analysis emphasizes the relations with topography at the various spatial scales and reveals additional effects of aspect on soil thickness. [source]
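
    A minimal sketch of the kind of two-dimensional multiresolution analysis described above, assuming the PyWavelets library, the 'db2' Daubechies wavelet with two vanishing moments, and symmetric extension beyond the region limits; the grid and its values below are placeholders, not the survey data.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
soil_thickness = rng.normal(size=(60, 101))   # placeholder for the gridded survey values

# Multiresolution analysis with symmetric padding beyond the region limits,
# which minimises edge effects on small gridded data sets.
coeffs = pywt.wavedec2(soil_thickness, wavelet='db2', mode='symmetric', level=3)

# Energy of the detail coefficients at each dilation and direction
# (horizontal, vertical, diagonal), i.e. scale- and direction-wise
# contributions to the variation of the field.
details = coeffs[1:]                          # coarsest-to-finest detail tuples
for level, (cH, cV, cD) in enumerate(reversed(details), start=1):
    print(f"dilation 2^{level}: H {np.sum(cH**2):.1f}  "
          f"V {np.sum(cV**2):.1f}  D {np.sum(cD**2):.1f}")
```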


    How Theories of Financial Intermediation and Corporate Risk-Management Influence Bank Risk-Taking Behavior

    FINANCIAL MARKETS, INSTITUTIONS & INSTRUMENTS, Issue 5 2001
    Michael S. Pagano
    This paper examines the rationales for risk-taking and risk-management behavior from both a corporate finance and a banking perspective. After combining the theoretical insights from the corporate finance and banking literatures related to hedging and risk-taking, the paper reviews empirical tests based on these theories to determine which of these theories are best supported by the data. Managerial incentives are the most consistently supported rationale for describing how banks manage risk. In particular, moderate/high levels of equity ownership reduce bank risk while positive amounts of stock option grants increase bank risk-taking behavior. The review of empirical tests in the banking literature also suggests that financial intermediaries coordinate different aspects of risk (e.g., credit and interest rate risk) in order to maintain a certain level of total risk. The empirical results indicate hedgeable risks such as interest rate risk represent only one dimension of the risk-management problem. This implies empirical tests of the theories of corporate risk-management need to consider individual sub-components of total risk and the bank's ability to trade these risks in a competitive financial market. This finding is consistent with the reality that banks have non-zero expected financial distress costs and bank managers cannot fully diversify their bank-related personal investments. [source]


    Mechanisms Controlling Crystal Habits of Gold and Silver Colloids

    ADVANCED FUNCTIONAL MATERIALS, Issue 7 2005
    C. Lofton
    Abstract Examples of gold and silver anisotropic colloids, such as prisms and rods, have appeared in the literature for many years. In most cases, the morphologies of these thermodynamically unfavorable particles have been explained by the particular reaction environment in which they were synthesized. The mechanisms used to explain the growth generally fall into two categories, one in which chemically adsorbed molecules regulate the growth of specific crystal faces kinetically, and the other where micelle-forming surfactants physically direct the shape of the particle. This paper raises questions about the growth of anisotropic metal colloids that the current mechanisms cannot adequately address, specifically, the formation of multiple shapes in a single homogeneous reaction and the appearance of similar structures in very different synthesis schemes. These observations suggest that any growth mechanism should primarily take into consideration nucleation and kinetics, and not only thermodynamics or physical constrictions. The authors suggest an alternative mechanism where the presence and orientation of twin planes in these face-centered cubic (fcc) metals direct the shape of the growing particles. This explanation follows that used for silver halide crystals, and has the advantage of explaining particle growth in many synthesis methods. In this mechanism, twin planes generate reentrant grooves, favorable sites for the attachment of adatoms. Shape and structural data are presented for gold and silver particles synthesized using several different techniques to support this new model. Triangular prisms are suggested to contain a single twin plane which directs the growth of the initial seed in two dimensions, but limits the final size of the prism. Hexagonal platelets are suggested to contain two parallel twin planes that allow the fast growing edges to regenerate one another, allowing large sizes and aspect ratios to form. Rods and wires were found to have a fivefold symmetry, which may only allow growth in one dimension. It is expected that a superior mechanistic understanding will permit shape-selective synthesis schemes to be developed. [source]


    Quasi optimal finite difference method for Helmholtz problem on unstructured grids

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 10 2010
    Daniel T. Fernandes
    Abstract A quasi optimal finite difference method (QOFD) is proposed for the Helmholtz problem. The stencils' coefficients are obtained numerically by minimizing a least-squares functional of the local truncation error for plane wave solutions in any direction. In one dimension this approach leads to a nodally exact scheme, with no truncation error, for uniform or non-uniform meshes. In two dimensions, when applied to a uniform Cartesian grid, a 9-point sixth-order scheme is derived with the same truncation error of the quasi-stabilized finite element method (QSFEM) introduced by Babuška et al. (Comp. Meth. Appl. Mech. Eng. 1995; 128:325-359). Similarly, a 27-point sixth-order stencil is derived in three dimensions. The QOFD formulation, proposed here, is naturally applied on uniform, non-uniform and unstructured meshes in any dimension. Numerical results are presented showing optimal rates of convergence and reduced pollution effects for large values of the wave number. Copyright © 2009 John Wiley & Sons, Ltd. [source]
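
    The QOFD coefficients themselves are not reproduced here, but the sketch below illustrates, in one dimension and with assumed values of k and h, the property that the least-squares fit targets: the classical stencil leaves a nonzero truncation error for plane waves, whereas the nodally exact stencil [1, -2cos(kh), 1] annihilates them.

```python
# A minimal 1-D sketch (not the QOFD code) of the residual left by a stencil
# when applied to the plane waves exp(+-ikx) at a uniform-mesh node.
import numpy as np

k, h = 5.0, 0.05
x = np.array([-h, 0.0, h])

def residual(coeffs, k, x):
    """Magnitude of the stencil applied to exp(ikx), evaluated at the centre node."""
    return np.abs(np.sum(coeffs * np.exp(1j * k * x)))

standard = np.array([1.0, -2.0 + (k * h) ** 2, 1.0])        # classical FD for u'' + k^2 u = 0
nodally_exact = np.array([1.0, -2.0 * np.cos(k * h), 1.0])  # exact for plane waves in 1-D

print("standard stencil residual     :", residual(standard, k, x))       # O((kh)^4)
print("nodally exact stencil residual:", residual(nodally_exact, k, x))  # ~ machine zero
```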


    Multidimensional FEM-FCT schemes for arbitrary time stepping

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 3 2003
    D. Kuzmin
    Abstract The flux-corrected-transport paradigm is generalized to finite-element schemes based on arbitrary time stepping. A conservative flux decomposition procedure is proposed for both convective and diffusive terms. Mathematical properties of positivity-preserving schemes are reviewed. A nonoscillatory low-order method is constructed by elimination of negative off-diagonal entries of the discrete transport operator. The linearization of source terms and extension to hyperbolic systems are discussed. Zalesak's multidimensional limiter is employed to switch between linear discretizations of high and low order. A rigorous proof of positivity is provided. The treatment of non-linearities and iterative solution of linear systems are addressed. The performance of the new algorithm is illustrated by numerical examples for the shock tube problem in one dimension and scalar transport equations in two dimensions. Copyright © 2003 John Wiley & Sons, Ltd. [source]
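
    As a concrete illustration of the low-order construction mentioned above (a sketch following the stated idea of eliminating negative off-diagonal entries, not the authors' code), the artificial diffusion operator can be assembled as follows; the matrix K is an arbitrary placeholder.

```python
# Build a non-oscillatory low-order operator L = K + D from a discrete
# transport operator K by adding symmetric artificial diffusion:
#   d_ij = max(0, -k_ij, -k_ji) for i != j,   d_ii = -sum_{j != i} d_ij.
import numpy as np

def low_order_operator(K):
    n = K.shape[0]
    D = np.zeros_like(K)
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = max(0.0, -K[i, j], -K[j, i])
    np.fill_diagonal(D, 0.0)
    np.fill_diagonal(D, -D.sum(axis=1))      # zero row sums: conservative correction
    return K + D

K = np.array([[-1.0,  1.2, -0.2],
              [ 0.3, -1.0,  0.7],
              [-0.1,  0.9, -0.8]])
L = low_order_operator(K)
assert (L - np.diag(np.diag(L)) >= 0).all()  # no negative off-diagonal entries remain
print(L)
```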


    High-resolution, monotone solution of the adjoint shallow-water equations

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 2 2002
    Brett F. Sanders
    Abstract A monotone, second-order accurate numerical scheme is presented for solving the differential form of the adjoint shallow-water equations in generalized two-dimensional coordinates. Fluctuation-splitting is utilized to achieve a high-resolution solution of the equations in primitive form. One-step and two-step schemes are presented and shown to achieve solutions of similarly high accuracy in one dimension. However, the two-step method is shown to yield more accurate solutions to problems in which unsteady wave speeds are present. In two dimensions, the two-step scheme is tested in the context of two parameter identification problems, and it is shown to accurately transmit the information needed to identify unknown forcing parameters based on measurements of the system response. The first problem involves the identification of an upstream flood hydrograph based on downstream depth measurements. The second problem involves the identification of a long wave state in the far-field based on near-field depth measurements. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    Designer polynomials, discrete variable representations, and the Schrödinger equation

    INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 4-5 2002
    Charles A. Weatherford
    Abstract The general procedure for constructing a set of orthonormal polynomials is given for an arbitrary positive definite weight function, w(x), in the interval [a, b]. The Lanczos method is used to generate the three-term recursion relation, which is then used to produce the polynomial coefficients. A discrete variable representation (DVR) is constructed from Gaussian nodes and weights that result from the three-term recursion relation. These are termed "designer polynomials" and the associated "designer DVRs." It will be shown by construction that every such set of "synthetic polynomials" carries an associated DVR. The term "designer" derives from the fact that the interval [a, b] and the weight function w(x) are arbitrary (except that w(x) must be positive definite on [a, b] and must have continuous derivatives except at a finite number of isolated discontinuities) and may be adapted to the physical problem of interest. The difficulties of applying a DVR to a "bare" Coulomb problem will be illustrated on a "toy" model in one dimension (1-D hydrogen atom). A solution for the 1-D Coulomb problem will be given, thereby motivating the need for designer DVRs. In doing so, a new set of polynomials is defined with a weight function w(x) = |x|^k exp(−λ|x|) (with k = −1, 0, +1, +2, …) between the symmetrical limits [−∞, +∞]. These are called "synthetic Cartesian exponential polynomials (SCEP)." These polynomials are then used in a spectral and pseudospectral (DVR) representation to solve the 1-D hydrogen atom problem. © 2002 Wiley Periodicals, Inc. Int J Quantum Chem, 2002 [source]
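
    A minimal numerical sketch of the pipeline the abstract describes, under the assumption that a discretized Stieltjes procedure is an acceptable stand-in for the Lanczos construction of the three-term recursion: recursion coefficients are generated for an arbitrary positive weight on [a, b], and the Gaussian (DVR) nodes and weights then follow from the Jacobi-matrix eigenproblem (Golub-Welsch). The example weight is hypothetical.

```python
import numpy as np

def recursion_coefficients(w, a, b, n, m=4000):
    """Return alpha[0..n-1], beta[0..n-1] of the monic three-term recursion
    p_{k+1}(x) = (x - alpha_k) p_k(x) - beta_k p_{k-1}(x), computed by a
    crude discretization of the weight on a fine grid (Stieltjes procedure)."""
    x = np.linspace(a, b, m)
    wgt = w(x) * (b - a) / m
    alpha, beta = np.zeros(n), np.zeros(n)
    p_prev, p = np.zeros(m), np.ones(m)
    beta[0] = wgt.sum()                       # zeroth moment of the weight
    for k in range(n):
        norm = np.sum(wgt * p * p)
        alpha[k] = np.sum(wgt * x * p * p) / norm
        if k > 0:
            beta[k] = norm / norm_prev
        p_next = (x - alpha[k]) * p - (beta[k] if k > 0 else 0.0) * p_prev
        p_prev, p, norm_prev = p, p_next, norm
    return alpha, beta

def dvr_nodes_weights(alpha, beta):
    """Golub-Welsch: nodes are eigenvalues of the symmetric Jacobi matrix,
    weights come from the first components of its eigenvectors."""
    J = np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1) + np.diag(np.sqrt(beta[1:]), -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = beta[0] * vecs[0, :] ** 2
    return nodes, weights

# Example "designer" weight on [-1, 1] (purely illustrative choice)
alpha, beta = recursion_coefficients(lambda x: np.exp(-np.abs(x)), -1.0, 1.0, n=8)
nodes, weights = dvr_nodes_weights(alpha, beta)
print(nodes)
print(weights)
```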


    Ranking projects for an electricity utility using ELECTRE III

    INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 4 2007
    John Buchanan
    Abstract Ranking and selecting projects is a common yet often difficult task with typically more than one dimension for measuring project impacts and more than one decision maker. We describe a project selection methodology developed and used since 1998 for Mighty River Power, a New Zealand electricity generator, which incorporates the ELECTRE III decision support tool. Although several other multiple criteria approaches could have been used, features of ELECTRE III such as outranking, and indifference and preference thresholds were well received by our decision makers. More than the use of a specific decision support tool, we focus particularly on the successful implementation of a simple, structured multicriteria methodology for a yearly project selection exercise and document this over 8 years in a changing managerial context. [source]
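
    For readers unfamiliar with ELECTRE III, the sketch below (an illustration, not the utility's implementation) shows how the indifference and preference thresholds mentioned above enter the partial concordance index on a single criterion.

```python
# Partial concordance index for "a outranks b" on one criterion to be maximized:
# full support while b's advantage is within the indifference threshold q,
# no support once it exceeds the preference threshold p, linear in between.
def partial_concordance(g_a, g_b, q, p):
    diff = g_b - g_a
    if diff <= q:                 # difference within indifference threshold
        return 1.0
    if diff >= p:                 # b strictly preferred: no support for a outranking b
        return 0.0
    return (p - diff) / (p - q)   # linear interpolation between the thresholds

# Example: project a scores 60, project b scores 70 on a benefit criterion
print(partial_concordance(60.0, 70.0, q=5.0, p=20.0))   # 0.666...
```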


    Analysis of scattering from polydisperse structure using Mellin convolution

    JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 2 2006
    Norbert Stribeck
    This study extends a mathematical concept for the description of heterogeneity and polydispersity in the structure of materials to multiple dimensions. In one dimension, the description of heterogeneity by means of Mellin convolution is well known. In several papers by the author, the method has been applied to the analysis of data from materials with one-dimensional structure (layer stacks or fibrils along their principal axis). According to this concept, heterogeneous structures built from polydisperse ensembles of structural units are advantageously described by the Mellin convolution of a representative template structure with the size distribution of the templates. Hence, the polydisperse ensemble of similar structural units is generated by superposition of dilated templates. This approach is particularly attractive considering the advantageous mathematical properties enjoyed by the Mellin convolution. Thus, average particle size, and width and skewness of the particle size distribution can be determined from scattering data without the need to model the size distributions themselves. The present theoretical treatment demonstrates that the concept is generally extensible to dilation in multiple dimensions. Moreover, in an analogous manner, a representative cluster of correlated particles (e.g. layer stacks or microfibrils) can be considered as a template on a higher level. Polydispersity of such clusters is, again, described by subjecting the template structure to the generalized Mellin convolution. The proposed theory leads to a simple pathway for the quantitative determination of polydispersity and heterogeneity parameters. Consistency with the established theoretical approach of polydispersity in scattering theory is demonstrated. The method is applied to the best advantage in the field of soft condensed matter when anisotropic nanostructured materials are to be characterized by means of small-angle scattering (SAXS, USAXS, SANS). [source]
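
    A minimal numerical sketch of the central construction, under simplified assumptions: the polydisperse profile is generated as a superposition of dilated copies of a template, i.e. the Mellin convolution h(x) = ∫ g(x/s) p(s) ds/s. The template and size distribution used here are hypothetical stand-ins.

```python
import numpy as np

def mellin_convolve(g, p, x, s):
    """Superpose dilated templates g(x/s), weighted by the size distribution p(s)."""
    ds = s[1] - s[0]
    h = np.zeros_like(x)
    for si, pi in zip(s, p):
        h += g(x / si) * pi * ds / si
    return h

g = lambda u: np.exp(-(u - 1.0) ** 2 / 0.02)   # template "structural unit" profile
x = np.linspace(0.0, 4.0, 400)
s = np.linspace(0.5, 2.0, 200)                 # dilation (size) axis
p = np.exp(-np.log(s) ** 2 / 0.08)             # size distribution of the templates
p /= np.sum(p) * (s[1] - s[0])                 # normalize to unit area

h = mellin_convolve(g, p, x, s)                # broadened, polydisperse profile
```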


    Information theoretical measures to analyze trajectories in rational molecular design

    JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 16 2007
    K. Hamacher
    Abstract We develop a new methodology to analyze molecular dynamics trajectories and other time series data from simulation runs. This methodology is based on an information measure of the difference between distributions of various data extracted from such simulations. The method is fast as it only involves the numerical integration/summation of the distributions in one dimension while avoiding sampling issues at the same time. The method is most suitable for applications in which different scenarios are to be compared, e.g. to guide rational molecular design. We show the power of the proposed method in an application of rational drug design by reduced model computations on the BH3 motif in the apoptosis-inducing BCL2 protein family. © 2007 Wiley Periodicals, Inc. J Comput Chem, 2007 [source]
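
    The specific information measure used in the paper is not reproduced here; the following sketch merely illustrates the general recipe of comparing two simulation scenarios through one-dimensional distributions of an extracted observable, using the Jensen-Shannon divergence as an assumed example measure.

```python
import numpy as np

def jensen_shannon(samples_a, samples_b, bins=50):
    """JS divergence (in nats) between histograms of a 1-D observable."""
    lo = min(samples_a.min(), samples_b.min())
    hi = max(samples_a.max(), samples_b.max())
    pa, _ = np.histogram(samples_a, bins=bins, range=(lo, hi))
    pb, _ = np.histogram(samples_b, bins=bins, range=(lo, hi))
    pa, pb = pa / pa.sum(), pb / pb.sum()
    m = 0.5 * (pa + pb)
    def kl(p, q):
        mask = p > 0                     # m > 0 wherever p > 0, so this is safe
        return np.sum(p[mask] * np.log(p[mask] / q[mask]))
    return 0.5 * kl(pa, m) + 0.5 * kl(pb, m)

rng = np.random.default_rng(1)
obs_scenario_a = rng.normal(0.0, 1.0, 10000)   # e.g. a distance observable, scenario A
obs_scenario_b = rng.normal(0.4, 1.2, 10000)   # same observable, scenario B
print(jensen_shannon(obs_scenario_a, obs_scenario_b))
```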


    The World Development Report: concepts, content and a Chapter 12

    JOURNAL OF INTERNATIONAL DEVELOPMENT, Issue 3 2001
    Robert Chambers
    The World Development Report (WDR) process set new standards for openness and consultation. Its concepts and content are a major advance on its 1990 predecessor. The intention that its concepts and content should be influenced by voices of the poor was partly fulfilled. Conceptually, the VOP findings support the multidimensional view of poverty as 'pronounced deprivation of wellbeing', and the use of income-poverty to describe what is only one dimension of poverty (though this welcome usage is not consistent throughout in the WDR). Two concepts or analytical orientations were not adopted: powerlessness and disadvantage seen as a multidimensional interlinked web; and livelihoods. On content, three areas where the influence fell short were: how the police persecute and impoverish poor people; the diversity of the poorest people; and the significance of the body as the main but vulnerable and indivisible asset of many poor people. A weakness of the WDR is its lack of critical self-awareness. Chapter 11 is self-serving for the International Financial Institutions: it lumps loans with grants as concessional finance; it makes liberal use of the term donor, but never lender; and it does not consider debt avoidance as a strategy. The Report ends abruptly, a body without a head. Its multidimensional view of poverty is not matched by a multidimensional view of power and responsibility. A Chapter 12 is crying out to be written. This would confront issues of professional, institutional and personal commitment and change. It would stress critical reflection as a professional norm, disempowerment for democratic diversity as institutional practice, and personal values, attitudes and courageous behaviour as primary and crucial if development is to be change that is good for poor people. A new conclusion is suggested for the WDR, and a title for the World Development Report 2010. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Mixing of two binary nonequilibrium phases in one dimension

    AICHE JOURNAL, Issue 8 2009
    Kjetil B. Haugen
    Abstract The mixing of nonequilibrium phases has important applications in improved oil recovery and geological CO2 storage. The rate of mixing is often controlled by diffusion and modeling requires diffusion coefficients at subsurface temperature and pressure. High-pressure diffusion coefficients are commonly inferred from changes in bulk properties as two phases equilibrate in a PVT cell. However, models relating measured quantities to diffusion coefficients usually ignore convective mass transport. This work presents a comprehensive model of mixing of two nonequilibrium binary phases in one dimension. Mass transport due to bulk velocity triggered by compressibility and nonideality is taken into account. Ignoring this phenomenon violates local mass balance and does not allow for changes in phase volumes. Simulations of two PVT cell experiments show that models ignoring bulk velocity may significantly overestimate the diffusion coefficients. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]


    Utility transversality: a value-based approach

    JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 5-6 2005
    James E. Matheson
    Abstract We examine multiattribute decision problems where a value function is specified over the attributes of a decision problem, as is typically done in the deterministic phase of a decision analysis. When uncertainty is present, a utility function is assigned over the value function to represent the decision maker's risk attitude towards value, which we refer to as a value-based approach. A fundamental result of using the value-based approach is a closed form expression that relates the risk aversion functions of the individual attributes to the trade-off functions between them. We call this relation utility transversality. The utility transversality relation asserts that once the value function is specified there is only one dimension of risk attitude in multiattribute decision problems. The construction of multiattribute utility functions using the value-based approach provides the flexibility to model more general functional forms that do not require assumptions of utility independence. For example, we derive a new family of multiattribute utility functions that describes richer preference structures than the usual multilinear family. We also show that many classical results of utility theory, such as risk sharing and the notion of a corporate risk tolerance, can be derived simply from the utility transversality relations by appropriate choice of the value function. Copyright © 2007 John Wiley & Sons, Ltd. [source]
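
    A short worked illustration of the utility transversality idea, assuming a twice-differentiable value function v and a utility function u over value: writing U(x) = u(v(x_1, ..., x_n)) and applying the chain rule shows that every attribute's Arrow-Pratt risk-aversion function is pinned down by the trade-off structure of v together with a single risk-attitude function over value.

```latex
% Sketch: chain rule for U(x) = u(v(x_1,...,x_n)); v_i and v_{ii} denote the
% first and second partial derivatives of v with respect to x_i.
\[
  -\frac{\partial^2 U/\partial x_i^2}{\partial U/\partial x_i}
  = -\frac{u''(v)\,v_i^2 + u'(v)\,v_{ii}}{u'(v)\,v_i}
  = r_u(v)\,v_i - \frac{v_{ii}}{v_i},
  \qquad r_u(v) = -\frac{u''(v)}{u'(v)} .
\]
```

    Once v is specified, the only remaining degree of freedom on the right-hand side is the single function r_u, which is the sense in which one dimension of risk attitude remains.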


    Conceptual boundaries and distances: Students' and experts' concepts of the scale of scientific phenomena

    JOURNAL OF RESEARCH IN SCIENCE TEACHING, Issue 3 2006
    Thomas R. Tretter
    To reduce curricular fragmentation in science education, reform recommendations include using common, unifying themes such as scaling to enhance curricular coherence. This study involved 215 participants from five groups (grades 5, 7, 9, and 12, and doctoral students), who completed written assessments and card sort tasks related to their conceptions of size and scale, and then completed individual interviews. Results triangulated from the data sources revealed the boundaries between and characteristics of scale size ranges that are well distinguished from each other for each group. Results indicate that relative size information was more readily understood than exact size, and significant size landmarks were used to anchor this relational web of scales. The nature of past experiences, situated along two dimensions (from visual to kinesthetic in one dimension, and wholistic to sequential in the other), was shown to be key to scale cognition development. Commonalities and differences between the groups are highlighted and discussed. © 2006 Wiley Periodicals, Inc. J Res Sci Teach 43: 282-319, 2006 [source]


    Comprehensive 2-D chromatography of random and block methacrylate copolymers

    JOURNAL OF SEPARATION SCIENCE, JSS, Issue 10 2010
    Monique van Hulst
    Abstract A comprehensive 2-D separation method was developed for the characterization of methacrylate copolymers. In both dimensions conditions were employed that give a critical separation for the homopolymer of one of the monomers in the copolymer, and exclusion behaviour for the other. The 2-D separation was realized by using a normal-phase column in one dimension and a reversed-phase column in the other, and by precisely tuning the compositions of the two mobile phases employed. In the normal-phase dimension mixtures of THF and n-hexane or n-heptane were used as mobile phase, and in the reversed-phase dimension mixtures of ACN and THF. Moreover, stationary phase particles had to be selected for both columns that gave an exclusion window appropriate for the molecular size of the sample polymers to be characterized. The 2-D critical chromatography principle was tested with a polystyrene (PS)-polymethylmethacrylate (PMMA) block copolymer and with block and random polybutylmethacrylate (PBMA)-PMMA copolymers. Ideally, the retention time for a copolymer in both dimensions of this system would depend on the size of only one of the blocks, or on the contribution of only one of the monomers to the size of a random copolymer. However, it was found that the elution of the PS-PMMA block copolymer depended on the size of both blocks, even when the corresponding homopolymer of one of the monomers showed critical elution behaviour. Therefore, the method could not be calibrated for block sizes by using homopolymer standards alone. Still, it was shown that the method can be used to determine differences between samples (PS-PMMA and PBMA-PMMA) with respect to total molecular size or block sizes separately, or to average size and chemical composition for random copolymers. Block and random PBMA-PMMA copolymers showed a distinctly different pattern in the 2-D plots obtained with 2-D critical chromatography. This difference was shown to be related to the different procedures followed in the polymerization process, and the different molecular distributions resulting from these. [source]


    Confocal full-field X-ray microscope for novel three-dimensional X-ray imaging

    JOURNAL OF SYNCHROTRON RADIATION, Issue 5 2009
    Akihisa Takeuchi
    A confocal full-field X-ray microscope has been developed for use as a novel three-dimensional X-ray imaging method. The system consists of an X-ray illuminating `sheet-beam' whose beam shape is micrified only in one dimension, and an X-ray full-field microscope whose optical axis is normal to the illuminating sheet beam. An arbitrary cross-sectional region of the object is irradiated by the sheet beam, and secondary X-ray emission such as fluorescent X-rays from this region is imaged simultaneously using the full-field microscope. This system enables a virtual sliced image of a specimen to be obtained as a two-dimensional magnified image, and three-dimensional observation is available simply by a linear translation of the object along the optical axis of the full-field microscope. A feasibility test has been carried out at beamline 37XU of SPring-8. Observation of the three-dimensional distribution of metallic inclusions in an artificial diamond was performed. [source]


    Validation of a composite score for clinical severity of hemophilia

    JOURNAL OF THROMBOSIS AND HAEMOSTASIS, Issue 7 2008
    S. SCHULMAN
    Summary. Introduction: Evaluation of modulators of the phenotypic expression of hemophilia may benefit from a scoring system that reflects several aspects of the clinical severity instead of only one dimension. Methods: We describe here how we constructed a composite Hemophilia Severity Score (HSS) and performed validation. The items in the HSS are annual incidence of joint bleeds, World Federation of Hemophilia Orthopedic joint score, and annual factor consumption. The latter two were adjusted for age at start of prophylaxis and body weight. Data for 100 adolescent or adult patients with hemophilia A or B in the mild, moderate or severe form without inhibitors were collected for the 1990-1999 period. We evaluated the reliability (multidimensional property, test-retest) and validity (content, convergent, discriminant and known groups) of the score. Results: The HSS ranged from 0 to 0.94 and was higher in severe hemophilia A than severe hemophilia B (median 0.50 and 0.24; P = 0.031). The validation indicated that the HSS is reliable and reflective of the clinical severity of hemophilia. The presence of factor V G1691A or prothrombin G20210A polymorphisms was found in 13 patients. The clinical severity, measured as the HSS or each of the three components, appeared to be modified by prothrombin G20210A but not by FV G1691A. Conclusion: The HSS is a well-defined tool that provides a comprehensive representation of the clinical severity of hemophilia in adults. It would be useful in larger studies on the assessment of modulators of the phenotypic expression of hemophilia. [source]


    A kinetic scheme for the Savage-Hutter equations

    MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 16 2008
    Christine Kaland
    Abstract The Savage-Hutter (SH) equations describe the motion of granular material under the influence of friction. Based on the kinetic formulation of the SH equations, we present a kinetic scheme in one dimension, which describes the deformation of the mass profile and allows it to start and to stop. Moreover, the method is able to preserve the steady states of granular masses at rest. The method is tested on several numerical examples. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Local existence for the one-dimensional Vlasov-Poisson system with infinite mass

    MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 5 2007
    Stephen Pankavich
    Abstract A collisionless plasma is modelled by the Vlasov-Poisson system in one dimension. We consider the situation in which mobile negative ions balance a fixed background of positive charge, which is independent of space and time, as |x| → ∞. Thus, the total positive charge and the total negative charge are both infinite. Smooth solutions with appropriate asymptotic behaviour are shown to exist locally in time, and criteria for the continuation of these solutions are established. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Asymptotic behaviour for a non-monotone fluid in one dimension: the positive temperature case

    MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 8 2001
    B. Ducomet
    We consider a one-dimensional continuous model of neutron star, described by a compressible Navier-Stokes system with a non-monotone equation of state, due to the effective Skyrme nuclear interaction between particles. We study the asymptotic behaviour of globally defined solutions of a mixed free boundary problem for our model, for large time, assuming that a sufficient thermal dissipation is present. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Gene position in a long operon governs motility development in Bacillus subtilis

    MOLECULAR MICROBIOLOGY, Issue 2 2010
    Loralyn M. Cozy
    Summary Growing cultures of Bacillus subtilis bifurcate into subpopulations of motile individuals and non-motile chains of cells that are differentiated at the level of gene expression. The motile cells are ON and the chaining cells are OFF for transcription that depends on RNA polymerase and the alternative sigma factor σD. Here we show that chaining cells were OFF for σD-dependent gene expression because σD levels fell below a threshold and σD activity was inhibited by the anti-sigma factor FlgM. The probability that σD exceeded the threshold was governed by the position of the sigD gene. The proportion of ON cells increased when sigD was artificially moved forward in the 27 kb fla/che operon. In addition, we identified a new σD-dependent promoter that increases sigD expression and may provide positive feedback to stabilize the ON state. Finally, we demonstrate that ON/OFF motility states in B. subtilis are a form of development because mosaics of stable and differentiated epigenotypes were evident when the normally dispersed bacteria were forced to grow in one dimension. [source]


    Spawning and merging of Fourier modes and phase coupling in the cosmological density bispectrum

    MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2004
    Lung-Yih Chiang
    ABSTRACT In the standard picture of cosmological structure formation, initially random-phase fluctuations are amplified by non-linear gravitational instability to produce a final distribution of mass that is highly non-Gaussian and has highly coupled Fourier phases. We use the Zel'dovich approximation in one dimension to elucidate the onset of non-linearity, including mode spawning, merging and coupling. We show that, as gravitational clustering proceeds, Fourier modes are spawned from parent ones, with their phases following a harmonic relationship with the wavenumbers. Spawned modes could also merge, leading to modulation of the amplitudes and phases, which consequently breaks such a harmonic relation. We also use simple toy models to demonstrate that the bispectrum, the Fourier transform of connected three-point correlation functions, measures phase coupling at most at second order only when the special wavenumber-phase harmonic relation holds. Phase information is therefore partly registered in the bispectrum, and it takes a complete hierarchy of polyspectra to characterize fully gravitational non-linearity. [source]
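
    A minimal toy sketch of the one-dimensional setting (an illustration under assumed parameters, not the authors' analysis): particles displaced by a single Zel'dovich mode produce a density field whose Fourier transform carries spawned harmonics of the parent wavenumber, whose amplitudes and phases can then be inspected.

```python
import numpy as np

n_part, boxsize = 2 ** 16, 1.0
q = np.linspace(0.0, boxsize, n_part, endpoint=False)   # unperturbed (Lagrangian) positions
k1 = 2.0 * np.pi * 4 / boxsize                           # parent wavenumber: 4 cycles per box
D, A = 0.8, 0.02                                         # growth factor, displacement amplitude
x = (q + D * A * np.sin(k1 * q)) % boxsize               # 1-D Zel'dovich displacement

# Density contrast on a grid by simple nearest-grid-point mass assignment
n_grid = 512
counts, _ = np.histogram(x, bins=n_grid, range=(0.0, boxsize))
delta = counts / counts.mean() - 1.0

dk = np.fft.rfft(delta)
for n in (1, 2, 3):                                      # parent mode and first spawned harmonics
    mode = dk[4 * n]
    print(f"wavenumber {n}*k1: |delta_k| = {abs(mode):.4f}, phase = {np.angle(mode):+.3f}")
```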


    Relating the Central and the Local

    NONPROFIT MANAGEMENT & LEADERSHIP, Issue 4 2000
    Marilyn Taylor
    Although a number of valuable models of central-local relationships in the nonprofit sector have been developed, particularly in relation to federal structures, there has been a tendency to assume that in any given organizational relationship central-local structures will follow one common pattern. We argue that wider strategies are available: central dependency along one dimension may run with greater local autonomy along another. Such mixed tight-loose structures may be of considerable importance in the "boundaryless" organizational environment of the future. [source]


    A mathematical and statistical framework for modelling dispersal

    OIKOS, Issue 6 2007
    Tord Snäll
    Mechanistic and phenomenological dispersal modelling of organisms has long been an area of intensive research. Recently, there has been an increased interest in intermediate models between the two. Intermediate models include major mechanisms that affect dispersal, in addition to the dispersal curve of a phenomenological model. Here we review and describe the mathematical and statistical framework for phenomenological dispersal modelling. In the mathematical development we describe modelling of dispersal in two dimensions from a point source, and in one dimension from a line or area source. In the statistical development we describe applicable observation distributions, and the procedures of model fitting, comparison, checking, and prediction. The procedures are also demonstrated using data from dispersal experiments. The data are hierarchically structured, and hence, we fit hierarchical models. The Bayesian modelling approach is applied, which allows us to show the uncertainty in the parameter estimates and in predictions. Finally, we show how to account for the effect of wind speed on the estimates of the dispersal parameters. This serves as an example of how to strengthen the coupling in the modelling between the phenomenon observed in an experiment and the underlying process, something that should be striven for in the statistical modelling of dispersal. [source]
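
    As a toy illustration of the two-dimensional point-source case (a simple maximum-likelihood sketch under an assumed exponential kernel, not the hierarchical Bayesian treatment of the paper): for a radially symmetric kernel f(r) = exp(-r/alpha)/(2*pi*alpha^2), observed dispersal distances follow p(r) = r*exp(-r/alpha)/alpha^2, a Gamma(2, alpha) density, and alpha is estimated as half the mean observed distance.

```python
import numpy as np

rng = np.random.default_rng(42)
alpha_true = 30.0                                       # dispersal scale (e.g. metres), assumed
r = rng.gamma(shape=2.0, scale=alpha_true, size=500)    # synthetic distances, p(r) = Gamma(2, alpha)

alpha_hat = r.mean() / 2.0                              # MLE for the exponential kernel scale
log_lik = np.sum(np.log(r) - 2.0 * np.log(alpha_hat) - r / alpha_hat)
print(f"alpha_hat = {alpha_hat:.2f} (true {alpha_true}), log-likelihood = {log_lik:.1f}")
```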