Measurement Process (measurement + process)

Selected Abstracts


Implementing Evaluation of the Measurement Process in an Automotive Manufacturer: a Case Study

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 5 2003
Graeme Knowles
Abstract Reducing process variability is presently an area of much interest in manufacturing organizations. Programmes such as Six Sigma robustly link the financial performance of the organization to the degree of variability present in its processes and products. Data, and hence measurement processes, play an important part in driving such programmes and in making key manufacturing decisions. In many organizations, however, little thought is given to the quality of the data generated by such measurement processes. Using potentially flawed data to make fundamental manufacturing decisions undermines the quality of the decision-making process and potentially incurs significant costs. Research in this area is sparse and has concentrated on the technicalities of the methodologies available for assessing measurement process capability; little work has been done on how to operationalize such activities for maximum benefit. From the perspective of one automotive company, this paper briefly reviews the approaches presently available to assess the quality of data and develops a practical approach, based on an existing technical methodology, that incorporates simple continuous improvement tools within a framework facilitating appropriate improvement actions for each process assessed. A case study demonstrates the framework and shows it to be sound, generalizable and highly supportive of continuous improvement goals. Copyright © 2003 John Wiley & Sons, Ltd. [source]
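
The capability assessment the authors build on is, in essence, a gauge repeatability and reproducibility (R&R) study. As a hedged illustration of the underlying calculation — a generic ANOVA-based sketch, not the authors' specific framework — the following estimates repeatability, reproducibility, and %GRR from a crossed parts × operators study; the data and all names are hypothetical.

```python
import numpy as np

# Hypothetical crossed gauge study: p parts, o operators, r repeat trials.
# x[i, j, k] = k-th measurement of part i by operator j.
rng = np.random.default_rng(0)
p, o, r = 10, 3, 2
true_part = rng.normal(0, 2.0, size=(p, 1, 1))   # part-to-part variation
op_bias   = rng.normal(0, 0.3, size=(1, o, 1))   # reproducibility
x = 50 + true_part + op_bias + rng.normal(0, 0.5, size=(p, o, r))  # repeatability

grand = x.mean()
part_m = x.mean(axis=(1, 2))          # per-part means
op_m   = x.mean(axis=(0, 2))          # per-operator means
cell_m = x.mean(axis=2)               # part x operator cell means

# Two-way ANOVA sums of squares (crossed design with interaction)
ss_p  = o * r * ((part_m - grand) ** 2).sum()
ss_o  = p * r * ((op_m - grand) ** 2).sum()
ss_po = r * ((cell_m - part_m[:, None] - op_m[None, :] + grand) ** 2).sum()
ss_e  = ((x - cell_m[:, :, None]) ** 2).sum()

ms_p, ms_o = ss_p / (p - 1), ss_o / (o - 1)
ms_po = ss_po / ((p - 1) * (o - 1))
ms_e  = ss_e / (p * o * (r - 1))

# Variance components (clipped at zero), per the usual ANOVA gauge R&R method
var_repeat = ms_e
var_po     = max(0.0, (ms_po - ms_e) / r)
var_oper   = max(0.0, (ms_o - ms_po) / (p * r))
var_part   = max(0.0, (ms_p - ms_po) / (o * r))

var_grr   = var_repeat + var_oper + var_po
var_total = var_grr + var_part
print(f"%GRR = {100 * np.sqrt(var_grr / var_total):.1f}% of total variation")
```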


Standard-Setting Methods as Measurement Processes

EDUCATIONAL MEASUREMENT: ISSUES AND PRACTICE, Issue 1 2010
Paul Nichols
Some writers in the measurement literature have been skeptical of the meaningfulness of achievement standards and described the standard-setting process as blatantly arbitrary. We argue that standard setting is more appropriately conceived of as a measurement process similar to student assessment. The construct being measured is the panelists' representation of student performance at the threshold of an achievement level. In the first section of this paper, we argue that standard setting is an example of stimulus-centered measurement. In the second section, we elaborate on this idea by comparing some popular standard-setting methods to the stimulus-centered scaling methods known as psychophysical scaling. In the third section, we use the lens of standard setting as a measurement process to take a fresh look at the two criticisms of standard setting: the role of judgment and the variability of results. In the fourth section, we offer a vision of standard-setting research and practice as grounded in the theory and practice of educational measurement. [source]


Assessing sources of variability in measurement of ambient particulate matter

ENVIRONMETRICS, Issue 6 2001
Michael J. Daniels
Abstract Particulate matter (PM), a component of ambient air pollution, has been the subject of United States Environmental Protection Agency regulation in part due to many epidemiological studies examining its connection with health. A better understanding of the PM measurement process and its dependence on location, time, and other factors is important both for refining regulations and for understanding PM's effects on health. In light of this, we explore in this paper sources of variability in measuring PM, including spatial, temporal and meteorological effects. In addition, we assess the degree to which there is heterogeneity in the variability of the micro-scale processes, which may suggest important unmeasured processes, and the degree to which there is unexplained heterogeneity in space and time. We use Bayesian hierarchical models and restrict attention to the greater Pittsburgh (USA) area in 1996. The analyses indicated no spatial dependence after accounting for other sources of variability, and also indicated heterogeneity in the variability of the micro-scale processes over time and space. Weather and temporal effects were very important, and there was substantial heterogeneity in these effects across sites. Copyright © 2001 John Wiley & Sons, Ltd. [source]
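
The abstract does not give the model, but a Bayesian hierarchical specification of the kind described might take the following form (all notation assumed): a site effect, a temporal trend, and meteorological covariates account for the main sources of variability, while a hierarchical model on the log residual variance lets the variability of the micro-scale process differ over space and time — the heterogeneity the authors test for.

```latex
% Sketch of a plausible hierarchical model (assumed, not from the paper);
% y_{st} is (log) PM at site s on day t, w_{st} are meteorological covariates.
\begin{align*}
y_{st} &= \mu + \alpha_s + f(t) + \boldsymbol{\beta}^{\top}\mathbf{w}_{st} + \varepsilon_{st},
  & \varepsilon_{st} &\sim N\!\big(0,\, \sigma^{2}_{st}\big),\\
\alpha_s &\sim N\!\big(0,\, \tau^{2}\big),
  & \log \sigma^{2}_{st} &\sim N\!\big(\eta_0 + u_s + v_t,\, \omega^{2}\big).
\end{align*}
```

In this sketch, negligible residual spatial structure after adjusting for f(t) and w_st corresponds to the reported absence of spatial dependence, while non-zero u_s and v_t correspond to heterogeneity in micro-scale variability over space and time.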


Influence of moisture content on measurement accuracy of porous media thermal conductivity

HEAT TRANSFER - ASIAN RESEARCH (FORMERLY HEAT TRANSFER-JAPANESE RESEARCH), Issue 8 2009
Mingzhi Yu
Abstract The thermal conductivity measurement accuracy of sand was experimentally studied with a hot disk thermal constant analyzer, and water morphologies, distribution, and evolution at the pore scale were observed with a charge-coupled device (CCD) camera combined with a microscope. It was found that the thermal conductivities of samples with low moisture content (<25%) could not be accurately measured. For such samples, the analysis showed that the water in the region adjacent to the analyzer sensor mainly existed as isolated liquid bridges between or among sand particles and, being heated by the sensor during measurement, would evaporate and diffuse to relatively distant regions. This evaporation and diffusion caused the sample constitution in the region adjacent to the sensor to vary throughout the measurement process, and accordingly lowered the accuracy of the obtained thermal conductivities. Owing to high water connectivity in the pores, water evaporation and diffusion in porous media of high moisture content were relatively slow compared with the low-moisture case; meanwhile, water in the relatively distant regions flowed back to the region adjacent to the sensor under capillary force. The sample constitution in the region adjacent to the sensor therefore remained constant, and the thermal conductivities of porous media with relatively high moisture content could be measured with high accuracy. © 2009 Wiley Periodicals, Inc. Heat Trans Asian Res; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/htj.20272 [source]


The effect of viscosity on surface tension measurements by the drop weight method

JOURNAL OF APPLIED POLYMER SCIENCE, Issue 3 2007
T. Kaully
Abstract Viscosity is one of the parameters affecting measured surface tension, since fluid mechanics affects the measurement process in conventional methods. Several methods, including the selected planes method (SPM) and WDSM, which combines the weight drop method (WDM) with SPM, are applied to surface tension measurement of highly viscous liquids; yet none of them treats the viscosity effect separately. The current publication presents a simple, easy-to-apply empirical approach of satisfactory accuracy for evaluating the surface tension of liquids over a wide range of viscosities, up to 10 Pa s. The proposed method is based on Tate's law and the "drop weight" method, using calibration curves of known liquids having similar surface tensions but different viscosities. The drop weight of liquids with viscosity ≥0.05 Pa s was found to be significantly affected by the liquid viscosity. The shape factor, f, of high-viscosity liquids was found to correlate linearly with the logarithm of viscosity, pointing to the importance of a viscosity correction. The experimental correlation presented in the current work can be used as a tool for evaluating the surface tension of high-viscosity liquids such as prepolymers. © 2007 Wiley Periodicals, Inc. J Appl Polym Sci, 2007 [source]
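
For context, Tate's law relates drop weight to surface tension through a shape (Harkins–Brown) factor f; the reported linear dependence of f on the logarithm of viscosity suggests a correction of roughly the following form (a sketch with assumed notation — a and b are calibration constants fitted from the known-liquid curves):

```latex
% Tate's law with shape factor f, and the viscosity correction implied by
% the reported linear correlation (a, b fitted from calibration liquids).
\begin{align*}
m g &= 2\pi r \gamma \, f, \\
f &\approx a + b \log_{10}\mu \qquad (\mu \ge 0.05\ \mathrm{Pa\,s}),
\end{align*}
```

where m is the drop mass, r the tip radius, γ the surface tension, and μ the viscosity.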


Statistical Process Control Charts for Measuring and Monitoring Temporal Consistency of Ratings

JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 1 2010
M. Hafidz Omar
Methods of statistical process control were briefly investigated in the field of educational measurement as early as 1999. However, only the use of a cumulative sum chart was explored. In this article other methods of statistical quality control are introduced and explored. In particular, methods in the form of Shewhart mean and standard deviation charts are introduced as techniques for ensuring quality in a measurement process for rating performance items in operational assessments. Several strengths and weaknesses of the procedures are explored with illustrative real and simulated rating data. Further research directions are also suggested. [source]
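
As a hedged sketch of the technique the article introduces — Shewhart mean (X-bar) and standard deviation (S) charts applied to rater subgroups, here with hypothetical data rather than the authors' — the following computes the usual control limits from a baseline period and flags out-of-control rating days:

```python
import numpy as np

# Hypothetical rating data: performance items scored by rater subgroups of
# size n each day. Shewhart X-bar and S charts, with limits set from a
# baseline period, flag days on which the rating process drifts.
# c4, A3, B3, B4 are the standard control-chart factors for subgroup size n = 5.
rng = np.random.default_rng(1)
days, n, baseline = 30, 5, 20
scores = rng.normal(3.0, 0.6, size=(days, n))   # ratings on, say, a 1-6 rubric
scores[baseline:] += 1.0                        # hypothetical drift in severity

xbar = scores.mean(axis=1)
s = scores.std(axis=1, ddof=1)
xbar_bar, s_bar = xbar[:baseline].mean(), s[:baseline].mean()

c4 = 0.9400                                     # tabulated for n = 5
a3 = 3 / (c4 * np.sqrt(n))
b3 = max(0.0, 1 - 3 * np.sqrt(1 - c4**2) / c4)
b4 = 1 + 3 * np.sqrt(1 - c4**2) / c4

ucl_x, lcl_x = xbar_bar + a3 * s_bar, xbar_bar - a3 * s_bar
ucl_s, lcl_s = b4 * s_bar, b3 * s_bar

for t in range(days):
    if not (lcl_x <= xbar[t] <= ucl_x and lcl_s <= s[t] <= ucl_s):
        print(f"day {t}: mean={xbar[t]:.2f}, sd={s[t]:.2f} -> out of control")
```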


Prism coupling characterization of planar optical waveguides made by silver ion exchange in glass

PHYSICA STATUS SOLIDI (C) - CURRENT TOPICS IN SOLID STATE PHYSICS, Issue 10 2005
O. Hidalgo
Abstract A modified dark-lines method of the prism-coupling technique is used for the experimental determination of the effective indices of the propagating modes in a planar glass waveguide. The waveguides were made by silver–sodium ion exchange in a nitrate solution, with soda-lime glass (microscope slides) as the substrate. The measurements were accomplished by direct HeNe laser beam incidence, sensing the reflected light with a Thorlabs Dec110 optical detector linked to a Protek500 digital multimeter. A LabVIEW virtual instrument was implemented to automate the measurement process. The measured effective indices were used to calculate the refractive-index profile by the IWKB method. A comparison with other results shows that our experimental setup is suitable for the characterization of slab waveguide modes. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
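
For reference, the dark-lines measurement rests on the standard prism-coupling relation (textbook form, not quoted from the paper): the external incidence angle θ_m at which the m-th dark line appears yields that mode's effective index, for prism base angle A and prism index n_p.

```latex
\[
N_m \;=\; n_p \,\sin\!\Big( A + \arcsin\frac{\sin\theta_m}{n_p} \Big)
\]
```

The resulting set of effective indices {N_m} is then the input to the IWKB reconstruction of the refractive-index profile.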


Minimum capital requirement calculations for UK futures

THE JOURNAL OF FUTURES MARKETS, Issue 2 2004
John Cotter
Key to the imposition of appropriate minimum capital requirements on a daily basis is accurate volatility estimation. Here, measures are presented based on discrete estimation of aggregated high-frequency UK futures realizations underpinned by a continuous time framework. Squared and absolute returns are incorporated into the measurement process so as to rely on the quadratic variation of a diffusion process and be robust in the presence of fat tails. The realized volatility estimates incorporate the long memory property. The dynamics of the volatility variable are adequately captured. Resulting rescaled returns are applied to minimum capital requirement calculations. © 2004 Wiley Periodicals, Inc. Jrl Fut Mark 24:193–220, 2004 [source]
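
A minimal sketch of the volatility measures involved (illustrative data and parameter choices, not the paper's series or method details): realized variance sums squared intraday returns, the absolute-return analogue is more robust to fat tails, and a normal quantile of the resulting daily volatility gives a VaR-style minimum capital figure.

```python
import numpy as np

# Hypothetical 5-minute futures returns over one trading day. Realized
# variance sums squared intraday returns (the quadratic variation of the
# underlying diffusion); realized absolute variation, built from |r|, is
# more robust to fat tails. All numbers below are illustrative.
rng = np.random.default_rng(2)
n_intraday = 78                                    # 5-min bars in a 6.5 h session
r = rng.standard_t(df=4, size=n_intraday) * 1e-3   # fat-tailed returns

realized_var = np.sum(r ** 2)                      # sum of squared returns
realized_vol = np.sqrt(realized_var)               # daily sigma estimate
abs_vol = np.sqrt(np.pi / 2) * np.mean(np.abs(r)) * np.sqrt(n_intraday)

position = 1_000_000                               # notional exposure
z99 = 2.326                                        # one-sided 99% normal quantile
capital_req = z99 * realized_vol * position        # VaR-style capital sketch
print(f"RV vol {realized_vol:.4%}, abs-variation vol {abs_vol:.4%}, "
      f"capital requirement {capital_req:,.0f}")
```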


Implicit Value Judgments in the Measurement of Health Inequalities

THE MILBANK QUARTERLY, Issue 1 2010
SAM HARPER
Context: Quantitative estimates of the magnitude, direction, and rate of change of health inequalities play a crucial role in creating and assessing policies aimed at eliminating the disproportionate burden of disease in disadvantaged populations. It is generally assumed that the measurement of health inequalities is a value-neutral process, providing objective data that are then interpreted using normative judgments about whether a particular distribution of health is just, fair, or socially acceptable. Methods: We discuss five examples in which normative judgments play a role in the measurement process itself, through either the selection of one measurement strategy to the exclusion of others or the selection of the type, significance, or weight assigned to the variables being measured. Findings: Overall, we find that many commonly used measures of inequality are value laden and that the normative judgments implicit in these measures have important consequences for interpreting and responding to health inequalities. Conclusions: Because values implicit in the generation of health inequality measures may lead to radically different interpretations of the same underlying data, we urge researchers to explicitly consider and transparently discuss the normative judgments underlying their measures. We also urge policymakers and other consumers of health inequalities data to pay close attention to the measures on which they base their assessments of current and future health policies. [source]
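
One of the judgments the authors discuss — measuring inequality on a relative versus an absolute scale — can be made concrete with a small hypothetical example in which the two measures point in opposite directions:

```python
# A minimal numeric illustration (not from the paper) of one normative
# choice: relative vs. absolute inequality. Hypothetical mortality rates
# per 100,000 for a disadvantaged and an advantaged group, before and
# after an intervention that lowers rates in both groups.
before = {"disadvantaged": 200.0, "advantaged": 100.0}
after  = {"disadvantaged": 60.0,  "advantaged": 20.0}

for label, rates in (("before", before), ("after", after)):
    ratio = rates["disadvantaged"] / rates["advantaged"]   # relative gap
    diff = rates["disadvantaged"] - rates["advantaged"]    # absolute gap
    print(f"{label}: rate ratio {ratio:.1f}, rate difference {diff:.0f}")

# before: ratio 2.0, difference 100 -> after: ratio 3.0, difference 40.
# The relative measure says inequality worsened; the absolute measure says
# it improved. Choosing between them is a normative act, not a technical one.
```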


Misclassification rates, critical values and size of the design in measurement systems capability studies

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 5 2009
D. Zappa
Abstract Measurement systems capability analysis aims to test whether the variability of a measurement system is small relative to the variability of the monitored process. Open questions remain, however, concerning both the interpretation of the critical values of the indices typically used by practitioners to assess the capability of a gauge, and the choice of the size of the experimental design for testing the repeatability and reproducibility of the measurement process. In this paper, starting from the misclassification rates of a measurement system, we present a solution to these issues. Copyright © 2009 John Wiley & Sons, Ltd. [source]
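
A Monte Carlo sketch of the two misclassification rates of a measurement system (an assumed normal set-up for illustration, not the authors' analytical treatment): a conforming item measured outside the specification limits is falsely rejected, and a nonconforming item measured inside them is falsely accepted.

```python
import numpy as np

# Assumed model: true item values and gauge error are both normal; the
# gauge's added variability causes items near the specification limits
# to be misclassified in both directions.
rng = np.random.default_rng(3)
n = 1_000_000
lsl, usl = 9.0, 11.0                               # specification limits
sigma_process, sigma_gauge = 0.35, 0.10

true = rng.normal(10.0, sigma_process, n)          # true item values
measured = true + rng.normal(0.0, sigma_gauge, n)  # gauge adds its own error

conforming = (lsl <= true) & (true <= usl)
accepted = (lsl <= measured) & (measured <= usl)

false_reject = np.mean(conforming & ~accepted)     # good items rejected
false_accept = np.mean(~conforming & accepted)     # bad items accepted
print(f"false reject {false_reject:.4%}, false accept {false_accept:.4%}")
```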


Mixed-Effect Hybrid Models for Longitudinal Data with Nonignorable Dropout

BIOMETRICS, Issue 2 2009
Ying Yuan
Summary Selection models and pattern-mixture models are often used to deal with nonignorable dropout in longitudinal studies. These two classes of models are based on different factorizations of the joint distribution of the outcome process and the dropout process. We consider a new class of models, called mixed-effect hybrid models (MEHMs), where the joint distribution of the outcome process and dropout process is factorized into the marginal distribution of random effects, the dropout process conditional on random effects, and the outcome process conditional on dropout patterns and random effects. MEHMs combine features of selection models and pattern-mixture models: they directly model the missingness process as in selection models, and enjoy the computational simplicity of pattern-mixture models. The MEHM provides a generalization of shared-parameter models (SPMs) by relaxing the conditional independence assumption between the measurement process and the dropout process given random effects. Because SPMs are nested within MEHMs, likelihood ratio tests can be constructed to evaluate the conditional independence assumption of SPMs. We use data from a pediatric AIDS clinical trial to illustrate the models. [source]
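
The factorizations named in the summary can be written side by side (notation assumed: y the outcome process, r the dropout process, b the random effects):

```latex
\begin{align*}
\text{selection model:}       \quad & f(y)\, f(r \mid y), \\
\text{pattern-mixture model:} \quad & f(r)\, f(y \mid r), \\
\text{MEHM:}                  \quad & f(b)\, f(r \mid b)\, f(y \mid r, b), \\
\text{SPM (nested case):}     \quad & f(b)\, f(r \mid b)\, f(y \mid b).
\end{align*}
```

The SPM is the special case in which y and r are conditionally independent given b, so that f(y | r, b) = f(y | b); because the SPM is nested within the MEHM, a likelihood ratio test of that assumption is available.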


Workers' compensation in Canada: a case for greater public accountability

CANADIAN PUBLIC ADMINISTRATION/ADMINISTRATION PUBLIQUE DU CANADA, Issue 1 2000
Therese Jennissen
According to the authors, workers' compensation policy in Canada should be more accountable to elected governments. The changing nature of occupational risks has created a range of workplace injuries against which current workers' compensation programs do not adequately insure. The existence of workers' compensation alongside the other components of the social-safety net may have created significant numbers of individuals who are either not receiving compensation when they should be or are receiving compensation when they should not be. The implication is that other programs bear some of the costs that should be borne by workers' compensation and, conversely, that some of the costs borne by workers' compensation should be borne by other social programs. These "gaps and overlaps" indicate that workers' compensation should be better integrated with the rest of the programs that make up the Canadian social-safety net. The article concludes with a menu of reforms, including the establishment, through legislation, of a formal reporting relationship; changes to the composition and size of governance structures; the introduction of strategic planning; and the establishment of performance measurement processes. [source]