Practical Situations


Selected Abstracts


A versatile software tool for the numerical simulation of fluid flow and heat transfer in simple geometries

COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 1 2010
A. M. G. Lopes
Abstract The present work describes a software tool aimed at the simulation of fluid flow and heat transfer for two-dimensional problems in a structured Cartesian grid. The software deals with laminar and turbulent situations in steady-state or transient regime. An overview is given on the theoretical principles and on the utilization of the program. Results for some test cases are presented and compared with benchmarking solutions. Although EasyCFD is mainly oriented for educational purposes, it may be a valuable tool for a first analysis of practical situations. EasyCFD is available at www.easycfd.net. © 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ 18: 14–27, 2010; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20230 [source]
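As a rough illustration of the kind of structured-grid computation such a tool performs, the sketch below solves steady two-dimensional heat conduction on a Cartesian grid by Jacobi iteration. It is a minimal example with assumed boundary temperatures and grid size, not EasyCFD's actual solver (which also handles convection and turbulence).

```python
# A minimal sketch (not EasyCFD itself): steady 2-D heat conduction on a
# structured Cartesian grid, solved by Jacobi iteration of the Laplace equation.
# Boundary temperatures, grid size, and tolerance are illustrative assumptions.
import numpy as np

nx, ny = 50, 50                      # grid nodes in x and y
T = np.zeros((ny, nx))               # temperature field, initially 0 everywhere
T[0, :] = 100.0                      # hot top wall; other walls held at 0

for _ in range(5000):                # fixed iteration budget for the sketch
    T_old = T.copy()
    # five-point stencil: each interior node relaxes to the average of its neighbours
    T[1:-1, 1:-1] = 0.25 * (T_old[2:, 1:-1] + T_old[:-2, 1:-1] +
                            T_old[1:-1, 2:] + T_old[1:-1, :-2])
    if np.max(np.abs(T - T_old)) < 1e-5:   # simple convergence test
        break

print("centre temperature:", T[ny // 2, nx // 2])
```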


Semiparametric variance-component models for linkage and association analyses of censored trait data

GENETIC EPIDEMIOLOGY, Issue 7 2006
G. Diao
Abstract Variance-component (VC) models are widely used for linkage and association mapping of quantitative trait loci in general human pedigrees. Traditional VC methods assume that the trait values within a family follow a multivariate normal distribution and are fully observed. These assumptions are violated if the trait data contain censored observations. When the trait pertains to age at onset of disease, censoring is inevitable because of loss to follow-up and limited study duration. Censoring also arises when the trait assay cannot detect values below (or above) certain thresholds. The latent trait values tend to have a complex distribution. Applying traditional VC methods to censored trait data would inflate type I error and reduce power. We present valid and powerful methods for the linkage and association analyses of censored trait data. Our methods are based on a novel class of semiparametric VC models, which allows an arbitrary distribution for the latent trait values. We construct an appropriate likelihood for the observed data, which may contain left- or right-censored observations. The maximum likelihood estimators are approximately unbiased, normally distributed, and statistically efficient. We develop stable and efficient numerical algorithms to implement the corresponding inference procedures. Extensive simulation studies demonstrate that the proposed methods outperform the existing ones in practical situations. We provide an application to the age at onset of alcohol dependence data from the Collaborative Study on the Genetics of Alcoholism. A computer program is freely available. Genet. Epidemiol. 2006. © 2006 Wiley-Liss, Inc. [source]
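The sketch below shows, in a deliberately simplified parametric form, how censoring enters a likelihood: uncensored trait values contribute density terms and right-censored ones contribute survival terms. It is a toy normal model for independent subjects with made-up data, not the semiparametric variance-component likelihood of the paper.

```python
# A toy sketch of how censoring enters a likelihood (parametric normal case for
# independent subjects, NOT the semiparametric variance-component model of the
# paper): uncensored trait values contribute a density term, right-censored
# ones (e.g. still unaffected at end of follow-up) contribute a survival term.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

y    = np.array([32.0, 41.0, 28.0, 50.0, 45.0])   # ages at onset (illustrative)
cens = np.array([0,    0,    0,    1,    1])      # 1 = right-censored at that age

def neg_log_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    ll_obs  = norm.logpdf(y[cens == 0], mu, sigma).sum()   # density for observed onsets
    ll_cens = norm.logsf(y[cens == 1], mu, sigma).sum()    # P(T > c) for censored subjects
    return -(ll_obs + ll_cens)

fit = minimize(neg_log_lik, x0=[40.0, np.log(10.0)])
print("MLE of mean age at onset:", fit.x[0])
```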


A novel blind super-resolution technique based on the improved Poisson maximum a posteriori algorithm

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 6 2002
Min-Cheng Pan
Abstract Image restoration has received considerable attention. In many practical situations, unfortunately, the blur is often unknown, and little information is available about the true image. Therefore, the true image must be identified directly from the corrupted image by using partial or no information about the blurring process and the true image. In addition, noise is amplified during restoration, inducing severe ringing artifacts. This article proposes a novel technique for blind super-resolution, whose mechanism alternates between deconvolution of the image and of the point spread function based on the improved Poisson maximum a posteriori (MAP) super-resolution algorithm. This improved Poisson MAP super-resolution algorithm incorporates the functional form of a Wiener filter into the Poisson MAP algorithm operating on the edge image to further reduce noise effects and speed up restoration. Experimental results on 1-D signals and 2-D images corrupted by Gaussian point spread functions and additive noise show significant quality improvements over the technique based on the original Poisson MAP algorithm. © 2003 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 12, 239–246, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10032 [source]
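For orientation, the following sketch implements the plain (non-blind) Richardson-Lucy / Poisson MAP deconvolution update for a 1-D signal with a known Gaussian point spread function; the blind technique described above alternates an analogous multiplicative update for the PSF and adds the Wiener-type modification, neither of which is reproduced here. The signal, PSF width, and iteration count are illustrative assumptions.

```python
# Plain Richardson-Lucy / Poisson MAP deconvolution of a 1-D signal with a
# KNOWN Gaussian PSF.  The blind variant alternates a similar update for the PSF.
import numpy as np

def gaussian_psf(size=21, sigma=2.0):
    x = np.arange(size) - size // 2
    h = np.exp(-x**2 / (2 * sigma**2))
    return h / h.sum()

rng = np.random.default_rng(0)
true = np.zeros(200); true[60] = 1.0; true[120:140] = 0.5        # toy "true" signal
h = gaussian_psf()
observed = np.convolve(true, h, mode='same')
observed = rng.poisson(observed * 200) / 200.0                   # Poisson-noisy data

f = np.full_like(observed, observed.mean())                      # flat initial estimate
for _ in range(100):
    blurred = np.convolve(f, h, mode='same') + 1e-12
    ratio = observed / blurred
    f = f * np.convolve(ratio, h[::-1], mode='same')             # multiplicative RL step

print("index of reconstruction maximum:", int(np.argmax(f)))
```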


Median-based aggregation operators for prototype construction in ordinal scales

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 6 2003
Josep Domingo-Ferrer
This article studies aggregation operators in ordinal scales for their application to clustering (more specifically, to microaggregation for statistical disclosure risk), particularly in the process of prototype construction. The study analyzes the main aggregation operators for ordinal scales [plurality rule, medians, Sugeno integrals (SI), and ordinal weighted means (OWM), among others] and shows the difficulties of applying them in this particular setting. We then propose two approaches to overcome these drawbacks and study their properties. Special emphasis is given to monotonicity, because the proposed operators are shown not to satisfy this property. Exhaustive empirical work shows that in most practical situations this cannot be considered a problem. © 2003 Wiley Periodicals, Inc. [source]
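A minimal sketch of the median-based idea: the prototype of a cluster measured on an ordinal scale is taken as the (lower) median of its members' categories. The scale and cluster below are invented for illustration, and the article's operators and corrections go well beyond this simple version.

```python
# A minimal sketch of median-based prototype construction on an ordinal scale:
# the prototype of a cluster is the (lower) median of its members' categories.
CATEGORIES = ["very low", "low", "medium", "high", "very high"]   # ordered scale (illustrative)

def median_prototype(values):
    # map labels to ranks, sort, take the lower median, map back to a label
    ranks = sorted(CATEGORIES.index(v) for v in values)
    return CATEGORIES[ranks[(len(ranks) - 1) // 2]]

cluster = ["low", "medium", "medium", "high", "very high"]
print(median_prototype(cluster))     # -> "medium"
```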


Physical–statistical methods for determining state transition probabilities in mobile-satellite channel models

INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 3 2001
S. R. Saunders
Abstract Signal propagation in land mobile satellite (LMS) communication systems has for the last decade become an essential consideration, especially when high-rate data services are involved. As far as urban or suburban built-up areas are concerned, the extent of the influence is mainly restricted to roadside obstacles, since the satellite is positioned at relatively high elevation angles in most practical situations. Probably the most common model currently used for representing the LMS channel is the Lutz model, which uses two states to represent line-of-sight and non-line-of-sight conditions. Transitions between these states are described by transition probabilities which are a function of the environment and the satellite elevation angles. Similarly, an extension to the model allows a four-state description to be used for the states associated with a pair of satellites used in a dual-diversity configuration. Calculation of the transition probabilities then requires knowledge of the correlation between the two channels, which in turn depends on the spatial characteristics of the local environment around the mobile. In both cases, the transition probabilities have in the past been derived essentially from measurements. In the new approaches described in this paper, physical–statistical principles are applied to construct analytical formulas for the probabilities of shadowing and the correlation between states. These expressions apply particularly to systems operated in built-up environments, and have been checked against numerical experiments and against direct measurements. In both cases excellent agreement is obtained. Copyright © 2001 John Wiley & Sons, Ltd. [source]
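The sketch below simulates the two-state (line-of-sight/shadowed) Markov channel that underlies the Lutz model, using placeholder transition probabilities; in the paper these probabilities come from physical-statistical expressions rather than being assumed or measured directly.

```python
# Two-state (good/bad, i.e. line-of-sight / shadowed) Markov channel sketch.
# The per-step transition probabilities are illustrative placeholders.
import numpy as np

p_gb = 0.05   # P(good -> bad) per step  (assumed value)
p_bg = 0.20   # P(bad  -> good) per step (assumed value)

rng = np.random.default_rng(1)
state = 0                      # 0 = good (LOS), 1 = bad (shadowed)
states = []
for _ in range(100_000):
    if state == 0:
        state = 1 if rng.random() < p_gb else 0
    else:
        state = 0 if rng.random() < p_bg else 1
    states.append(state)

# long-run shadowing probability should approach p_gb / (p_gb + p_bg) = 0.2
print("simulated time share in shadowed state:", np.mean(states))
```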


An Approach to Evaluating the Missing Data Assumptions of the Chain and Post-stratification Equating Methods for the NEAT Design

JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 1 2008
Paul W. Holland
Two important types of observed score equating (OSE) methods for the non-equivalent groups with Anchor Test (NEAT) design are chain equating (CE) and post-stratification equating (PSE). CE and PSE reflect two distinctly different ways of using the information provided by the anchor test for computing OSE functions. Both types of methods include linear and nonlinear equating functions. In practical situations, it is known that the PSE and CE methods will give different results when the two groups of examinees differ on the anchor test. However, given that both types of methods are justified as OSE methods by making different assumptions about the missing data in the NEAT design, it is difficult to conclude which, if either, of the two is more correct in a particular situation. This study compares the predictions of the PSE and CE assumptions for the missing data using a special data set for which the usually missing data are available. Our results indicate that in an equating setting where the linking function is decidedly non-linear and CE and PSE ought to be different, both sets of predictions are quite similar but those for CE are slightly more accurate. [source]
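As a reminder of what a CE-type link looks like in the linear case, the sketch below composes two linear links, X to anchor A estimated in population P and A to Y estimated in population Q, using invented moments. The study itself also treats nonlinear (equipercentile) versions and the contrasting PSE assumptions.

```python
# A minimal sketch of chained linear (CE-type) equating for the NEAT design:
# link X to the anchor A in population P, link A to Y in population Q, compose.
# All means and standard deviations below are illustrative.
def linear_link(mu_from, sd_from, mu_to, sd_to):
    """Return the linear function mapping the 'from' scale onto the 'to' scale."""
    return lambda s: mu_to + (sd_to / sd_from) * (s - mu_from)

x_to_a = linear_link(mu_from=30.0, sd_from=6.0, mu_to=15.0, sd_to=3.0)   # X -> A on P
a_to_y = linear_link(mu_from=14.0, sd_from=3.5, mu_to=32.0, sd_to=7.0)   # A -> Y on Q

chain_equate = lambda x: a_to_y(x_to_a(x))
print("X score 36 maps to Y score", round(chain_equate(36.0), 2))        # -> 40.0
```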


On the continuum approximation of large reaction mixtures

AICHE JOURNAL, Issue 7 2010
Teh C. Ho
Abstract In analyzing a reaction mixture of very many components, treating the mixture as a continuum can produce results of considerable generality. In many practical situations (e.g., hydrodesulfurization), it is highly desirable to predict the overall behavior of the mixture at large times (high conversions) with minimum information on the mixture properties. For irreversible first-order reactions in a plug-flow reactor, it was previously shown that the continuum approximation cannot be valid at arbitrarily large times. This work investigates the validity of the approximation for mixtures with complex kinetics. It is found that the approximation can be conditionally or universally valid, depending on kinetics, reactor type, pore diffusion, and mixture properties. The validity conditions for a variety of situations, nontrivial as they may seem, take a power-law form. Backmixing and pore diffusion widen the range of validity. The underlying physics and some dichotomies/subtleties are discussed. The results are applied to catalytic hydroprocessing in petroleum refining. © 2009 American Institute of Chemical Engineers AIChE J, 2010 [source]
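A small sketch of the continuum idea for irreversible first-order kinetics: the rate constants of the many components are treated as a continuous distribution (a gamma density here, a common illustrative choice), and the overall remaining fraction is an integral over that density, which in the gamma case has a closed form to check against.

```python
# Continuum approximation sketch for a mixture of irreversible first-order
# reactions: C(t)/C(0) = integral over k of f(k) * exp(-k t), with an assumed
# gamma density f(k) for the rate constants.
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma

alpha, beta = 2.0, 1.0        # shape and scale of the k-distribution (assumed)

def remaining_fraction(t):
    integrand = lambda k: gamma.pdf(k, alpha, scale=beta) * np.exp(-k * t)
    return quad(integrand, 0.0, np.inf)[0]

for t in [0.5, 2.0, 10.0]:
    # for a gamma density the integral has the closed form (1 + beta*t)**(-alpha)
    print(t, remaining_fraction(t), (1.0 + beta * t) ** (-alpha))
```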


A family of measures to evaluate scale reliability in a longitudinal setting

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2009
Annouschka Laenen
Summary. The concept of reliability denotes one of the most important psychometric properties of a measurement scale. Reliability refers to the capacity of the scale to discriminate between subjects in a given population. In classical test theory, it is often estimated by using the intraclass correlation coefficient based on two replicate measurements. However, the modelling framework that is used in this theory is often too narrow when applied in practical situations. Generalizability theory has extended reliability theory to a much broader framework but is confronted with some limitations when applied in a longitudinal setting. We explore how the definition of reliability can be generalized to a setting where subjects are measured repeatedly over time. On the basis of four defining properties for the concept of reliability, we propose a family of reliability measures which circumscribes the area in which reliability measures should be sought. It is shown how different members assess different aspects of the problem and that the reliability of the instrument can depend on the way that it is used. The methodology is motivated by and illustrated on data from a clinical study on schizophrenia. On the basis of this study, we estimate and compare the reliabilities of two different rating scales to evaluate the severity of the disorder. [source]
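For context, the sketch below computes the classical-test-theory quantity mentioned in the summary, the intraclass correlation from two replicate measurements per subject, via a one-way random-effects variance decomposition on simulated data. The longitudinal reliability measures proposed in the paper generalize well beyond this.

```python
# Intraclass correlation from two replicates per subject (one-way random effects).
# Simulated data with between-subject sd 10 and within-subject sd 5, so the
# true ICC is 100 / (100 + 25) = 0.8.
import numpy as np

rng = np.random.default_rng(2)
n = 200
true_score = rng.normal(50.0, 10.0, n)                              # between-subject variation
replicates = true_score[:, None] + rng.normal(0.0, 5.0, (n, 2))     # two noisy ratings each

subject_means = replicates.mean(axis=1)
ms_between = 2 * subject_means.var(ddof=1)                          # mean square between subjects
ms_within = ((replicates - subject_means[:, None]) ** 2).sum() / n  # mean square within subjects

sigma2_between = (ms_between - ms_within) / 2
icc = sigma2_between / (sigma2_between + ms_within)
print("estimated ICC:", round(icc, 3))
```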


Measurement error modelling with an approximate instrumental variable

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 5 2007
Paul Gustafson
Summary. Consider using regression modelling to relate an exposure (predictor) variable to a disease outcome (response) variable. If the exposure variable is measured with error, but this error is ignored in the analysis, then misleading inferences can result. This problem is well known and has spawned a large literature on methods which adjust for measurement error in predictor variables. One theme is that the requisite assumptions about the nature of the measurement error can be stronger than what is actually known in many practical situations. In particular, the assumptions that are required to yield a model which is formally identified from the observable data can be quite strong. The paper deals with one particular strategy for measurement error modelling, namely that of seeking an instrumental variable, i.e. a covariate S which is associated with exposure and conditionally independent of the outcome given exposure. If these two conditions hold exactly, then we call S an exact instrumental variable, and an identified model results. However, the second condition is not checkable empirically, since the actual exposure is unobserved. In practice then, investigators typically seek a covariate which is plausibly thought to satisfy it. We study inferences which acknowledge the approximate nature of this assumption. In particular, we consider Bayesian inference with a prior distribution that posits that S is probably close to conditionally independent of outcome given exposure. We refer to this as an approximate instrumental variable assumption. Although the approximate instrumental variable assumption is more realistic for most applications, concern arises that a non-identified model may result. Thus the paper contrasts inferences arising from the approximate instrumental variable assumption with their exact instrumental variable counterparts, with particular emphasis on the benefit of basing inferences on a more realistic model versus the cost of basing inferences on a non-identified model. [source]
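The simulation sketch below illustrates the exact instrumental-variable case that the paper relaxes: regressing the outcome on an error-prone exposure surrogate attenuates the slope, while the ratio cov(S, Y)/cov(S, W) recovers it when S is a valid instrument. The Bayesian analysis under an approximate instrumental variable assumption is not reproduced, and all parameter values are invented.

```python
# Exact instrumental-variable correction of measurement-error attenuation
# (a simulation sketch, not the paper's approximate-IV Bayesian analysis).
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
s = rng.normal(size=n)                       # instrument, associated with exposure
x = 0.8 * s + rng.normal(size=n)             # true exposure (unobserved in practice)
w = x + rng.normal(scale=1.0, size=n)        # error-prone measurement of exposure
y = 2.0 * x + rng.normal(size=n)             # outcome depends on true exposure only

naive_slope = np.cov(w, y)[0, 1] / np.var(w, ddof=1)   # attenuated towards zero
iv_slope = np.cov(s, y)[0, 1] / np.cov(s, w)[0, 1]     # recovers the true slope

print("true slope 2.0 | naive:", round(naive_slope, 2), "| IV:", round(iv_slope, 2))
```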


Improving Comprehension Through Discourse Processing

NEW DIRECTIONS FOR TEACHING & LEARNING, Issue 89 2002
Arthur C. Graesser
Deep coherent explanations organize shallow knowledge and fortify learners for generating inferences, solving problems, reasoning, and applying their knowledge to practical situations. [source]


Optimal Design of VSI X̄ Control Charts for Monitoring Correlated Samples

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 8 2005
Yan-Kwang Chen
Abstract This paper develops an economic design of variable sampling interval (VSI) X̄ control charts in which the next sample is taken sooner than usual if there is an indication that the process is off-target. When designing VSI X̄ control charts, the underlying assumption is that the measurements within a sample are independent. However, there are many practical situations that violate this assumption. Accordingly, a cost model combining the multivariate normal distribution model given by Yang and Hancock with Bai and Lee's cost model is proposed to develop the design of VSI charts for correlated data. An evolutionary search method to find the optimal design parameters for this model is presented. Also, we compare VSI and traditional X̄ charts with respect to expected cost per unit time, utilizing hypothetical cost and process parameters as well as various correlation coefficients. The results indicate that VSI control charts outperform the traditional control charts for larger mean shifts when correlation is present. In addition, there is a difference between the design parameters of VSI charts when correlation is present or absent. Copyright © 2005 John Wiley & Sons, Ltd. [source]
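A sketch of the variable-sampling-interval rule itself (not the economic design): the next sample is taken after a short interval when the current sample mean falls in a warning region near the control limits, and after a long interval otherwise. Limits, intervals, and the warning-band width below are assumed values.

```python
# Variable-sampling-interval (VSI) rule for an X-bar chart: short interval when
# the sample mean is in the warning region, long interval otherwise.
# All chart parameters are illustrative assumptions, not an economic optimum.
import numpy as np

MU0, SIGMA, N = 10.0, 1.0, 5          # target mean, process sd, sample size
sigma_xbar = SIGMA / np.sqrt(N)
UCL, LCL = MU0 + 3 * sigma_xbar, MU0 - 3 * sigma_xbar
WARN = 1.0 * sigma_xbar                # warning band half-width (assumed)
H_LONG, H_SHORT = 2.0, 0.25            # sampling intervals in hours (assumed)

def next_interval(sample):
    xbar = np.mean(sample)
    if xbar > UCL or xbar < LCL:
        return None                    # out-of-control signal: stop and investigate
    if abs(xbar - MU0) > WARN:
        return H_SHORT                 # near a limit: sample again soon
    return H_LONG                      # close to target: relax the sampling rate

rng = np.random.default_rng(4)
print(next_interval(rng.normal(MU0, SIGMA, N)))        # in-control sample
print(next_interval(rng.normal(MU0 + 3.0, SIGMA, N)))  # strongly shifted process
```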


Sensitive determination of bromine and iodine in aqueous and biological samples by electrothermal vaporization inductively coupled plasma mass spectrometry using tetramethylammonium hydroxide as a chemical modifier

RAPID COMMUNICATIONS IN MASS SPECTROMETRY, Issue 12 2008
Hiroko Kataoka
A procedure for the simultaneous determination of bromine and iodine by inductively coupled plasma (ICP) mass spectrometry was investigated. In order to prevent the decrease in the ionization efficiencies of bromine and iodine atoms caused by the introduction of water mist, electrothermal vaporization was used for sample introduction into the ICP mass spectrometer. To prevent loss of analytes during the drying process, a small amount of tetramethylammonium hydroxide solution was placed as a chemical modifier into the tungsten boat furnace. After evaporation of the solvent, the analytes instantly vaporized and were then introduced into the ICP ion source to detect the 79Br+, 81Br+, and 127I+ ions. By using this system, detection limits of 0.77 pg and 0.086 pg were achieved for bromine and iodine, respectively. These values correspond to 8.1 pg mL⁻¹ and 0.91 pg mL⁻¹ of the aqueous bromide and iodide ion concentrations, respectively, for a sampling volume of 95 µL. The relative standard deviations for eight replicate measurements were 2.2% and 2.8% for 20 pg of bromine and 2 pg of iodine, respectively. Approximately 25 batches were vaporizable per hour. The method was successfully applied to the analysis of various certified reference materials and to practical situations such as biological and aqueous samples. There is further potential for the simultaneous determination of fluorine and chlorine. Copyright © 2008 John Wiley & Sons, Ltd. [source]
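A quick arithmetic check of the quoted figures: dividing each absolute detection limit by the 95 µL sampling volume reproduces the concentration detection limits stated in the abstract.

```python
# Worked-arithmetic sketch only: convert the absolute detection limits (pg) to
# concentration detection limits (pg/mL) using the 95 µL sampling volume.
sample_volume_mL = 95e-3                      # 95 µL expressed in mL

for element, dl_pg in [("Br", 0.77), ("I", 0.086)]:
    dl_pg_per_mL = dl_pg / sample_volume_mL
    print(f"{element}: {dl_pg} pg / {sample_volume_mL} mL = {dl_pg_per_mL:.2g} pg/mL")
# -> roughly 8.1 pg/mL for bromine and 0.91 pg/mL for iodine, matching the text
```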


Bayesian Optimal Designs for Phase I Clinical Trials

BIOMETRICS, Issue 3 2003
Linda M. Haines
Summary. A broad approach to the design of Phase I clinical trials for the efficient estimation of the maximum tolerated dose is presented. The method is rooted in formal optimal design theory and involves the construction of constrained Bayesian c- and D-optimal designs. The imposed constraint incorporates the optimal design points and their weights and ensures that the probability that an administered dose exceeds the maximum acceptable dose is low. Results relating to these constrained designs for log doses on the real line are described and the associated equivalence theorem is given. The ideas are extended to more practical situations, specifically to those involving discrete doses. In particular, a Bayesian sequential optimal design scheme comprising a pilot study on a small number of patients followed by the allocation of patients to doses one at a time is developed and its properties explored by simulation. [source]
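The sketch below illustrates the kind of overdose constraint described in the summary: given posterior draws of the maximum tolerated (log-)dose, a discrete dose is admissible only if the posterior probability that it exceeds the MTD stays below a small threshold. The prior, dose panel, and threshold are illustrative assumptions, not those of the paper.

```python
# Overdose-control constraint sketch: keep only doses whose posterior
# probability of exceeding the MTD is below a small threshold.
# Posterior, dose panel, and threshold are all assumed for illustration.
import numpy as np

rng = np.random.default_rng(5)
mtd_draws = rng.normal(loc=np.log(40.0), scale=0.30, size=10_000)   # posterior of log-MTD (assumed)
candidate_doses = np.array([10.0, 20.0, 30.0, 40.0, 50.0])          # discrete dose panel (mg, assumed)
epsilon = 0.05                                                       # tolerated overdose probability

for d in candidate_doses:
    p_exceed = np.mean(np.log(d) > mtd_draws)       # posterior P(dose > MTD)
    verdict = "admissible" if p_exceed < epsilon else "too risky"
    print(f"dose {d:5.1f} mg: P(exceeds MTD) = {p_exceed:.3f} -> {verdict}")
```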


Impact of freezing on pH of buffered solutions and consequences for monoclonal antibody aggregation

BIOTECHNOLOGY PROGRESS, Issue 3 2010
Parag Kolhe
Abstract Freezing of biologic drug substance at large scale is an important unit operation that enables manufacturing flexibility and increased use-period for the material. Stability of the biologic in frozen solutions is associated with a number of issues including potentially destabilizing pH changes. The pH changes arise from temperature-associated changes in the pKas, solubility limitations, eutectic crystallization, and cryoconcentration. The pH changes for most of the common protein formulation buffers in the frozen state have not been systematically measured. Sodium phosphate buffer, a well-studied system, shows the greatest change in pH when going from +25 to −30°C. Among the other buffers, histidine hydrochloride, sodium acetate, histidine acetate, citrate, and succinate, less than a 1 pH unit change (increase) was observed over the temperature range from +25 to −30°C, whereas Tris-hydrochloride had an ∼1.2 pH unit increase. In general, a steady increase in pH was observed for all these buffers once cooled below 0°C. A formulated IgG2 monoclonal antibody in histidine buffer with added trehalose showed the same pH behavior as the buffer itself. This antibody in various formulations was subjected to freeze/thaw cycling representing a wide range of process (phase transition) times, reflective of practical situations. Measurement of soluble aggregates after repeated freeze–thaw cycles shows that the change in pH was not a factor for aggregate formation in this case, which instead is governed by the presence or absence of noncrystallizing cryoprotective excipients. In the absence of a cryoprotectant, longer phase transition times lead to higher aggregation. © 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2010 [source]