Actual Data (actual + data)


Selected Abstracts


Simulating growth of the h-index

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 2 2009
Raf Guns
Temporal growth of the h-index in a diachronous cumulative time series is predicted to be linear by Hirsch (2005), whereas other models predict a concave increase. Actual data generally yield linear or S-shaped growth. We study the h-index's growth in computer simulations of the publication-citation process. In most simulations the h-index grows linearly in time. Only occasionally does an S-shape occur, while in our simulations a concave increase is very rare. The latter is often signalled by the occurrence of plateaus, periods of h-index stagnation. Several parameters and their influence on the h-index's growth are determined and discussed. [source]
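
A minimal sketch of a publication-citation simulation of the kind described above, tracking the h-index year by year, is given below. The publication rate, Poisson citation rate and horizon are illustrative assumptions, not the parameter settings explored in the paper.

```python
import numpy as np

def simulate_h_index(years=20, papers_per_year=5, cites_per_paper_per_year=2.0, seed=1):
    """Toy diachronous publication-citation process: each year new papers are added
    and every existing paper receives a Poisson number of new citations."""
    rng = np.random.default_rng(seed)
    citations = np.array([], dtype=int)
    h_by_year = []
    for _ in range(years):
        citations = np.concatenate([citations, np.zeros(papers_per_year, dtype=int)])
        citations = citations + rng.poisson(cites_per_paper_per_year, size=citations.size)
        ranked = np.sort(citations)[::-1]
        h = int(np.sum(ranked >= np.arange(1, ranked.size + 1)))
        h_by_year.append(h)
    return h_by_year

print(simulate_h_index())  # with settings like these the h-index climbs roughly linearly
```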


Gender difference, sex hormones, and immediate type hypersensitivity reactions

ALLERGY, Issue 11 2008
W. Chen
Gender differences in the development and prevalence of human diseases have long been recognized, and interest in understanding the role of sex hormones in the homeostasis of immunity is growing. Asthma predominates in boys before puberty; this gender preference reverses after puberty and in adulthood, when adult women tend to have more severe disease, often recalcitrant to treatment. Atopic eczema in preschool children shows an insignificant gender difference or a male preponderance in different studies, with more adult females suffering from atopic eczema. The limited data on the prevalence of immediate hypersensitivity to hymenoptera venom show conflicting results. Discrepancy exists regarding the gender difference in food allergy, with females reporting significantly more allergic reactions in questionnaire studies. In general, adverse reactions to nonionic iodinated radiocontrast media are more commonly observed in females. The course of allergic diseases varies unpredictably during pregnancy, whereas hormone replacement therapy in postmenopausal women usually has a favorable influence on the course of asthma. Experiments in rodents confirm an effect of estrogens on mast cell activation and allergic sensitization, while progesterone is shown to suppress histamine release but potentiate IgE induction. Dehydroepiandrosterone may antagonize the production of Th2 cytokines, but the effect of testosterone and the other androgens remains less well defined. Actual data from human studies are lacking. [source]


The Impact of Uncertainty Shocks

ECONOMETRICA, Issue 3 2009
Nicholas Bloom
Uncertainty appears to jump up after major shocks like the Cuban Missile crisis, the assassination of JFK, the OPEC I oil-price shock, and the 9/11 terrorist attacks. This paper offers a structural framework to analyze the impact of these uncertainty shocks. I build a model with a time-varying second moment, which is numerically solved and estimated using firm-level data. The parameterized model is then used to simulate a macro uncertainty shock, which produces a rapid drop and rebound in aggregate output and employment. This occurs because higher uncertainty causes firms to temporarily pause their investment and hiring. Productivity growth also falls because this pause in activity freezes reallocation across units. In the medium term the increased volatility from the shock induces an overshoot in output, employment, and productivity. Thus, uncertainty shocks generate short sharp recessions and recoveries. This simulated impact of an uncertainty shock is compared to vector autoregression estimations on actual data, showing a good match in both magnitude and timing. The paper also jointly estimates labor and capital adjustment costs (both convex and nonconvex). Ignoring capital adjustment costs is shown to lead to substantial bias, while ignoring labor adjustment costs does not. [source]
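
On the empirical side, a comparison like the one described can be organized around a small vector autoregression. The sketch below fits a VAR and reads off orthogonalized impulse responses with statsmodels; the three series are synthetic stand-ins (a volatility proxy, output and employment), not the paper's data, identification or estimation strategy.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic stand-in data: an AR(1) "uncertainty" proxy that depresses output and employment.
rng = np.random.default_rng(0)
n = 300
uncertainty, output, employment = np.zeros(n), np.zeros(n), np.zeros(n)
for t in range(1, n):
    uncertainty[t] = 0.8 * uncertainty[t - 1] + rng.standard_normal()
    output[t] = 0.6 * output[t - 1] - 0.2 * uncertainty[t - 1] + 0.5 * rng.standard_normal()
    employment[t] = 0.6 * employment[t - 1] - 0.1 * uncertainty[t - 1] + 0.5 * rng.standard_normal()
data = pd.DataFrame({"uncertainty": uncertainty, "output": output, "employment": employment})

results = VAR(data).fit(maxlags=4, ic="aic")       # lag order chosen by AIC
irf = results.irf(24)                              # impulse responses, 24 periods ahead
resp = irf.orth_irfs[:, data.columns.get_loc("output"), data.columns.get_loc("uncertainty")]
print(resp[:8])    # response of output to a one-standard-deviation uncertainty shock
```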


WHY DOES A METHOD THAT FAILS CONTINUE TO BE USED? THE ANSWER

EVOLUTION, Issue 4 2009
It has been claimed that hundreds of researchers use nested clade phylogeographic analysis (NCPA) based on what the method promises rather than requiring objective validation of the method. The supposed failure of NCPA is based upon the argument that validating it by using positive controls ignored type I error, and that computer simulations have shown a high type I error. The first argument is factually incorrect: the previously published validation analysis fully accounted for both type I and type II errors. The simulations that indicate a 75% type I error rate have serious flaws and only evaluate outdated versions of NCPA. These outdated type I error rates fall precipitously when the 2003 version of single-locus NCPA is used or when the 2002 multilocus version of NCPA is used. It is shown that the tree-wise type I errors in single-locus NCPA can be corrected to the desired nominal level by a simple statistical procedure, and that multilocus NCPA reconstructs a simulated scenario used to discredit NCPA with 100% accuracy. Hence, NCPA is not a failed method at all, but rather has been validated both by actual data and by simulated data in a manner that satisfies the published criteria given by its critics. The critics have come to different conclusions because they have focused on the pre-2002 versions of NCPA and have failed to take into account the extensive developments in NCPA since 2002. Hence, researchers can choose to use NCPA based upon objective critical validation that shows that NCPA delivers what it promises. [source]


The effect of pitting corrosion on fatigue life

FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 7 2000
Dolley
Fatigue testing of pre-pitted 2024-T3 aluminium alloy specimens is performed in laboratory air at 22 °C and 40% RH to characterize the effect of pitting corrosion on fatigue life. Specimens, pre-corroded in a 0.5 M NaCl solution from 48 to 384 h, have fatigue lives that are reduced by more than one order of magnitude after 384 h pre-corrosion as compared to those of uncorroded specimens. The reduction in fatigue life is interpreted in terms of the influence of the time of exposure to the corrosive environment or pit size. The crack-nucleating pit sizes, ranging from 20 to 70 μm, are determined from post-fracture examinations by scanning electron microscopy. Fatigue lives are estimated using a fracture mechanics approach and are shown to be in good agreement with the actual data. A probabilistic analysis shows that the distribution of fatigue life is strongly correlated to the distribution in nucleating pit size. [source]
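
The life-estimation step can be sketched in fracture-mechanics terms: treat the nucleating pit as an initial crack and integrate a Paris-type growth law up to a critical size. The constants C and m, the geometry factor Y, the stress range and the critical crack size below are illustrative placeholders, not the calibrated values of the study.

```python
import numpy as np

def fatigue_life_from_pit(a0_m, delta_sigma_MPa, C=1e-11, m=3.0, Y=0.73, a_crit_m=5e-3):
    """Integrate da/dN = C * (dK)**m from an initial pit depth a0 to a critical crack size,
    with dK = Y * delta_sigma * sqrt(pi * a) in MPa*sqrt(m); C, m, Y are placeholders."""
    a, N, da = a0_m, 0.0, 1e-7        # crack size (m), cycle count, integration step (m)
    while a < a_crit_m:
        dK = Y * delta_sigma_MPa * np.sqrt(np.pi * a)
        dadN = C * dK ** m            # crack growth per cycle, m/cycle
        N += da / dadN
        a += da
    return N

# Deeper nucleating pits shorten the predicted life, consistent with the trend in the abstract.
for pit_um in (20, 40, 70):
    print(f"{pit_um} um pit -> {fatigue_life_from_pit(pit_um * 1e-6, 200.0):.2e} cycles")
```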


A discourse-analytic approach to the use of English in Cypriot Greek conversations

INTERNATIONAL JOURNAL OF APPLIED LINGUISTICS, Issue 2 2001
Dionysis Goutsos
The use of English in Cypriot Greek has been a highly contested issue, involving much speculation and prescription but, as yet, little analysis of actual data. This study is a preliminary exploration of the issue, focusing on extensive data from informal conversations between members of a Limassol family. The analysis suggests that instances of language alternation can be accounted for in terms of discourse analytic categories such as the distinction between local and global phenomena and the tri-partite scheme of ideational, interpersonal and sequential functions. The presence of English in Cypriot Greek conversations covers a wide range, from local borrowing to stereotypical sequential or more complex interpersonal and sequential phenomena, and cannot be effectively separated from the role that language alternation plays in specific textual and contextual settings. The discussion suggests that a discourse analytic approach is an indispensable means of studying language alternation phenomena. [source]


Performance improvements for olive oil refining plants

INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 6 2010
Elif Bozoglan
Abstract The main objective of this study, which is conducted for the first time to the best of the authors' knowledge, is to identify improvements in olive oil refinery plants' performance. In the analyses, the actual operational data are used for performance assessment purposes. The refinery plant investigated is located in Izmir, Turkey and has an oil capacity of 6250 kg/h. It basically incorporates steam generators, several tanks, heat exchangers, a distillation column, flash tanks and several pumps. The values for exergy efficiency and exergy destruction of operating components are determined based on a reference (dead state) temperature of 25°C. An Engineering Equation Solver (EES) software program is utilized to do the analyses of the plant. The exergy transports between the components and the consumptions in each of the components of the whole plant are determined for the average parameters obtained from the actual data. The exergy loss and flow diagram (the so-called Grassmann diagram) is also presented for the entire plant studied to give quantitative information regarding the proportion of the exergy input that is dissipated in the various plant components. Among the observed components in the plant, the most efficient equipment is found to be the shell- and tube-type heat exchanger with an exergy efficiency value of 85%. The overall exergetic efficiency performance of the plant (the so-called functional exergy efficiency) is obtained to be about 12%, while the exergy efficiency value on the exergetic fuel–product basis is calculated to be about 65%. Copyright © 2009 John Wiley & Sons, Ltd. [source]
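
The stream-level bookkeeping behind such component efficiencies can be shown in a few lines: physical flow exergy of each stream relative to the 25°C dead state, then a fuel-product efficiency and a destruction rate for a single heat exchanger. The flow rates, specific heats and temperatures are invented for illustration and assume an incompressible liquid with constant cp, not the plant's measured values.

```python
import math

T0 = 298.15  # dead-state temperature, K (25 degC, as in the study)

def flow_exergy(m_dot, cp, T):
    """Physical flow-exergy rate of an incompressible stream (kW): m*cp*((T-T0) - T0*ln(T/T0))."""
    return m_dot * cp * ((T - T0) - T0 * math.log(T / T0))

# Hot stream cooled, cold stream heated (mass flow kg/s, cp kJ/(kg K), temperatures K):
ex_hot_in, ex_hot_out = flow_exergy(1.7, 2.1, 453.15), flow_exergy(1.7, 2.1, 393.15)
ex_cold_in, ex_cold_out = flow_exergy(1.5, 2.0, 313.15), flow_exergy(1.5, 2.0, 378.15)

ex_fuel = ex_hot_in - ex_hot_out        # exergy given up by the hot stream
ex_product = ex_cold_out - ex_cold_in   # exergy picked up by the cold stream
destruction = ex_fuel - ex_product
print(f"exergy efficiency = {ex_product / ex_fuel:.1%}, destruction = {destruction:.1f} kW")
```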


Potential for protein deposition and threonine requirement of modern genotype barrows fed graded levels of protein with threonine as the limiting amino acid

JOURNAL OF ANIMAL PHYSIOLOGY AND NUTRITION, Issue 5-6 2004
H. T. Thong
Summary The study was conducted to estimate the actual genetic potential for daily protein deposition of growing barrows [genotype: Piétrain × (Duroc × Landrace)]. Twenty-four pigs with an average initial body weight (BW) of 43.7 ± 0.7 kg were used in a N-balance study. Semi-purified diets with graded levels of crude protein (45.8, 94.2, 148.0, 198.9, 255.5 and 300.2 g/kg DM) were used, based on a constant mixture of wheat, soya bean protein concentrate and potato protein concentrate as protein sources. The amino acid pattern of the diets was according to the ideal amino acid ratio for growing pigs (Wang and Fuller, 1989), with the exception of threonine (adjusted as the limiting amino acid). N-balance data were used to estimate the daily N-maintenance requirement (NMR = 446 mg N/BW/day) by the regression method and the theoretical maximum of daily N-retention (PDmaxT = 3115 mg N/BW/day) based on the N-utilization model of Gebhardt (1966) using the program Mathematica 3.0. The results indicate that the PDmaxT of the pigs under study is much higher than the results from earlier studies with older genotypes. In summary, pigs of modern genotype have a very high genetic potential for daily protein deposition, and these actual data are important basic information for the estimation of amino acid requirements within the model used. Threonine requirement data depending on threonine efficiency and protein deposition (8.96, 10.45 and 12.22 g/day for 130, 145 and 160 g daily protein deposition; 50 kg body weight) are discussed. [source]
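
A hedged sketch of the estimation step is given below, assuming the Gebhardt-type N-utilization model takes the saturating exponential form NR = PDmaxT·(1 − exp(−b·NI)) − NMR; both this reading of the model and the balance data are illustrative assumptions, not the study's values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative N intake (NI) and N retention (NR), both in mg N per unit of body weight per day.
NI = np.array([200.0, 600.0, 1200.0, 1900.0, 2600.0, 3200.0])
NR = np.array([-250.0, 400.0, 1100.0, 1700.0, 2100.0, 2300.0])

def n_retention(ni, pdmax, b, nmr):
    """Saturating exponential N-utilization curve (assumed form of the Gebhardt-type model)."""
    return pdmax * (1.0 - np.exp(-b * ni)) - nmr

(pdmax, b, nmr), _ = curve_fit(n_retention, NI, NR, p0=[3000.0, 5e-4, 400.0])
print(f"PDmaxT ~ {pdmax:.0f}, NMR ~ {nmr:.0f} (same units as the data above)")
```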


Measuring predictability: theory and macroeconomic applications

JOURNAL OF APPLIED ECONOMETRICS, Issue 6 2001
Francis X. Diebold
We propose a measure of predictability based on the ratio of the expected loss of a short-run forecast to the expected loss of a long-run forecast. This predictability measure can be tailored to the forecast horizons of interest, and it allows for general loss functions, univariate or multivariate information sets, and covariance stationary or difference stationary processes. We propose a simple estimator, and we suggest resampling methods for inference. We then provide several macroeconomic applications. First, we illustrate the implementation of predictability measures based on fitted parametric models for several US macroeconomic time series. Second, we analyze the internal propagation mechanism of a standard dynamic macroeconomic model by comparing the predictability of model inputs and model outputs. Third, we use predictability as a metric for assessing the similarity of data simulated from the model and actual data. Finally, we outline several non-parametric extensions of our approach. Copyright © 2001 John Wiley & Sons, Ltd. [source]
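
For a covariance stationary AR(1) under squared-error loss, the proposed ratio-of-expected-losses measure has a simple closed form, evaluated in the sketch below; the AR(1) setting and the parameter values are an illustrative special case, not one of the paper's applications.

```python
def predictability_ar1(phi, sigma2, j, k):
    """P(j, k) = 1 - E[loss of j-step forecast] / E[loss of k-step forecast] for an AR(1)
    under squared-error loss; the k-step horizon plays the role of the 'long run'."""
    def mse(h):                                   # h-step forecast error variance
        return sigma2 * sum(phi ** (2 * i) for i in range(h))
    return 1.0 - mse(j) / mse(k)

# A persistent series is far more predictable one step ahead than a nearly white one.
print(predictability_ar1(phi=0.95, sigma2=1.0, j=1, k=100))   # about 0.90
print(predictability_ar1(phi=0.10, sigma2=1.0, j=1, k=100))   # about 0.01
```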


Overall efficiency evaluation of commercial distillation columns with valve and dualflow trays

AICHE JOURNAL, Issue 9 2010
T. L. Domingues
Abstract The main objective of this work is to establish appropriate ways of estimating the overall efficiency of industrial distillation columns with valve trays with downcomers and with dualflow trays. Knowledge of efficiencies is of fundamental importance in the design and performance evaluation of distillation columns. Searching the literature, a tree of alternatives was identified to compose the tray efficiency model, depending on the mass transfer models, the liquid distribution and vapor flow models on the tray, the liquid entrainment model, the multicomponent mixture equilibrium model, the physical properties models, the height-of-froth model and the efficiency definition. In this work, different methods to predict the overall efficiency of distillation columns with valve and dualflow trays were composed and compared with data from three commercial distillation columns under different operating conditions. The models were implemented in the Aspen Plus 12.1 simulator, in Fortran, together with tray geometrical data, fluid properties and operating data of the distillation columns. For each column, the best thermodynamic package was chosen by checking the temperature profile and the overhead and bottom compositions obtained via simulation against the corresponding actual data from the industrial columns. A modification is proposed in the evaluation of the fraction of holes that are jetting, a parameter of Garcia's hydraulic model of the dispersion above the tray. This modification produced better results than the original model in predicting the fraction of holes that are jetting and the efficiency of dualflow trays, and results similar to the Garcia model in the efficiency evaluation of valve trays. © 2010 American Institute of Chemical Engineers AIChE J, 2010 [source]
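
For contrast with the detailed tray-by-tray models composed in the article, the sketch below evaluates one commonly quoted curve fit of O'Connell's classical chart for overall column efficiency. It is a shortcut correlation only; the coefficient and exponent should be checked against a design text, and it is not one of the authors' composed methods.

```python
def oconnell_overall_efficiency(relative_volatility, liquid_viscosity_cP):
    """Commonly quoted fit of O'Connell's chart: E_O = 0.492 * (alpha * mu_L)**-0.245,
    with the feed-stage liquid viscosity in cP and E_O returned as a fraction."""
    return 0.492 * (relative_volatility * liquid_viscosity_cP) ** -0.245

# Example: relative volatility 2.0 and liquid viscosity 0.3 cP -> roughly 56% overall efficiency.
print(f"{oconnell_overall_efficiency(2.0, 0.3):.1%}")
```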


Users want more sophisticated search assistants: Results of a task-based evaluation

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 13 2005
Udo Kruschwitz
The Web provides a massive knowledge source, as do intranets and other electronic document collections. However, much of that knowledge is encoded implicitly and cannot be applied directly without processing into some more appropriate structures. Searching, browsing, question answering, for example, could all benefit from domain-specific knowledge contained in the documents, and in applications such as simple search we do not actually need very "deep" knowledge structures such as ontologies, but we can get a long way with a model of the domain that consists of term hierarchies. We combine domain knowledge automatically acquired by exploiting the documents' markup structure with knowledge extracted on the fly to assist a user with ad hoc search requests. Such a search system can suggest query modification options derived from the actual data and thus guide a user through the space of documents. This article gives a detailed account of a task-based evaluation that compares a search system that uses the outlined domain knowledge with a standard search system. We found that users do use the query modification suggestions proposed by the system. The main conclusion we can draw from this evaluation, however, is that users prefer a system that can suggest query modifications over a standard search engine, which simply presents a ranked list of documents. Most interestingly, we observe this user preference despite the fact that the baseline system even performs slightly better under certain criteria. [source]


Time Deformation, Continuous Euler Processes and Forecasting

JOURNAL OF TIME SERIES ANALYSIS, Issue 6 2006
Chu-Ping C. Vijverberg
Abstract A continuous Euler model has time-varying coefficients. Through a logarithmic time transformation, a continuous Euler model can be transformed to a continuous autoregressive (AR) model. By using the continuous Kalman filtering through the Laplace method, this article explores the data application of a continuous Euler process. This time deformation of an Euler process deforms specific time-variant (non-stationary) behaviour to time-invariant (stationary) data on the deformed time scale. With these time-invariant data on the transformed time scale, one may use traditional tools to conduct parameter estimation and forecasts. The obtained results can then be transformed back to the original time scale. Simulated data and actual data such as bat echolocation and the US residential investment growth are used to demonstrate the usefulness of time deformation in forecasting. The results indicate that fitting a traditional autoregressive moving-average (ARMA) model on an Euler data set without imposing time transformation leads to forecasts that are out of phase while the forecasts of an Euler model stay mostly in phase. [source]
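
The time-deformation idea can be illustrated in a few lines: interpolate the series onto a grid equally spaced in log-time, fit an ordinary AR model on that deformed scale, forecast, and map the forecast times back through the exponential. This is a rough discrete approximation for illustration, not the article's continuous Kalman-filter treatment, and the test series is synthetic.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# A synthetic "Euler-like" series whose oscillation speeds up in ordinary time t,
# but looks (approximately) time-invariant on the log-time scale.
t = np.arange(1.0, 513.0)
y = np.sin(8.0 * np.log(t)) + 0.1 * np.random.default_rng(0).standard_normal(t.size)

# Deform time: a grid equally spaced in log(t), with y interpolated onto it.
log_grid = np.linspace(np.log(t[0]), np.log(t[-1]), t.size)
y_deformed = np.interp(log_grid, np.log(t), y)

# Fit a conventional AR model on the deformed (stationary-looking) scale and forecast.
ar_fit = AutoReg(y_deformed, lags=10).fit()
forecast_deformed = ar_fit.forecast(steps=50)

# Map the forecast grid back to the original time scale via t = exp(log-time).
step = log_grid[1] - log_grid[0]
future_times = np.exp(log_grid[-1] + step * np.arange(1, 51))
print(future_times[:5], forecast_deformed[:5])
```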


An Efficient Taper for Potentially Overdifferenced Long-memory Time Series

JOURNAL OF TIME SERIES ANALYSIS, Issue 2 2000
Clifford M. Hurvich
We propose a new complex-valued taper and derive the properties of a tapered Gaussian semiparametric estimator of the long-memory parameter d ∈ (−0.5, 1.5). The estimator and its accompanying theory can be applied to generalized unit root testing. In the proposed method, the data are differenced once before the taper is applied. This guarantees that the tapered estimator is invariant with respect to deterministic linear trends in the original series. Any detrimental leakage effects due to the potential noninvertibility of the differenced series are strongly mitigated by the taper. The proposed estimator is shown to be more efficient than existing invariant tapered estimators. Invariance to kth order polynomial trends can be attained by differencing the data k times and then applying a stronger taper, which is given by the kth power of the proposed taper. We show that this new family of tapers enjoys strong efficiency gains over comparable existing tapers. Analysis of both simulated and actual data highlights potential advantages of the tapered estimator of d compared with the nontapered estimator. [source]
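
As background to the estimator being extended, the sketch below implements a plain, untapered Gaussian semiparametric (local Whittle) estimate of d, applied to once-differenced data with 1 added back afterwards; the complex-valued taper and its invariance and efficiency theory are not reproduced here, and the bandwidth choice is an arbitrary assumption.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, m=None):
    """Plain local Whittle estimate of the memory parameter d: minimize
    R(d) = log(mean_j I_j * lam_j**(2d)) - 2d * mean_j log(lam_j)
    over the first m Fourier frequencies. No taper is applied here."""
    n = x.size
    m = m or int(n ** 0.65)                                       # arbitrary bandwidth choice
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2.0 * np.pi * n)   # periodogram ordinates
    def R(d):
        return np.log(np.mean(I * lam ** (2.0 * d))) - 2.0 * d * np.mean(np.log(lam))
    return minimize_scalar(R, bounds=(-0.99, 0.99), method="bounded").x

# Difference once before estimating, then add 1 back, in the spirit of the article's scheme.
rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(2048))      # a unit-root series, so d is about 1
d_hat = local_whittle_d(np.diff(y)) + 1.0
print(round(d_hat, 2))
```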


Coupled Single-Particle and Population Balance Modeling for Particle Size Distribution of Poly(propylene) Produced in Loop Reactors

MACROMOLECULAR REACTION ENGINEERING, Issue 2 2010
Zheng-Hong Luo
Abstract A comprehensive model was developed for the particle size distribution (PSD) of poly(propylene) (PP) produced in loop reactors. The polymeric multilayer model (PMLM) was first applied to calculate the single-particle growth rate under intraparticle transfer limitations. In order to obtain the comprehensive model, the PMLM was solved together with a steady-state particle population equation to predict the PSD in the loop reactors. The simulated PSD data obtained under steady-state polymerization conditions agreed with the actual data collected from an industrial-scale plant. The comprehensive model was also used to predict the effects of some critical factors, including the intraparticle mass and heat transfer limitations, the feed catalyst particle size and catalyst deactivation, on the PSD. [source]


Precision cosmology with voids: definition, methods, dynamics

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 3 2010
Guilhem Lavaux
ABSTRACT We propose a new definition of cosmic voids based on methods of Lagrangian orbit reconstruction as well as an algorithm to find them in actual data called DynamIcal Void Analysis. Our technique is intended to yield results which can be modelled sufficiently accurately to create a new probe of precision cosmology. We then develop an analytical model of the ellipticity of voids found by our method based on the Zel'dovich approximation. We measure in N-body simulation that this model is precise at the ~0.1 per cent level for the mean ellipticity of voids of size greater than ~4 h^-1 Mpc. We estimate that at this scale we are able to predict the ellipticity with an accuracy of about 0.02. Finally, we compare the distribution of void shapes in N-body simulation for two different equations of state w of the dark energy. We conclude that our method is far more accurate than Eulerian methods and is therefore promising as a precision probe of dark energy phenomenology. [source]
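
One standard way to quantify a void's ellipticity, sketched below, uses the eigenvalues of the second-moment (inertia) tensor of its member particles; this is a generic convention given for illustration and may differ in detail from the estimator used with DIVA.

```python
import numpy as np

def void_ellipticity(positions):
    """Ellipticity from the second-moment tensor of particle positions inside a void:
    eps = 1 - sqrt(lambda_min / lambda_max), using the eigenvalues of the centred x x^T sum."""
    x = positions - positions.mean(axis=0)
    tensor = x.T @ x / len(x)
    eigvals = np.sort(np.linalg.eigvalsh(tensor))
    return 1.0 - np.sqrt(eigvals[0] / eigvals[-1])

# A mildly flattened mock void (axis scales in h^-1 Mpc):
rng = np.random.default_rng(2)
points = rng.standard_normal((5000, 3)) * np.array([4.0, 3.6, 3.2])
print(round(void_ellipticity(points), 3))      # about 0.2 for these axis ratios
```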


Small-volume resuscitation: from experimental evidence to clinical routine. Advantages and disadvantages of hypertonic solutions

ACTA ANAESTHESIOLOGICA SCANDINAVICA, Issue 6 2002
Background: The concept of small-volume resuscitation (SVR) using hypertonic solutions encompasses the rapid infusion of a small dose (4 ml per kg body weight, i.e. approximately 250 ml in an adult patient) of 7.2–7.5% NaCl/colloid solution. Originally, SVR was aimed at initial therapy of severe hypovolemia and shock associated with trauma. Methods: The present review focuses on the findings concerning the working mechanisms responsible for the rapid onset of the circulatory effect and the impact of the colloid component on microcirculatory resuscitation, and describes the indications for its application in the preclinical scenario as well as perioperatively and in intensive care medicine. Results: With respect to the actual data base of clinical trials, SVR seems to be superior to conventional volume therapy with regard to faster normalization of microvascular perfusion during shock phases and early resumption of organ function. In particular, patients with head trauma in association with systemic hypotension appear to benefit. Besides, potential indications for this concept include cardiac and cardiovascular surgery (attenuation of reperfusion injury during the declamping phase) and burn injury. The review also describes disadvantages and potential adverse effects of SVR. Conclusion: Small-volume resuscitation by means of hypertonic NaCl/colloid solutions stands as one of the most innovative concepts for primary resuscitation from trauma and shock established in the past decade. Today the spectrum of potential indications involves not only prehospital trauma care, but also perioperative and intensive care therapy. [source]


Assessment of a capability index sensitive to skewness

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 4 2001
P. C. Nahar
Abstract For many quality characteristics, such as circularity, cylindricity, straightness and flatness, positive skewness in the inspection data is the norm, and, in fact, is desirable. Summarizing the process performance using such data in conjunction with capability indices has recently received a considerable amount of attention, with new indices being proposed and compared for usefulness and accuracy. This paper is intended to contribute to this growing discussion, and to add a unique focus. In particular, this investigation concentrates on one form of a neoclassical index, the Cs index, originally proposed to be sensitive to skewness and to decrease in value as the skewness increased in the underlying distribution of the data. In other words, 'skewness is badness'. Looking at this index from an altered perspective, the possibility that this index could serve a useful purpose in summarizing process performance for such non-normal processes by merely changing its interpretation or slightly changing its form is considered. Hence, actual data from circularity measurements are used to identify a relevant group of distributions, and then the accuracy of Cs is investigated along with its modified version for this group of distributions. In particular, this investigation includes several Rayleigh and gamma distributions for various sample sizes and reports on the bias of the proposed estimators. These findings indicate that such a modified index has some useful attributes in reflecting process performance, with respect to the percentage of non-conformance and the accuracy for relatively large samples. Copyright © 2001 John Wiley & Sons, Ltd. [source]
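
The abstract does not reproduce the algebraic form of Cs, so the sketch below uses a generic upper-specification capability estimate purely as a stand-in, to illustrate the kind of Monte Carlo bias study described: sample repeatedly from a Rayleigh distribution at several sample sizes and compare the mean estimate against the value implied by the underlying distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
scale, usl = 1.0, 4.0                       # Rayleigh scale and an upper specification limit

def capability(sample):
    """Generic upper-spec capability estimate; a stand-in, not the Cs index itself."""
    return (usl - sample.mean()) / (3.0 * sample.std(ddof=1))

# "True" value approximated from a very large reference sample of the same distribution.
true_value = capability(rng.rayleigh(scale, size=2_000_000))

for n in (30, 50, 100, 250):
    estimates = np.array([capability(rng.rayleigh(scale, size=n)) for _ in range(5000)])
    print(f"n = {n:4d}: mean estimate = {estimates.mean():.3f}, "
          f"bias = {estimates.mean() - true_value:+.3f}")
```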


The Language of Practical Philosophy

RATIO JURIS, Issue 3 2002
Ota Weinberger
Kant's criticism is based on the idea that all possible knowledge of facts is determined by the immanent structure of our apparatus of cognition, and that therefore we have no access to reality as it is per se ("Ding an sich"). In modern analytical philosophy some elements of this view survived, namely, the distinction between framework construction and actual data of experience, supposition or voluntary setting. The conditio humana is characterised by our capacity of acting. Acting is defined as behaviour determined and controlled by information processes. The structure of these processes defines the semantics and logical principles of practical philosophy. From this view follows the conception of value judgments, the logic of preferences, formal teleology, the analysis of utility and norm logic. The framework theories should be open in order to be able to express all possible theoretical views, namely, subjectivism as well as objectivism. The paper gives a concise account of the systems of practical thought (formal axiology, formal teleology, preference logic and norm logic) and their gnoseological problems. [source]


Secular Trends in the Incidence of Female Breast Cancer in the United States, 1973–1998

THE BREAST JOURNAL, Issue 2 2004
Kiumarss Nasseri DVM
Abstract: Statistical modeling suggests a causal association between the rapid increase in the incidence of female breast cancer (FBC) in the United States and the widespread use of screening mammography. Additional support for this suggestion is a shift in the stage at diagnosis that consists of an increase in early-stage diagnosis followed by a decrease in late-stage diagnosis. This has not been reported in the United States. The objective of this study was to examine the secular trends in the incidence of FBC in search of empirical support for this shift. FBC cases in the Surveillance, Epidemiology, and End Results (SEER) database from 1973 through 1998 were dichotomized into early and late detection based on stage at diagnosis. Early detection included all in situ and invasive cases with local spread. Late detection included cases with regional spread and distant metastasis. Joinpoint segmented regression modeling was used for trend analysis. Early detection in white and black women followed a similar pattern of significant increase in the early 1980s that continued through 1998, with slight modification in 1987. The expected shift in stage was noticed only for white women, whose incidence of late detection began to decline in 1987. The incidence of late detection in black women has remained stable. These results provide further support for the previously implied causal association between the use of screening mammography and the increased incidence of FBC in the United States. They also show that the expected stage shift appeared in white women 50–69 years of age after an estimated detection lead time (DLT) of about 5 years. This is the first estimate of DLT in the United States that is based on actual data. The subsequent increase in late detection in white women since 1993 may be due to changes in case management and the increased use of sentinel lymph node biopsy (SLNB) rather than changes in the etiology or biology of FBC. [source]
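
The trend-analysis step can be illustrated with a toy version of joinpoint regression: grid-search a single changepoint for a continuous two-segment linear fit. Production joinpoint software fits multiple joinpoints to (log) rates with permutation testing; the series below is fabricated solely to show the mechanics, not SEER data.

```python
import numpy as np

def one_joinpoint_fit(years, rates):
    """Grid-search a single joinpoint for a continuous two-segment least-squares line."""
    best = None
    for jp in years[2:-2]:                                        # keep >= 2 points per segment
        X = np.column_stack([np.ones_like(years, dtype=float),    # intercept
                             years - years[0],                    # overall slope
                             np.clip(years - jp, 0, None)])       # change in slope after jp
        beta, res, *_ = np.linalg.lstsq(X, rates, rcond=None)
        sse = float(res[0]) if res.size else float(np.sum((rates - X @ beta) ** 2))
        if best is None or sse < best[0]:
            best = (sse, jp, beta)
    return best[1], best[2]

# Fabricated incidence-like series rising until about 1987, then declining:
years = np.arange(1973, 1999)
trend = np.where(years < 1987, 80 + 2.5 * (years - 1973), 115 - 1.0 * (years - 1987))
noisy = trend + np.random.default_rng(1).normal(0.0, 1.5, years.size)
jp, beta = one_joinpoint_fit(years, noisy)
print("estimated joinpoint year:", int(jp))
```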


Curve registration by local regression

THE CANADIAN JOURNAL OF STATISTICS, Issue 1 2000
A. Kneip
Abstract Functional data analysis involves the extension of familiar statistical procedures such as principal-components analysis, linear modelling and canonical correlation analysis to data where the raw observation is a function xi(t). An essential preliminary to a functional data analysis is often the registration or alignment of salient curve features by suitable monotone transformations hi(t). In effect, this conceptualizes variation among functions as being composed of two aspects: phase and amplitude. Registration aims to remove phase variation as a preliminary to statistical analyses of amplitude variation. A local nonlinear regression technique is described for identifying the smooth monotone transformations hi, and is illustrated by analyses of simulated and actual data. [source]
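
A simpler relative of the method, landmark registration, conveys the core idea of a monotone time warp hi(t): the sketch below pins matched peak locations with a monotone interpolant and evaluates the curve on the warped clock. The article's local nonlinear regression estimates the warps without pre-specified landmarks, so this is only an illustration.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def landmark_warp(t, ref_landmarks, curve_landmarks):
    """Monotone warp h(t) with h(reference landmark) = curve landmark and endpoints pinned,
    so that x_registered(t) = x(h(t)) aligns the curve's features to the reference."""
    knots_x = np.concatenate([[t[0]], ref_landmarks, [t[-1]]])
    knots_y = np.concatenate([[t[0]], curve_landmarks, [t[-1]]])
    return PchipInterpolator(knots_x, knots_y)(t)

t = np.linspace(0.0, 1.0, 201)
x1 = np.exp(-(t - 0.40) ** 2 / 0.005)      # reference curve, peak (landmark) at t = 0.40
x2 = np.exp(-(t - 0.55) ** 2 / 0.005)      # curve to register, peak at t = 0.55

h = landmark_warp(t, ref_landmarks=[0.40], curve_landmarks=[0.55])
x2_registered = np.interp(h, t, x2)        # evaluate x2 on the warped clock h(t)
print(t[np.argmax(x1)], t[np.argmax(x2)], t[np.argmax(x2_registered)])   # 0.4, 0.55, ~0.4
```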


OPTIMAL MECHANISM DESIGN AND DYNAMIC ANALYSIS OF A 3-LEG 6-DOF LINEAR MOTOR BASED PARALLEL MANIPULATOR

ASIAN JOURNAL OF CONTROL, Issue 1 2004
Thong-Shing Hwang
ABSTRACT This paper presents the optimal mechanism design and dynamic analysis of a prototype 3-leg 6-DOF (degree-of-freedom) parallel manipulator. Inverse kinematics, forward kinematics, inverse dynamics and the working space characterizing the platform motion are derived. In the presented architecture, the base platform has three linear slideways individually actuated by a synchronous linear servo motor, and each extensible vertical link connecting the upper and base platforms is actuated by an inductive AC servo motor. The linear motors contribute high-speed movements to the upper platform. This kind of architecture, using hybrid (linear and AC) motors, yields high-performance motion, especially in the working space. The maximal working angles obtained are the significant novel contribution of this architecture. The Taguchi Experimental Method is applied to design the optimal mechanism of the platform system, and the result is used as the actual data to build the system. [source]


Analyzing Incomplete Data Subject to a Threshold using Empirical Likelihood Methods: An Application to a Pneumonia Risk Study in an ICU Setting

BIOMETRICS, Issue 1 2010
Jihnhee Yu
Summary The initial detection of ventilator-associated pneumonia (VAP) for inpatients at an intensive care unit needs composite symptom evaluation using clinical criteria such as the clinical pulmonary infection score (CPIS). When CPIS is above a threshold value, bronchoalveolar lavage (BAL) is performed to confirm the diagnosis by counting actual bacterial pathogens. Thus, CPIS and BAL results are closely related and both are important indicators of pneumonia whereas BAL data are incomplete. To compare the pneumonia risks among treatment groups for such incomplete data, we derive a method that combines nonparametric empirical likelihood ratio techniques with classical testing for parametric models. This technique augments the study power by enabling us to use any observed data. The asymptotic property of the proposed method is investigated theoretically. Monte Carlo simulations confirm both the asymptotic results and good power properties of the proposed method. The method is applied to the actual data obtained in clinical practice settings and compares VAP risks among treatment groups. [source]
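
The basic building block behind such methods, an Owen-style empirical likelihood ratio test for a single mean, is sketched below; the article's procedure additionally handles the threshold/censoring structure of the BAL data and the comparison of treatment groups, which this sketch does not attempt.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_ratio_test_mean(x, mu0):
    """Empirical likelihood ratio test of H0: E[X] = mu0 for one sample (Owen's construction).
    Returns the statistic -2*log(R) and its asymptotic chi-square(1) p-value."""
    z = np.asarray(x, dtype=float) - mu0
    if z.min() >= 0.0 or z.max() <= 0.0:
        return np.inf, 0.0                      # mu0 lies outside the convex hull of the data
    def score(lam):                             # derivative of the log-EL profile in lambda
        return np.sum(z / (1.0 + lam * z))
    lo = (-1.0 + 1e-10) / z.max()               # keep every weight factor 1 + lam*z_i positive
    hi = (-1.0 + 1e-10) / z.min()
    lam = brentq(score, lo + 1e-12, hi - 1e-12)
    stat = 2.0 * np.sum(np.log(1.0 + lam * z))
    return stat, chi2.sf(stat, df=1)

rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=60)
print(el_ratio_test_mean(sample, mu0=2.0))      # large p-value expected (H0 true)
print(el_ratio_test_mean(sample, mu0=3.0))      # small p-value expected (H0 false)
```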


Accounting for Multiplicities in Assessing Drug Safety: A Three-Level Hierarchical Mixture Model

BIOMETRICS, Issue 2 2004
Scott M. Berry
Summary. Multiple comparisons and other multiplicities are among the most difficult of problems that face statisticians, frequentists, and Bayesians alike. An example is the analysis of the many types of adverse events (AEs) that are recorded in drug clinical trials. We propose a three-level hierarchical mixed model. The most basic level is type of AE. The second level is body system, each of which contains a number of types of possibly related AEs. The highest level is the collection of all body systems. Our analysis allows for borrowing across body systems, but there is greater potential, depending on the actual data, for borrowing within each body system. The probability that a drug has caused a type of AE is greater if its rate is elevated for several types of AEs within the same body system than if the AEs with elevated rates were in different body systems. We give examples to illustrate our method and we describe its application to other types of problems. [source]


An analysis of variability in the manufacturing of dexosomes: Implications for development of an autologous therapy,

BIOTECHNOLOGY & BIOENGINEERING, Issue 2 2005
Sanjay Patel
Abstract Dexosomes are nanometer-size vesicles released by dendritic cells, possessing much of the cellular machinery required to stimulate an immune response (i.e. MHC Class I and II). The ability of patient-derived dexosomes loaded with tumor antigens to elicit anti-tumor activity is currently being evaluated in clinical trials. Unlike conventional biologics, where variability between lots of product arises mostly from the manufacturing process, an autologous product has inherent variability in the starting material due to heterogeneity in the human population. In an effort to assess the variability arising from the dexosome manufacturing process versus the human starting material, 144 dexosome preparations from normal donors (111) and cancer patients (33) from two Phase I clinical trials were analyzed. A large variability in the quantity of dexosomes (measured as the number of MHC Class II molecules) produced between individual lots was observed (>50-fold). An analysis of intra-lot variability shows that the manufacturing process introduces relatively little of this variability. To identify the source(s) of variability arising from the human starting material, distributions of the key parameters involved in dexosome production were established, and a model created. Computer simulations using this model were performed and compared to the actual data observed. The main conclusion from these simulations is that the number of cells collected per individual and the productivity of these cells are the principal sources of variability in the production of Class II. The approach described here can be extended to other autologous therapies in general to evaluate control of manufacturing processes. Moreover, this analysis of process variability is directly applicable to production at a commercial scale, since the large-scale manufacture of autologous products entails an exact process replication rather than scale-up in volume, as is the case with traditional drugs or biologics. © 2005 Wiley Periodicals, Inc. [source]
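
The kind of simulation described can be sketched as follows: per-lot Class II output is modeled as cells collected times per-cell productivity times a process recovery factor, each drawn from an assumed lognormal distribution, so the variance contributions add on the log scale. All distribution parameters below are invented for illustration, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(7)
n_lots = 10_000

# Assumed lognormal spreads for the three inputs (sigmas are log-scale standard deviations):
cells_collected = rng.lognormal(mean=np.log(5e9), sigma=0.55, size=n_lots)   # cells per apheresis
per_cell_yield = rng.lognormal(mean=np.log(2e4), sigma=0.60, size=n_lots)    # Class II per cell
process_recovery = rng.lognormal(mean=np.log(0.5), sigma=0.10, size=n_lots)  # manufacturing step

class_ii_per_lot = cells_collected * per_cell_yield * process_recovery
spread = np.percentile(class_ii_per_lot, 97.5) / np.percentile(class_ii_per_lot, 2.5)
print(f"lot-to-lot spread (2.5th to 97.5th percentile): about {spread:.0f}-fold")

# Because the factors multiply, log-scale variances add, making each factor's share explicit:
total = 0.55**2 + 0.60**2 + 0.10**2
for name, s in [("cells collected", 0.55), ("per-cell productivity", 0.60), ("process", 0.10)]:
    print(f"{name:>22s}: {s**2 / total:.0%} of log-scale variance")
```

In this toy setup the donor-driven factors (cells collected and per-cell productivity) account for nearly all of the lot-to-lot variance, mirroring the study's conclusion that the process itself contributes relatively little.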


Reactor Modeling of Gas-Phase Polymerization of Ethylene

CHEMICAL ENGINEERING & TECHNOLOGY (CET), Issue 11 2004
A. Kiashemshaki
Abstract A model is developed for evaluating the performance of industrial-scale gas-phase polyethylene production reactors. This model is able to predict the properties of the produced polymer for both linear low-density and high-density polyethylene grades. A pseudo-homogeneous state was assumed in the fluidized bed reactor based on negligible heat and mass transfer resistances between the bubble and emulsion phases. The non-ideal flow pattern in the fluidized bed reactor was described by the tanks-in-series model based on information obtained from the literature. The kinetic model used in this work allows prediction of the properties of the produced polymer. The presented model was compared with the actual data in terms of melt index and density, and it was shown that there is good agreement between the actual and calculated properties of the polymer. New correlations were developed to predict the melt index and density of polyethylene based on the operating conditions of the reactor and the composition of the reactants in the feed. [source]
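
The non-ideal flow description mentioned in the abstract rests on the tanks-in-series residence-time distribution, sketched below for a bed represented by N equal mixed tanks; the number of tanks and the mean residence time are illustrative, not the values used in the reactor model.

```python
import numpy as np
from math import factorial

def tanks_in_series_rtd(t, n_tanks, tau_total):
    """Residence-time distribution for n equal stirred tanks in series:
    E(t) = t**(n-1) / ((n-1)! * tau_i**n) * exp(-t / tau_i), with tau_i = tau_total / n."""
    tau_i = tau_total / n_tanks
    return t ** (n_tanks - 1) / (factorial(n_tanks - 1) * tau_i ** n_tanks) * np.exp(-t / tau_i)

t = np.linspace(0.0, 3600.0, 4000)    # time, s
for n in (1, 3, 10):                  # n = 1 is an ideal CSTR; large n approaches plug flow
    E = tanks_in_series_rtd(t, n, tau_total=600.0)
    area = np.sum(E) * (t[1] - t[0])               # should be close to 1
    mean_t = np.sum(t * E) * (t[1] - t[0])         # should be close to tau_total
    print(f"{n:2d} tanks: area = {area:.3f}, mean residence time = {mean_t:.0f} s")
```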