Process Data

Selected Abstracts


    A Hybrid SPC Method with the Chi-Square Distance Monitoring Procedure for Large-scale, Complex Process Data

    QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 4 2006
    Nong Ye
    Abstract Standard multivariate statistical process control (SPC) techniques, such as Hotelling's T², cannot easily handle large-scale, complex process data and often fail to detect out-of-control anomalies for such data. We develop a computationally efficient and scalable Chi-Square (χ²) Distance Monitoring (CSDM) procedure for monitoring large-scale, complex process data to detect out-of-control anomalies, and test the performance of the CSDM procedure using various kinds of process data involving uncorrelated, correlated, auto-correlated, normally distributed, and non-normally distributed data variables. Based on the advantages and disadvantages of the CSDM procedure in comparison with Hotelling's T² for various kinds of process data, we design a hybrid SPC method with the CSDM procedure for monitoring large-scale, complex process data. Copyright © 2005 John Wiley & Sons, Ltd. [source]
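
    The chi-square distance at the heart of the CSDM procedure avoids the covariance inversion that makes Hotelling's T² costly for high-dimensional data. The sketch below contrasts the two statistics on simulated data; the exact form of the distance and the empirical control limit are illustrative assumptions, not the authors' published settings.

```python
import numpy as np

def hotelling_t2(x, mean, cov_inv):
    """Hotelling's T2 for one observation: needs the inverse covariance,
    which is expensive and ill-conditioned for large, correlated data."""
    d = x - mean
    return float(d @ cov_inv @ d)

def chi_square_distance(x, mean):
    """Chi-square-style distance from the in-control mean: no covariance
    inversion, so it scales to high-dimensional process data (assumed form)."""
    return float(np.sum((x - mean) ** 2 / np.abs(mean)))

rng = np.random.default_rng(0)
train = rng.normal(loc=5.0, scale=1.0, size=(1000, 50))    # in-control data
mean = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

# Empirical control limit from in-control distances (an illustrative choice).
limit = np.percentile([chi_square_distance(x, mean) for x in train], 99.7)

x_new = rng.normal(loc=6.0, scale=1.0, size=50)            # mean-shifted sample
print(chi_square_distance(x_new, mean) > limit)            # True: out of control
print(hotelling_t2(x_new, mean, cov_inv))                  # baseline comparison
```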


    Marginal Mark Regression Analysis of Recurrent Marked Point Process Data

    BIOMETRICS, Issue 2 2009
    Benjamin French
    Summary Longitudinal studies typically collect information on the timing of key clinical events and on specific characteristics that describe those events. Random variables that measure qualitative or quantitative aspects associated with the occurrence of an event are known as marks. Recurrent marked point process data consist of possibly recurrent events, with the mark (and possibly exposure) measured if and only if an event occurs. Analysis choices depend on which aspect of the data is of primary scientific interest. First, factors that influence the occurrence or timing of the event may be characterized using recurrent event analysis methods. Second, if there is more than one event per subject, then the association between exposure and the mark may be quantified using repeated measures regression methods. We detail assumptions required of any time-dependent exposure process and the event time process to ensure that linear or generalized linear mixed models and generalized estimating equations provide valid estimates. We provide theoretical and empirical evidence that if these conditions are not satisfied, then an independence estimating equation should be used for consistent estimation of association. We conclude with the recommendation that analysts carefully explore both the exposure and event time processes prior to implementing a repeated measures analysis of recurrent marked point process data. [source]
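
    The abstract's bottom-line recommendation is that, when the exposure and event-time conditions fail, the mark should be regressed on exposure with an independence estimating equation rather than a mixed model or a non-independence GEE. A minimal sketch of that recommendation using statsmodels follows; the simulated data and the time-fixed exposure are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.generalized_estimating_equations import GEE
from statsmodels.genmod.cov_struct import Independence
from statsmodels.genmod.families import Gaussian

# Simulated recurrent marked point process data: each subject contributes a
# variable number of events; the mark is observed only when an event occurs.
rng = np.random.default_rng(1)
rows = []
for subject in range(200):
    exposure = rng.binomial(1, 0.5)          # time-fixed exposure, for simplicity
    n_events = rng.poisson(3) + 1            # recurrent events per subject
    for _ in range(n_events):
        mark = 1.0 + 0.5 * exposure + rng.normal()   # mark measured at the event
        rows.append((subject, exposure, mark))
df = pd.DataFrame(rows, columns=["subject", "exposure", "mark"])

X = pd.DataFrame({"intercept": 1.0, "exposure": df["exposure"]})

# Independence estimating equation: the working-independence choice the
# abstract recommends for consistent estimation of the mark-exposure association.
iee = GEE(df["mark"], X, groups=df["subject"],
          cov_struct=Independence(), family=Gaussian()).fit()
print(iee.params)    # exposure coefficient should recover ~0.5
```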


    Use of Process Data To Assess Chromatographic Performance in Production-Scale Protein Purification Columns

    BIOTECHNOLOGY PROGRESS, Issue 2 2003
    Tina M. Larson
    Transition analysis was performed on production-scale chromatography data in order to monitor column performance. Analysis of over 300 transitions from several different chromatography operations demonstrated the utility of the techniques presented. Several of the transitions analyzed occurred on columns with known integrity breaches. The techniques proved sensitive for detection of these breaches. Seven transition calculations are presented, which were combined to produce a single overall integrity value for each column. In addition, principal components analysis (PCA) was used to detect shifts in the transition pattern, including those attributed to integrity breaches. Besides detection of integrity breaches, transition analysis proved useful in monitoring column efficiency over multiple column uses. [source]


    Administrative Characteristics of Comprehensive Prenatal Case Management Programs

    PUBLIC HEALTH NURSING, Issue 5 2003
    L. Michele Issel Ph.D., R.N.
    Abstract The purpose of this study was to examine comprehensive prenatal case management programs in terms of organizational, program, and process characteristics. Data from 66 program surveys of government agencies were used. Organizational capacity was measured as the extent of organizational change and the extent of interagency agreements. Program data included the age and size of the program, reasons for having case management, and funding diversity. Process data covered eight types of interventions. The most highly rated reason for having case management was to improve client outcomes. The greatest organizational change was in the area of organizational structure, followed by financial status and types of services provided. Contracts with other agencies were rare. Agencies with more interagency contacts reported higher levels of change in the case management department and turnover among mid-level managers. Older programs had fewer employees. Approximately 49% of client contacts were not billed to Medicaid. Larger programs had significantly less time allocated to emotional support and coaching. Data on organizational characteristics, program, and process variables provide insights into comprehensive case management. Relationships among these variables underscore the importance of studying client outcomes within the context of program and organizational idiosyncrasies. Future studies of comprehensive prenatal case management should focus on cross-level questions. [source]


    Treatment process, alliance and outcome in brief versus extended treatments for marijuana dependence

    ADDICTION, Issue 10 2010
    Carly J. Gibbons
    ABSTRACT Aims The Marijuana Treatment Project, a large multi-site randomized clinical trial, compared a delayed-treatment control condition with a brief (two-session) and an extended (nine-session) multi-component treatment among 450 marijuana-dependent participants. In this report we present treatment process data, including the fidelity of treatment delivery in the three community-based treatment settings, as well as the relationships between treatment process and outcome. Design Independent ratings of clinician adherence and competence were made based on 633 videotaped sessions from 163 participants. Relationships between clinician adherence and competence, ratings of the working alliance, and marijuana treatment outcomes were evaluated. Findings Protocol treatments were implemented with strong fidelity to manual specifications and with few significant differences in adherence and competence ratings across sites. In the brief two-session treatment condition, only the working alliance was associated significantly with frequency of marijuana use; in the extended treatment, therapist ratings of working alliance predicted outcomes, as did the interaction of alliance and curvilinear adherence. Conclusions Behavioral treatments for marijuana use were delivered in community settings with good fidelity. Participant and therapist working alliance scores were associated significantly with improved marijuana use outcomes in a brief behavioral treatment for adults with marijuana dependence. In the extended treatment, therapist ratings of working alliance were associated with more positive outcomes; however, there was also a significant interaction between alliance and curvilinear adherence. [source]


    Monitoring of Machining Processes Using Sensor-Equipped Tools

    ADVANCED ENGINEERING MATERIALS, Issue 7 2010
    Ekkard Brinksmeier
    In contrast to conventional monitoring systems, sensor-equipped tools make it possible to gain information about the process status directly from the contact zone between the tool and the component being machined. This can be realized, for example, by integrating small temperature sensors into grinding wheels; the process data are then transmitted by a telemetric unit attached to the grinding wheel's core. In this paper, the development of a new thin-film thermocouple sensor concept is described. The unique feature of this sensor is the continuous contacting of the thermocouple through the wear inherent to the grinding process, which smears the thermoelectric layers and thus forms the measuring junction of a thermocouple. The system was used in OD grinding processes with the aim of detecting grinding burn and process instabilities. By reducing the volume of the sensors, a fast response and a high time resolution can be obtained. In this way, the key parameters of the practical operation can be observed as closely as possible to the cutting area, so that process efficiency and tool status can be monitored independently of the workpiece machining conditions. All sensors used are thermocouples of type K, a combination of Chromel (NiCr) and Alumel (NiAlMnSi). The maximum temperature measurable by this sensor is about 1350 °C, which ensures applicability in the grinding process. Telemetry components that amplify and transmit the thermovoltage signals are matched to this type of thermocouple. The ability of the set-up to detect thermal influences was demonstrated in grinding processes with a continuously increasing specific material removal rate. The approach serves to measure temperatures between fast sliding surfaces in harsh environments (fluids, high pressure, heat), similar to those of the grinding process. Its application is therefore not limited to tools; it is also suitable for other rotating components such as bearings, gears, and shafts in powertrains. [source]


    Monitoring bacterial and archaeal community shifts in a mesophilic anaerobic batch reactor treating a high-strength organic wastewater

    FEMS MICROBIOLOGY ECOLOGY, Issue 3 2008
    Changsoo Lee
    Abstract Shifts in bacterial and archaeal communities, associated with changes in chemical profiles, were investigated in an anaerobic batch reactor treating dairy-processing wastewater prepared with whey permeate powder. The dynamics of bacterial and archaeal populations were monitored by quantitative real-time PCR and showed good agreement with the process data. A rapid increase in bacterial populations and a high rate of substrate fermentation were observed during the initial period. Growth and regrowth of archaeal populations occurred with biphasic production of methane, corresponding to the diauxic consumption of acetate and propionate. Bacterial community structure was examined by denaturing gradient gel electrophoresis (DGGE) targeting 16S rRNA genes. An Aeromonas-like organism was suggested to be mainly responsible for the rapid fermentation of carbohydrate during the initial period. Several band sequences closely related to Clostridium species, capable of carbohydrate fermentation, lactate or ethanol fermentation, and/or homoacetogenesis, were also detected. Statistical analyses of the DGGE profiles showed that the bacterial community structure, as well as the process performance, varied with incubation time. Our results demonstrated that the bacterial community shifted in a way that reflected the performance changes and, in particular, that a significant community shift corresponded to a considerable process event. This suggests that an anaerobic digestion process could be diagnosed by monitoring bacterial community shifts. [source]


    Determination of isotope fractionation factors and quantification of carbon flow by stable carbon isotope signatures in a methanogenic rice root model system

    GEOBIOLOGY, Issue 2 2006
    H. PENNING
    ABSTRACT Methanogenic processes can be quantified by stable carbon isotopes if the necessary modeling parameters, especially fractionation factors, are known. Anoxically incubated rice roots are a model system with a dynamic microbial community and are thus suitable for investigating fundamental geochemical processes in anoxic natural systems. Here we applied an inhibitor of acetoclastic methanogenesis (methyl fluoride), calculated the thermodynamics of the involved processes, and analyzed the stable carbon isotope signatures of CO2, CH4, propionate, acetate, and the methyl carbon of acetate to characterize the carbon flow during anaerobic degradation of rice roots to the final products CO2 and CH4. Methyl fluoride inhibited acetoclastic methanogenesis and thus allowed us to quantify the fractionation factor of CH4 production from H2/CO2. Since our model system was not affected by H2 gradients, the fractionation factor could alternatively be determined from the Gibbs free energies of hydrogenotrophic methanogenesis. The fractionation factor of acetoclastic methanogenesis was also determined experimentally. The data were used to successfully model the carbon flow. The model results were in agreement with the measured process data, but were sensitive to even small changes in the fractionation factor of hydrogenotrophic methanogenesis. Our study demonstrates that stable carbon isotope signatures are a proper tool for quantifying carbon flow if fractionation factors are determined precisely. [source]


    Bilinear modelling of batch processes. Part I: theoretical discussion

    JOURNAL OF CHEMOMETRICS, Issue 5 2008
    Abstract When studying the principal component analysis (PCA) or partial least squares (PLS) modelling of batch process data, one realizes that there is a wide range of approaches. In many cases, new modelling approaches are presented simply because they work properly for a particular application (for example, on-line monitoring) and a given number of processes. A clear understanding of why these approaches perform successfully, and of their advantages and disadvantages relative to the others, is seldom supplied. Why does modelling after batch-wise unfolding capture changing dynamics? What are the consequences of variable-wise unfolding? Is there any best unfolding method? When should several models for a single process be used? In this paper, it is shown how these and other related questions can be answered by properly analyzing the dynamic covariance structures of the various approaches. Copyright © 2008 John Wiley & Sons, Ltd. [source]
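
    The two unfolding schemes the paper interrogates differ only in how the three-way batch array is flattened before PCA. A small numpy sketch of the two reshapes, with illustrative dimensions:

```python
import numpy as np

# Batch process data: I batches x J variables x K time points (illustrative).
I, J, K = 30, 5, 100
X = np.random.default_rng(2).normal(size=(I, J, K))

# Batch-wise unfolding: one row per batch, columns are (time, variable) pairs.
# PCA on this matrix models how whole trajectories co-vary across batches,
# which is how it captures the changing dynamics the paper discusses.
X_batchwise = X.transpose(0, 2, 1).reshape(I, J * K)       # shape (I, J*K)

# Variable-wise unfolding: one row per (batch, time) observation. PCA here
# models the instantaneous correlation of the J variables and treats all
# time points as exchangeable samples.
X_variablewise = X.transpose(0, 2, 1).reshape(I * K, J)    # shape (I*K, J)

# Mean-centring differs accordingly: batch-wise centring removes the average
# trajectory; variable-wise centring only removes each variable's grand mean.
X_bw_centred = X_batchwise - X_batchwise.mean(axis=0)
X_vw_centred = X_variablewise - X_variablewise.mean(axis=0)
print(X_bw_centred.shape, X_vw_centred.shape)
```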


    A backoff strategy for model-based experiment design under parametric uncertainty

    AICHE JOURNAL, Issue 8 2010
    Federico Galvanin
    Abstract Model-based experiment design techniques are an effective tool for the rapid development and assessment of dynamic deterministic models, yielding the most informative process data to be used for the estimation of the process model parameters. A particular advantage of the model-based approach is that it permits the definition of a set of constraints on the experiment design variables and on the predicted responses. However, uncertainty in the model parameters can lead the constrained design procedure to predict experiments that turn out to be, in practice, suboptimal, thus decreasing the effectiveness of the experiment design session. Additionally, in the presence of parametric mismatch, the feasibility constraints may well turn out to be violated when that optimally designed experiment is performed, leading in the best case to less informative data sets or, in the worst case, to an infeasible or unsafe experiment. In this article, a general methodology is proposed to formulate and solve the experiment design problem by explicitly taking into account the presence of parametric uncertainty, so as to ensure both feasibility and optimality of the planned experiment. A prediction of the system responses for the given parameter distribution is used to evaluate and update suitable backoffs from the nominal constraints, which are used in the design session to keep the system within a feasible region with specified probability. This approach is particularly useful when designing optimal experiments starting from limited preliminary knowledge of the parameter set, with great improvement in terms of design efficiency and flexibility of the overall iterative model development scheme. The effectiveness of the proposed methodology is demonstrated and discussed by simulation through two illustrative case studies concerning the parameter identification of physiological models related to diabetes and cancer care. © 2009 American Institute of Chemical Engineers AIChE J, 2010 [source]


    Improved Fourier transform for processes with initial cyclic-steady-state

    AICHE JOURNAL, Issue 6 2010
    Yu Jin Cheon
    Abstract A new process identification method is proposed to estimate the frequency responses of a process from the activated process input and output. It can extract many more frequency responses and guarantees better accuracy than the previous describing function analysis algorithm. In addition, the proposed method can be applied to the case in which the initial part of the activated process data is periodic (cyclic-steady-state), which is not possible with any previous nonparametric identification method using the modified Fourier transform or Fourier analysis. Furthermore, it can incorporate all the cases in which either the initial part is steady-state and the final part is cyclic-steady-state, or both the initial and final parts are steady-state. © 2010 American Institute of Chemical Engineers AIChE J, 2010 [source]
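
    For context, the baseline that the proposed method improves on can be sketched as the standard nonparametric estimate: divide the Fourier transforms of the deviation-form output and input, which is only valid when the record starts and ends at steady state (the restriction on cyclic-steady-state initial data that the paper removes). All signals below are illustrative.

```python
import numpy as np

def frequency_response(u, y, dt):
    """Nonparametric estimate G(jw) = Y(jw)/U(jw) from a single activation
    test, assuming steady state before and after the test window."""
    U = np.fft.rfft(u - u[0])      # deviation variables from initial steady state
    Y = np.fft.rfft(y - y[0])
    w = 2 * np.pi * np.fft.rfftfreq(len(u), d=dt)
    return w, Y / U

# Illustrative test: first-order process tau*y' = -y + u excited by a pulse.
dt, tau = 0.01, 2.0
t = np.arange(0, 60, dt)
u = np.where((t > 1) & (t < 3), 1.0, 0.0)
y = np.zeros_like(t)
for k in range(1, len(t)):
    y[k] = y[k - 1] + dt * (-y[k - 1] + u[k - 1]) / tau    # Euler integration

w, G = frequency_response(u, y, dt)
print(abs(G[1]), 1 / np.sqrt(1 + (tau * w[1]) ** 2))   # estimate vs true gain
```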


    Nonstationary fault detection and diagnosis for multimode processes

    AICHE JOURNAL, Issue 1 2010
    Jialin Liu
    Abstract Fault isolation methods based on data-driven approaches usually assume that abnormal event data will form a new operating region, and they measure the differences between normal and faulty states to identify the faulty variables. In practice, operators intervene in processes when they become aware of abnormalities: the process behavior is nonstationary while the operators are trying to bring it back to normal states. Therefore, the faulty variables have to be located as soon as the process leaves its normal operating regions. For an industrial process, multiple normal operating modes are common. On the basis of the assumption that the operating data follow a Gaussian distribution within an operating region, the Gaussian mixture model is employed to extract a series of operating modes from the historical process data. The local statistic T² and its normalized contribution chart are derived in this article for detecting abnormalities early and isolating faulty variables. © 2009 American Institute of Chemical Engineers AIChE J, 2010 [source]
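
    A sketch of the two ingredients the abstract names: a Gaussian mixture fitted to historical data to extract operating modes, and a local T²-style statistic with per-variable contributions computed against the nearest mode. The exact statistic and contribution definitions in the article may differ; data and thresholds here are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Historical data from two normal operating modes (illustrative).
rng = np.random.default_rng(3)
hist = np.vstack([rng.normal([0, 0, 0], 0.5, (500, 3)),
                  rng.normal([5, 5, 5], 0.5, (500, 3))])

gmm = GaussianMixture(n_components=2, covariance_type="full").fit(hist)

def local_t2(x, gmm):
    """Mahalanobis distance to the most probable mode: a local T2-style
    statistic (the article's exact definition may differ)."""
    k = int(gmm.predict(x.reshape(1, -1))[0])
    d = x - gmm.means_[k]
    cov_inv = np.linalg.inv(gmm.covariances_[k])
    t2 = float(d @ cov_inv @ d)
    contrib = d * (cov_inv @ d)       # per-variable contributions, sum to T2
    return t2, contrib / t2

x_fault = np.array([0.0, 2.5, 0.0])   # variable 2 has drifted
t2, contrib = local_t2(x_fault, gmm)
print(t2, contrib)                    # large T2; contribution isolates variable 2
```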


    SMB chromatography design using profile advancement factors, miniplant data, and rate-based process simulation

    AICHE JOURNAL, Issue 11 2009
    Shawn D. Feist
    Abstract This article describes a systematic miniplant-based approach to rapid development of simulated moving bed (SMB) chromatography applications. The methodology involves analysis of single-column pulse tests to screen adsorbents and operating conditions and to determine initial values of profile advancement factors used to specify flow rates for an initial SMB miniplant experiment. A lumped-parameter linear driving force rate-based model is developed by fitting process data from a single miniplant run. The data are fit in a two-step procedure involving initial determination of effective adsorption isotherm constants as best-fit parameters with subsequent adjustment of calculated mass transfer coefficients to refine the data fit. The resulting simulation is used to guide further miniplant work and minimize experimental effort. The methodology is illustrated with miniplant data for a binary protein separation showing excellent agreement between model results and process data generated over a wide range of operating conditions. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]


    Statistical process monitoring based on dissimilarity of process data

    AICHE JOURNAL, Issue 6 2002
    Manabu Kano
    Multivariate statistical process control (MSPC) has been widely used for monitoring chemical processes with highly correlated variables. In this work, a novel statistical process monitoring method is proposed based on the idea that a change of operating condition can be detected by monitoring the distribution of process data, which reflects the corresponding operating conditions. To quantitatively evaluate the difference between two data sets, a dissimilarity index is introduced. The monitoring performance of the proposed method, referred to as DISSIM, is compared with that of the conventional MSPC method through applications to simulated data collected from a simple 2 × 2 process and the Tennessee Eastman process. The results clearly show that the monitoring performance of DISSIM, especially dynamic DISSIM, is considerably better than that of the conventional MSPC method when the time-window size is appropriately selected. [source]
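
    A sketch of a dissimilarity index of the kind DISSIM uses: whiten the pooled data from two time windows, then measure how far the eigenvalues of one window's transformed covariance sit from 0.5 (the value obtained when both windows share a distribution). The weighting and windowing below follow one reading of the construction and may differ in detail from the paper.

```python
import numpy as np

def dissim(X1, X2):
    """Dissimilarity index between two data windows: whiten with the pooled
    covariance, then measure how the windows split the variance. Eigenvalues
    near 0.5 mean similar distributions; the index is 0 when identical."""
    n1, n2 = len(X1), len(X2)
    X1 = X1 - X1.mean(axis=0)
    X2 = X2 - X2.mean(axis=0)
    R = (n1 * np.cov(X1, rowvar=False) + n2 * np.cov(X2, rowvar=False)) / (n1 + n2)
    lam, P = np.linalg.eigh(R)
    W = P / np.sqrt(lam)                                  # whitening transform
    S1 = (n1 / (n1 + n2)) * np.cov(X1 @ W, rowvar=False)  # window 1's share
    eig = np.linalg.eigvalsh(S1)
    return 4.0 / len(eig) * float(np.sum((eig - 0.5) ** 2))

rng = np.random.default_rng(4)
window_a = rng.normal(size=(200, 3))                             # normal operation
window_b = rng.normal(size=(200, 3))                             # normal operation
window_c = rng.normal(size=(200, 3)) * np.array([1.0, 3.0, 1.0])  # changed spread
print(dissim(window_a, window_b))   # small: same operating condition
print(dissim(window_a, window_c))   # larger: distribution has changed
```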


    Product transfer between plants using historical process data

    AICHE JOURNAL, Issue 10 2000
    Christiane M. Jaeckle
    Based on the concepts laid out in an earlier article (Jaeckle and MacGregor, 1998), this paper defines the problem of moving the production of a particular product grade from a plant A to another plant B when both plants have already produced a similar range of grades. Since the two plants may differ in size, configurations, and so on, the process conditions required to produce any given product grade may be very different in the two plants. How historical process data on both plants may be utilized to assist in this problem is investigated. A multivariate latent variable method is proposed that uses data from both plants to predict process conditions for plant B for a grade previously produced only in plant A. The approach is illustrated by a simulation example. [source]


    Feasibility of Using Interactive Voice Response to Monitor Daily Drinking, Moods, and Relationship Processes on a Daily Basis in Alcoholic Couples

    ALCOHOLISM, Issue 3 2010
    James A. Cranford
    Background: Daily process research on alcohol involvement has used paper-and-pencil and electronic data collection methods, but no studies have yet tested the feasibility of using Interactive Voice Response (IVR) technology to monitor drinking, affective, and social interactional processes among alcoholic (ALC) couples. This study tested the feasibility of using IVR with n = 54 ALC couples. Methods: Participants were n = 54 couples (probands who met criteria for a past 1-year alcohol use disorder and their partners) recruited from a substance abuse treatment center and the local community. Probands and their partners reported on their daily drinking, marital interactions, and moods once a day for 14 consecutive days using an IVR system. Probands and partners were on average 43.4 and 43.0 years old, respectively. Results: Participants completed a total of 1,418 out of a possible 1,512 diary days, for an overall compliance rate of 93.8%. ALC probands completed an average of 13.3 (1.0) diary reports, and partners completed an average of 13.2 (1.0) diary reports. On average, daily IVR calls lasted 7.8 (3.0) minutes for ALC probands and 7.6 (3.0) minutes for partners. Compliance was significantly lower on weekend days (Fridays and Saturdays) than on other weekdays for probands and spouses. Although today's intoxication predicted tomorrow's noncompliance for probands but not spouses, the strongest predictor of a proband's compliance was their spouse's compliance. Daily anxiety and marital conflict were associated with daily IVR nonresponse, which triggered automated reminder calls. Conclusions: The findings support IVR as a useful method for collecting daily drinking, mood, and relationship process data from alcoholic couples. Probands' compliance is strongly associated with their partners' compliance, and automated IVR calls may facilitate compliance on high-anxiety, high-conflict days. [source]


    Moment estimation for statistics from marked point processes

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 2 2001
    Dimitris N. Politis
    In spatial statistics the data typically consist of measurements of some quantity at irregularly scattered locations; in other words, the data form a realization of a marked point process. In this paper, we formulate subsampling estimators of the moments of general statistics computed from marked point process data, and we establish their L2-consistency. The variance estimator in particular can be used for the construction of confidence intervals for estimated parameters. A practical data-based method for choosing a subsampling parameter is given and illustrated on a data set. Finite sample simulation examples are also presented. [source]


    Online process mean estimation using L1 norm exponential smoothing

    NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 5 2009
    Wei Jiang
    Abstract A basic assumption in process mean estimation is that all process data are clean. However, many sensor system measurements are often corrupted with outliers. Outliers are observations that do not follow the statistical distribution of the bulk of the data and consequently may lead to erroneous results with respect to statistical analysis and process control. Robust estimators of the current process mean are crucial to outlier detection, data cleaning, process monitoring, and other process features. This article proposes an outlier-resistant mean estimator based on the L1 norm exponential smoothing (L1-ES) method. The L1-ES statistic is essentially model-free and demonstrably superior to existing estimators. It has the following advantages: (1) it captures process dynamics (e.g., autocorrelation), (2) it is resistant to outliers, and (3) it is easy to implement. © 2009 Wiley Periodicals, Inc. Naval Research Logistics 2009 [source]
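
    Read as the L1 analogue of classical exponential smoothing, the estimator minimizes an exponentially weighted sum of absolute deviations, whose solution is a weighted median; that is what makes it outlier-resistant. A sketch under that reading (the window truncation and smoothing constant are illustrative choices, not the article's):

```python
import numpy as np

def weighted_median(values, weights):
    """Value m minimizing sum(weights * |values - m|)."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w) / np.sum(w)
    return v[np.searchsorted(cdf, 0.5)]

def l1_es(x, lam=0.9, window=100):
    """L1-norm exponential smoothing of a process mean: the weighted median
    under exponentially decaying weights lam**(t - i) on observation x[i]."""
    est = np.empty(len(x))
    for t in range(len(x)):
        lo = max(0, t - window + 1)               # truncate tail for efficiency
        xs = x[lo:t + 1]
        ws = lam ** np.arange(t - lo, -1, -1)     # weight lam**(t-i) on x[i]
        est[t] = weighted_median(xs, ws)
    return est

rng = np.random.default_rng(5)
x = rng.normal(10.0, 1.0, 300)
x[100:110] += 25.0                                # burst of outliers
print(l1_es(x)[120])                              # estimate stays near 10
```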


    Process modeling and optimization of industrial ethylene oxide reactor by integrating support vector regression and genetic algorithm

    THE CANADIAN JOURNAL OF CHEMICAL ENGINEERING, Issue 1 2009
    Sandip Kumar Lahiri
    Abstract This article presents artificial intelligence-based process modeling and optimization strategies, namely support vector regression-genetic algorithm (SVR-GA), for the modeling and optimization of a catalytic industrial ethylene oxide (EO) reactor. In the SVR-GA approach, an SVR model is constructed for correlating process data comprising values of operating and performance variables. Next, the model inputs describing the process operating variables are optimized using a genetic algorithm (GA) with a view to maximizing the process performance. The GA possesses certain unique advantages over the commonly used gradient-based deterministic optimization algorithms. The SVR-GA is a new strategy for chemical process modeling and optimization. Its major advantage is that modeling and optimization can be conducted exclusively from historic process data, with no need for detailed knowledge of the process phenomenology (reaction mechanism, kinetics, etc.). Using the SVR-GA strategy, a number of sets of optimized operating conditions leading to maximized EO production and catalyst selectivity were obtained. The optimized solutions, when verified in the actual plant, resulted in a significant improvement in the EO production rate and catalyst selectivity. [source]
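
    A compact sketch of the SVR-GA loop: fit an SVR to historical operating/performance data, then let a genetic algorithm search the model's inputs for conditions that maximize predicted performance. The toy "plant" function, GA operators, and all settings below are illustrative stand-ins, not the article's reactor model.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)

# Historic process data: operating variables -> performance (toy stand-in).
X_hist = rng.uniform(0.0, 1.0, size=(400, 3))
y_hist = -((X_hist - [0.3, 0.7, 0.5]) ** 2).sum(axis=1) + 0.01 * rng.normal(size=400)

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01)).fit(X_hist, y_hist)

def ga_maximize(f, dim, pop=60, gens=40):
    """Minimal real-coded GA: pairwise tournaments, blend crossover, mutation."""
    P = rng.uniform(0.0, 1.0, (pop, dim))
    for _ in range(gens):
        fit = f(P)
        a, b = rng.integers(0, pop, (2, pop))       # random tournament pairs
        parents = np.where((fit[a] > fit[b])[:, None], P[a], P[b])
        children = 0.5 * (parents + parents[rng.permutation(pop)])  # crossover
        children += rng.normal(0.0, 0.05, children.shape)           # mutation
        P = np.clip(children, 0.0, 1.0)
        P[0] = parents[np.argmax(f(parents))]       # keep the best (elitism)
    return P[np.argmax(f(P))]

best = ga_maximize(model.predict, 3)
print(best)   # optimized operating point, near the (0.3, 0.7, 0.5) optimum
```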


    Multivariate Statistical Process Monitoring Using Kernel Density Estimation

    ASIA-PACIFIC JOURNAL OF CHEMICAL ENGINEERING, Issue 1-2 2005
    J. Liang
    In this paper, a general kernel density estimator is introduced and discussed for multivariate processes in order to provide enhanced real-time performance monitoring. The proposed approach is based on the concept of the kernel density function, which is more appropriate to the underlying probability distribution of industrial process data when developing a real-time monitoring scheme, and it overcomes the limitations of the conventional approach of defining the normal operating region under the assumption of normality. An optimal bandwidth selection rule is given based on the so-called mean integrated squared error index; the normal operating region of the process is then calculated using the optimal kernel density estimator before new process data are projected onto it. The results of a case study of an industrial reheating furnace clearly demonstrate the power and advantages (e.g. decreasing the number of false alarms, identifying abnormal behaviour earlier, and reducing data sparsity) of the kernel density estimator-based approach over the conventional approach under the assumption of normality, which is still widely used. [source]
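
    A sketch of the core idea: define the normal operating region as a density contour of a kernel density estimate fitted to normal operating data, rather than an ellipse implied by a normality assumption. Silverman's rule stands in here for the paper's MISE-based optimal bandwidth, and the contour level is an illustrative choice.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
normal_data = np.exp(rng.normal(size=(2, 1000)))   # skewed, non-Gaussian NOC data

# Kernel density estimate of the normal operating condition (NOC) data;
# 'silverman' is a common bandwidth rule, used here as a stand-in.
kde = gaussian_kde(normal_data, bw_method="silverman")

# Control limit: the density value below which only 1% of NOC data fall,
# i.e. the boundary of a 99% density contour of the normal operating region.
density_at_noc = kde(normal_data)
limit = np.percentile(density_at_noc, 1)

x_new = np.array([[8.0], [0.1]])                   # new observation (column vector)
print(kde(x_new)[0] >= limit)                      # False: flagged as abnormal
```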


    Piecewise analysis and modeling of circuit pack temperature cycling data

    BELL LABS TECHNICAL JOURNAL, Issue 3 2006
    Toby Joyce
    Temperature cycling environmental stress testing (EST) of circuit packs is a standard test procedure for the precipitation of latent defects in order to minimize early product lifecycle customer returns. EST is an expensive, energy-intensive bottleneck in the manufacturing process, one that is based on empiricisms that may be out of date. This presents a great opportunity for optimization and test cost reduction. This paper describes the characterization of temperature cycling through analysis and modeling of process data in order to optimize the test parameters: ramp rate, temperature extremes, dwell times, and number of cycles. Failure data from circuit packs tested at a Lucent facility are analyzed using a regression technique and graphical inspection. The dwell and ramp periods of the test are considered in a piecewise manner. A cost model is applied based on distributions fitted to the failure data. The analysis yields a methodology for the dynamic, value-based optimization of temperature cycling EST. © 2006 Lucent Technologies Inc. [source]


    Multifrequency permittivity measurements enable on-line monitoring of changes in intracellular conductivity due to nutrient limitations during batch cultivations of CHO cells

    BIOTECHNOLOGY PROGRESS, Issue 1 2010
    Sven Ansorge
    Abstract Lab and pilot scale batch cultivations of a CHO K1/dhfr− host cell line were conducted to evaluate on-line multifrequency permittivity measurements as a process monitoring tool. The β-dispersion parameters, such as the characteristic frequency (fC) and the permittivity increment (Δεmax), were calculated on-line from the permittivity spectra. The dual-frequency permittivity signal correlated well with the off-line measured biovolume and the viable cell density. A significant drop in permittivity was monitored at the transition from exponential growth to a phase with reduced growth rate. Although not reflected in off-line biovolume measurements, this decrease coincided with a drop in OUR and was probably caused by the depletion of glutamine and a metabolic shift occurring at the same time. Sudden changes in cell density, cell size, viability, and capacitance per membrane area (CM), as well as effects caused by medium conductivity (σm), could be excluded as reasons for the decrease in permittivity. After analysis of the process data, a drop in fC as a result of a fall in intracellular conductivity (σi) was identified as responsible for the observed changes in the dual-frequency permittivity signal. It is hypothesized that the β-dispersion parameter fC is indicative of changes in nutrient availability that have an impact on intracellular conductivity σi. On-line permittivity measurements consequently reflect not only the biovolume but also the physiological state of mammalian cell cultures. These findings should pave the way for a better understanding of the intracellular state of cells and render permittivity measurements an important tool in process development and control. © 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2010 [source]


    An Adaptive Recipe Implementation in Case-Based Formalism for Abnormal Condition Management

    CHEMICAL ENGINEERING & TECHNOLOGY (CET), Issue 12 2005
    D. Rizal
    Abstract This paper deals with accurate recipe implementation for abnormal condition management in a batch process using a case-based reasoning (CBR) approach. A set of new problems can be solved by reusing proven process solutions. The proposed system integrates quantitative and qualitative parameters for the adaptation of cases. A novel methodology is introduced to generate accurate recipes and to adapt them to the processes during normal and abnormal conditions. In particular, the differences between current conditions and the references (recipes) should be managed to prevent any hazardous conditions from arising. The processes are evaluated using their similarity to past cases. This intelligent approach distinguishes plausible cases, generates accurate recipes, and adapts to new situations. The aim is to use offline historical process data and safety-related information in order to propose changes and adjustments in the processes. [source]


    On the development and application of a self-organizing feature map-based patent map

    R & D MANAGEMENT, Issue 4 2002
    Byung-Un Yoon
    Recently, the range of R&D management has expanded to include the management of technological assets such as technology information, product/process data, and patents. Among others, the patent map (PM) has received increasing attention from practitioners and researchers alike in R&D management. However, the limitations of the conventional PM have become apparent as patent databases grow voluminous and the relationships among attributes become complex. Thus, more sophisticated data-mining tools are required to make full use of the potential information in patent databases. In this paper, we propose an exploratory process for developing a self-organizing feature map (SOFM)-based PM that visualizes the complex relationships among patents and the dynamic pattern of technological advancement. The utility of SOFM, vis-à-vis other tools, is highlighted as the size and complexity of the database increase, since it can reduce the amount of data by clustering and simultaneously visualize the reduced data on a lower-dimensional display. Specifically, three types of PM are suggested: a technology vacuum map, a claim point map, and a technology portfolio map. The proposed maps may be used in monitoring technological change, developing new products, and managing intellectual property. [source]
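
    A sketch of how an SOFM-based patent map can be built: train a small self-organizing map on patent feature vectors, project the patents onto the grid, and read sparsely hit units as candidate technology vacuums. The feature construction, grid size, and training schedule below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(8)
patents = rng.normal(size=(500, 20))                 # 500 patents x 20 features
grid_w, grid_h, dim = 10, 10, patents.shape[1]
W = rng.normal(size=(grid_w * grid_h, dim))          # codebook vectors
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)])

for t in range(3000):                                # online SOM training
    x = patents[rng.integers(len(patents))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))      # best-matching unit
    lr = 0.5 * (1 - t / 3000)                        # decaying learning rate
    radius = 3.0 * (1 - t / 3000) + 0.5              # shrinking neighbourhood
    dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
    h = np.exp(-dist2 / (2 * radius ** 2))           # neighbourhood function
    W += lr * h[:, None] * (x - W)

# Project all patents onto the map; empty units suggest technology vacuums.
hits = np.bincount(((patents[:, None, :] - W) ** 2).sum(-1).argmin(1),
                   minlength=len(W))
print(hits.reshape(grid_w, grid_h))
```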