Simplifying Assumptions

Selected Abstracts


Alternative Forms of Mixing Banking with Commerce: Evidence from American History

FINANCIAL MARKETS, INSTITUTIONS & INSTRUMENTS, Issue 2 2003
Joseph G. Haubrich
Much of the discussion about banking and commerce in America has failed to make several crucial distinctions and has not accounted for many arrangements that have promoted the mixing of these activities. We investigate the history of banking and commerce in the United States, looking both at bank control of commercial firms and at commercial firms' control of banks. We trace how these controls have changed with shifting definitions of "bank" and changing methods of "control." Despite the regulations prohibiting some arrangements that promote financial control, we find evidence of extensive linkages between banking and commerce in the United States. These linkages usually build on devices that are very close substitutes for the arrangements prohibited by law. Altogether, our findings question the often-made claim that banking in the United States has traditionally been separated from commerce. Furthermore, given that research on Japan and Germany has shown that the mixing of banking and commerce matters for a variety of issues, our evidence also raises questions about similar research on the United States that makes the simplifying assumption that these industries are separate. [source]


Haplotype analysis in the presence of informatively missing genotype data

GENETIC EPIDEMIOLOGY, Issue 4 2006
Nianjun Liu
Abstract It is common to have missing genotypes in practical genetic studies, but the exact underlying missing data mechanism is generally unknown to the investigators. Although some statistical methods can handle missing data, they usually assume that genotypes are missing at random, that is, that at a given marker different genotypes and different alleles are missing with the same probability. These include methods for haplotype frequency estimation and haplotype association analysis. However, it is likely that this simple assumption does not hold in practice, yet few studies to date have examined the magnitude of the effects when this simplifying assumption is violated. In this study, we demonstrate that violation of this assumption may lead to serious bias in haplotype frequency estimates, and that haplotype association analysis based on this assumption can produce both false-positive and false-negative evidence of association. To address this limitation of the current methods, we propose a general missing data model to characterize missing data patterns across a set of two or more markers simultaneously. We prove that haplotype frequencies and missing data probabilities are identifiable if and only if there is linkage disequilibrium between these markers under our general missing data model. Simulation studies on the analysis of haplotypes consisting of two single nucleotide polymorphisms illustrate that our proposed model can reduce the bias, in both haplotype frequency estimates and association analysis, that arises from an incorrect assumption about the missing data mechanism. Finally, we illustrate the utility of our method through its application to a real data set. Genet. Epidemiol. 2006. © 2006 Wiley-Liss, Inc. [source]
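
To make the missing-at-random issue concrete, the short simulation below (a sketch with hypothetical numbers, not the authors' model or data) shows how informative missingness at a single SNP biases a naive allele-frequency estimate; the same mechanism distorts haplotype frequency estimates built from such markers.

```python
# A minimal sketch: simulating how informative (non-random) missingness biases
# allele-frequency estimates at a single SNP. All probabilities are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
p_true = 0.30          # true frequency of allele "a"
n = 100_000

# Genotypes under Hardy-Weinberg equilibrium: count of "a" alleles (0, 1, 2)
genotypes = rng.binomial(2, p_true, size=n)

# Missing NOT at random: genotypes carrying the "a" allele fail more often
miss_prob = np.array([0.02, 0.25, 0.50])[genotypes]
observed = genotypes[rng.random(n) >= miss_prob]

p_naive = observed.mean() / 2          # estimate that ignores the mechanism
print(f"true p = {p_true:.3f}, naive estimate = {p_naive:.3f}")
# The naive estimate falls noticeably below 0.30 because "a" alleles are
# preferentially hidden, illustrating the bias discussed in the abstract.
```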


A constitutive model for bonded geomaterials subject to mechanical and/or chemical degradation

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 9 2003
R. Nova
Abstract The mechanical behaviour of bonded geomaterials is described by means of an elastoplastic strain-hardening model. The internal variables, taking into account the 'history' of the material, depend on the plastic strains experienced and on a conveniently defined scalar measure of damage induced by weathering and/or chemical degradation. For the sake of simplicity, it is assumed that only the internal variables are affected by the mechanical and chemical history of the material. Despite this simplifying assumption, it can be shown that many interesting phenomena exhibited by weathered bonded geomaterials can be successfully described. For instance, (i) the transition from brittle to ductile behaviour with increasing pressure of a calcarenite with collapsing internal structure, (ii) the complex behaviour of chalk and other calcareous materials in oedometric tests, and (iii) the chemically induced variation of the stress and strain state of such materials are all phenomena that can be qualitatively reproduced. Several comparisons with experimental data show that the model can also capture the observed behaviour quantitatively. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Refined electrolyte-NRTL model: Activity coefficient expressions for application to multi-electrolyte systems

AICHE JOURNAL, Issue 6 2008
G. M. Bollas
Abstract The influence of simplifying assumptions of the electrolyte-nonrandom two-liquid (NRTL) model in the derivation of activity coefficient expressions as applied to multi-electrolyte systems is critically examined. A rigorous and thermodynamically consistent formulation for the activity coefficients is developed, in which the simplifying assumption of holding ionic-charge fraction quantities constant in the derivation of activity coefficient expressions is removed. The refined activity coefficient formulation possesses stronger theoretical properties and practical superiority, demonstrated through a case study representing the thermodynamic properties and speciation of dilute to concentrated aqueous sulfuric acid solutions at ambient conditions. In this case study, phenomena such as hydration, ion pairing, and partial dissociation are all taken into account. The overall result of this study is a consistent, analytically derived, short-range interaction contribution formulation for the electrolyte-NRTL activity coefficients and a very accurate representation of aqueous sulfuric acid solutions at ambient conditions at concentrations up to 50 molal. © 2008 American Institute of Chemical Engineers AIChE J, 2008 [source]


SUPRACENTER: Locating fireball terminal bursts in the atmosphere using seismic arrivals

METEORITICS & PLANETARY SCIENCE, Issue 9 2004
W. N. EDWARDS
A computer program, SUPRACENTER, calculates travel times by ray tracing through realistic atmospheres (that include winds) and locates source positions by minimization of travel time residuals. This is analogous to earthquake hypocenter location in the solid Earth but is done through a variably moving medium. Inclusion of realistic atmospheric ray tracing has removed the need for the simplifying assumption of an isotropic atmosphere or an approximation to account for "wind drift." This "drift" is on the order of several km when strong, unidirectional winds are present in the atmosphere at the time of a fireball's occurrence. SUPRACENTER-derived locations of three seismically recorded fireballs: 1) the October 9, 1997 El Paso superbolide; 2) the January 25, 1989 Mt. Adams fireball; and 3) the May 6, 2000 Morávka fireball (with its associated meteorite fall), are consistent with (and, probably, an improvement upon) the locations derived from eyewitness, photographic, and video observations from the respective individual events. If direct acoustic seismic arrivals can be quickly identified for a fireball event, terminal burst locations (and, potentially, trajectory geometry and velocity information) can be quickly derived, aiding any meteorite recovery efforts during the early days after the fall. Potentially, seismic records may yield enough trajectory information to assist in the derivation of orbits for entering projectiles. [source]
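
The underlying idea, locating a source by minimizing travel-time residuals through a moving medium, can be sketched as follows. This is a deliberately crude stand-in for SUPRACENTER: it assumes a uniform sound speed and a single constant wind vector rather than ray tracing through realistic atmospheric profiles, and all station coordinates, winds, and the "true" source below are made-up values.

```python
# Locate an atmospheric source from acoustic arrival times by least-squares
# residual minimization, assuming a uniform atmosphere with a constant wind.
import numpy as np
from scipy.optimize import least_squares

c = 330.0                          # sound speed, m/s (assumed uniform)
w = np.array([25.0, 10.0, 0.0])    # constant wind vector, m/s (assumed)

def travel_time(src, rec):
    """Arrival delay of a spherical wavefront advected by a uniform wind."""
    d = rec - src
    dw = d @ w
    a = c**2 - w @ w
    return (np.sqrt(dw**2 + a * (d @ d)) - dw) / a

stations = np.array([[0.0, 0.0, 0.0], [40e3, 5e3, 0.0], [10e3, 35e3, 0.0],
                     [-20e3, 20e3, 0.0], [30e3, -25e3, 0.0]])
true_src, true_t0 = np.array([12e3, 8e3, 30e3]), 0.0
arrivals = true_t0 + np.array([travel_time(true_src, s) for s in stations])

def residuals(p):                  # p = (x, y, z, origin time)
    return p[3] + np.array([travel_time(p[:3], s) for s in stations]) - arrivals

fit = least_squares(residuals, x0=[0.0, 0.0, 20e3, -5.0])
print("recovered source (m):", fit.x[:3].round(1), " origin time (s):", round(fit.x[3], 2))
```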


What really happens with the electron gas in the famous Franck-Hertz experiment?

CONTRIBUTIONS TO PLASMA PHYSICS, Issue 3-4 2003
F. Sigeneger
Abstract The interpretation of the anode current characteristics obtained in the famous Franck-Hertz experiment of 1914 led to the verification of Bohr's predictions of quantised atomic states. This fundamental experiment has often been repeated and is nowadays generally part of the modern physics curriculum. However, the interpretation of the experiment is typically based upon significant simplifying assumptions, some quite unrealistic. This is especially the case in relation to the kinetics of the electron gas, which is in reality quite complex, due mainly to non-uniformities in the electric field caused by a combination of accelerating and retarding components. This non-uniformity leads to a potential energy valley in which the electrons are trapped. The present state of understanding of such effects, and of their influence upon the anode characteristics, is quite unsatisfactory. In this article a rigorous study of a cylindrical Franck-Hertz experiment using mercury vapour is presented, the aim being to reveal and explain what really happens with the electrons under realistic experimental conditions. In particular, the anode current characteristics are investigated over a range of mercury vapour pressures appropriate to the experiment, to clearly elaborate the effects of elastic collisions (ignored in typical discussions) on the power budget and the trapping of electrons in the potential energy valley. [source]


Performance Evaluation of the KEOPS Wavelength Routing Optical Packet Switch

EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 1 2000
Philippe Cadro
This paper presents results concerning the performance evaluation of the KEOPS wavelength routing optical transparent packet switch. This switch resolves contention using optical delay lines; these delay lines are grouped in several sets in the first stage of the switch. Each input port has access to a few of these delay lines, and each set of delay lines has access to each output port. Non-FIFO output buffers are thus emulated by scheduling on a small number of delay lines with non-consecutive delays. Under simplifying assumptions, analytical models are derived and checked by simulation. These models provide efficient bounds for estimating the packet loss probability under the assumption of regular, balanced input traffic. It is shown that the proposed switch architecture achieves good performance in terms of packet loss, with a number of delay lines significantly smaller than those currently used in other architectures. [source]
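
As a rough illustration of the quantities involved (a toy simulation, not the paper's analytical bounds), the sketch below estimates the loss probability at one output port when contention is absorbed by a small number of delay lines acting as a buffer; the port count, load, and number of delay lines are arbitrary choices.

```python
# Toy slot-synchronous simulation of packet loss at one output port of an
# N x N switch when K delay lines act as a small output buffer.
import numpy as np

def loss_probability(n_ports=16, load=0.8, k_delay_lines=8,
                     n_slots=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    # Arrivals to the tagged output per slot: Binomial(N, load/N) under
    # uniform, independent ("regular, balanced") input traffic.
    arrivals = rng.binomial(n_ports, load / n_ports, size=n_slots)
    queue, lost = 0, 0
    for a in arrivals:
        accepted = min(a, k_delay_lines - queue)   # no free delay line -> loss
        lost += a - accepted
        queue += accepted
        if queue > 0:                              # one packet leaves per slot
            queue -= 1
    return lost / arrivals.sum()

print(f"estimated loss probability: {loss_probability():.2e}")
```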


Carbon dioxide generation calorimetry: Errors induced by the simplifying assumptions in the standard test methods

FIRE AND MATERIALS, Issue 2 2009
S. Brohez
Abstract Carbon dioxide generation (CDG) calorimetry is commonly used for measuring the heat release rates of materials. The calorimetric equation provided in ASTM E 2058 and NFPA 287 is a simplified equation, since the water content of the ambient air and of the fumes, as well as the expansion factor of the combustion gases, is neglected. This paper provides a general equation for CDG calorimetry based on the Tewarson formulation. A comparison is made between the standard test methods (simplified equation) and the general one. It is shown that the errors induced by the simplifying assumptions are negligible for the oxygen depletion factor values commonly encountered in the Fire Propagation Apparatus (where large dilution factors of the combustion gases are used before the measurement of species concentrations). Copyright © 2009 John Wiley & Sons, Ltd. [source]
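
For orientation, the simplified CDG relation attributes the heat release rate to the net generation rates of CO2 and CO, each multiplied by a roughly constant energy per unit mass of gas generated (values of about 13.3 kJ/g for CO2 and 11.1 kJ/g for CO are commonly quoted from Tewarson's work). The sketch below uses these generic constants and invented mass flow rates; the paper's general equation adds the humidity and gas-expansion corrections discussed above.

```python
# Minimal sketch of the simplified CDG relation (not the paper's general
# equation). Energy constants are the generic values commonly quoted from
# Tewarson; the mass flow numbers are purely illustrative.
E_CO2 = 13.3   # kJ released per g of CO2 generated
E_CO = 11.1    # kJ released per g of CO generated

def hrr_cdg(m_dot_co2, m_dot_co2_ambient, m_dot_co):
    """Heat release rate (kW) from net CO2 and CO generation rates (g/s)."""
    return E_CO2 * (m_dot_co2 - m_dot_co2_ambient) + E_CO * m_dot_co

# Example: 1.5 g/s of CO2 above the ambient baseline and 0.05 g/s of CO
print(f"HRR ~ {hrr_cdg(1.5, 0.0, 0.05):.1f} kW")
```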


Modelling snowpack surface temperature in the Canadian Prairies using simplified heat flow models

HYDROLOGICAL PROCESSES, Issue 18 2005
Purushottam Raj Singh
Abstract Three practical schemes for computing the snow surface temperature Ts, i.e. the force-restore method (FRM), the surface conductance method (SCM), and the Kondo and Yamazaki method (KYM), were assessed with respect to Ts retrieved from cloud-free NOAA-AVHRR satellite data for three land-cover types of the Paddle River basin of central Alberta. In terms of R², the mean Ts, the t-test and the F-test, the FRM generally simulated Ts more accurately than the SCM and KYM. The bias in simulated Ts is usually within several degrees Celsius of the NOAA-AVHRR Ts for both the calibration and validation periods, but larger errors are encountered occasionally, especially when Ts is substantially above 0 °C. Results show that the simulated Ts of the FRM is more consistent than that of the SCM, which in turn was more consistent than that of the KYM. This is partly because the FRM considers two aspects of heat conduction into snow, a stationary-mean diurnal (sinusoidal) temperature variation at the surface coupled to a near steady-state ground heat flux, whereas the SCM assumes near steady-state, simple heat conduction together with other simplifying assumptions, and the KYM does not balance the snowpack heat fluxes because it assumes a linear vertical temperature profile in the snowpack. Copyright © 2005 John Wiley & Sons, Ltd. [source]
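
As background, one common form of the force-restore equation drives the surface temperature with the surface energy flux and relaxes it toward a deep temperature at the diurnal frequency. The sketch below discretizes that generic form with an explicit Euler step; it is an assumed textbook variant with illustrative snow properties and toy forcing, not the calibrated FRM of the study.

```python
# Generic force-restore update: dTs/dt = sqrt(2*omega)/mu * G - omega*(Ts - T_deep),
# with mu = sqrt(lambda * C) the thermal inertia. All values are illustrative.
import numpy as np

tau = 86400.0                      # diurnal period, s
omega = 2 * np.pi / tau
lam, C = 0.2, 0.6e6                # snow conductivity (W/m/K), heat capacity (J/m3/K)
mu = np.sqrt(lam * C)              # thermal inertia, J m-2 K-1 s-1/2
T_deep = -10.0                     # restoring ("deep") temperature, deg C

dt = 600.0                         # 10-minute time step
t = np.arange(0, 3 * tau, dt)
G = 50.0 * np.maximum(np.sin(omega * t), 0.0) - 30.0   # toy net surface flux, W/m2

Ts = np.empty_like(t)
Ts[0] = T_deep
for i in range(1, t.size):
    # "force" term from the surface energy flux + "restore" term toward T_deep
    dTdt = np.sqrt(2 * omega) / mu * G[i - 1] - omega * (Ts[i - 1] - T_deep)
    Ts[i] = Ts[i - 1] + dt * dTdt

print(f"simulated Ts range: {Ts.min():.1f} to {Ts.max():.1f} deg C")
```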


A constitutive model for the dynamic and high-pressure behaviour of a propellant-like material: Part I: Experimental background and general structure of the model

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 6 2001
Hervé Trumel
Abstract This paper is the first part of a work that aims at developing a mechanical model for the behaviour of propellant-like materials under high confining pressure and strain rate. The behaviour of a typical material is investigated experimentally. Several microstructural deformation processes are identified and correlated with loading conditions. The resulting behaviour is complex, non-linear, and characterized by multiple couplings. The general structure of a relevant model is sought using a thermodynamic framework. A viscoelastic-viscoplastic-compaction model structure is derived under suitable simplifying assumptions, in the framework of finite, though moderate, strains. Model development, identification and numerical applications are given in the companion paper. Copyright © 2001 John Wiley & Sons, Ltd. [source]


Modelling Overdispersion for Complex Survey Data

INTERNATIONAL STATISTICAL REVIEW, Issue 3 2001
E.A. Molina
Summary The population characteristics observed by selecting a complex sample from a finite identified population are the result of at least two processes: the process which generates the values attached to the units in the finite population, and the process of selecting the sample of units from the population. In this paper we propose that the resulting observations be viewed as the joint realization of both processes. We overcome the inherent difficulty in modelling the joint processes of generation and selection by exploring second-moment and other simplifying assumptions. We obtain general expressions for the mean and covariance function of the joint processes and show that several overdispersion models discussed in the literature for the analysis of complex surveys are a direct consequence of our formulation, under particular sampling schemes and population structures. [source]


Emergency service systems: The use of the hypercube queueing model in the solution of probabilistic location problems

INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 5 2008
Roberto D. Galvão
Abstract Probabilistic location problems are surveyed from the perspective of their use in the design of emergency service systems, with special emphasis on emergency medical systems. Pioneering probabilistic models were defined in the 1980s as a natural extension of deterministic covering models (first generation) and backup models (second generation). These probabilistic models, however, adopted simplifying assumptions that in many cases do not correspond to real-world situations, where servers usually cooperate and have specific individual workloads. The idea of embedding the hypercube queueing model into these formulations is thus to make them more faithful to real-world conditions. The hypercube model and its extensions are first presented in some detail, followed by a brief review of exact and approximate methods for its solution. Probabilistic models for the design of emergency service systems are then reviewed. The pioneering models of Daskin and of ReVelle and Hogan are extended by embedding the hypercube model into them. Solution methods for these models are surveyed next, with comments on specialized models for the design of emergency medical systems for urban areas and highways. [source]
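
To show what embedding a hypercube-type model means in its smallest instance, the sketch below solves the steady state of a two-server system in which each region prefers its own server, the other server acts as backup, and calls arriving when both servers are busy are lost. The arrival and service rates are illustrative; the full Larson hypercube generalizes this construction to 2^N states with dispatch preference lists, which is where exact solution becomes costly and the approximate methods mentioned above come in.

```python
# A minimal two-server instance of a hypercube-type queueing model (a sketch,
# not the full Larson hypercube): states are (busy1, busy2). Rates are illustrative.
import numpy as np

lam1, lam2, mu = 0.4, 0.6, 1.0        # arrivals per region, service rate
states = [(0, 0), (1, 0), (0, 1), (1, 1)]
idx = {s: i for i, s in enumerate(states)}

Q = np.zeros((4, 4))
def add(frm, to, rate):
    Q[idx[frm], idx[to]] += rate

add((0, 0), (1, 0), lam1)             # region 1 call -> preferred server 1
add((0, 0), (0, 1), lam2)             # region 2 call -> preferred server 2
add((1, 0), (1, 1), lam1 + lam2)      # server 1 busy -> server 2 backs up
add((0, 1), (1, 1), lam1 + lam2)      # server 2 busy -> server 1 backs up
add((1, 0), (0, 0), mu)               # service completions
add((0, 1), (0, 0), mu)
add((1, 1), (0, 1), mu)
add((1, 1), (1, 0), mu)
Q -= np.diag(Q.sum(axis=1))           # generator: rows sum to zero

# Solve pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(4)])
pi = np.linalg.lstsq(A, np.r_[np.zeros(4), 1.0], rcond=None)[0]

print("state probabilities:", dict(zip(states, pi.round(4))))
print("workload of server 1:", round(pi[idx[(1, 0)]] + pi[idx[(1, 1)]], 4))
print("fraction of calls lost:", round(pi[idx[(1, 1)]], 4))
```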


Analyzing the Trade-off Between Investing in Service Channels and Satisfying the Targeted User Service for Brazilian Internet Service Providers

INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 3 2002
Gisele C. Fontanella
The connection of computers to the Internet is provided by firms known as Internet service providers (ISPs). The simplest mode of physical connection is when the user connects to an ISP's service channel through an ordinary telephone line (dial-up). Finding an available channel may not be an easy task, especially during the peak hours of many Brazilian ISPs. This poses a problem for the ISPs: how to achieve the most appropriate trade-off between investing in capacity and satisfying the target user service level. This paper analyzes this trade-off based on a three-step approach: (i) determine user arrival and service processes in chosen periods, (ii) select an appropriate queueing model using some simplifying assumptions, and (iii) generate trade-off curves between system performance measures. To illustrate the application of this approach, some results derived from a case study performed at an ISP in Sao Paulo state are given. [source]
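
Under the classic simplifying assumptions (Poisson dial-in attempts, blocked calls lost), step (ii) often reduces to an Erlang loss model, and the trade-off curve of step (iii) can be generated directly from the Erlang-B formula. The sketch below does this with hypothetical traffic figures, not the case-study data.

```python
# Capacity-vs-service trade-off under Erlang-B (Poisson arrivals, blocked calls
# lost). Arrival and session figures are hypothetical.
def erlang_b(channels: int, offered_load: float) -> float:
    """Blocking probability for `channels` lines and offered load in Erlangs."""
    b = 1.0
    for m in range(1, channels + 1):
        b = offered_load * b / (m + offered_load * b)   # stable recursion
    return b

calls_per_hour = 900.0
mean_session_hours = 0.5
load = calls_per_hour * mean_session_hours              # 450 Erlangs

for n in range(440, 521, 20):
    print(f"{n} channels -> blocking {erlang_b(n, load):.3%}")
```

Sweeping the channel count (or the offered load) in this way yields exactly the kind of trade-off curve between investment and user service level described in step (iii).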


Multiparameter models for performance analysis of UASB reactors

JOURNAL OF CHEMICAL TECHNOLOGY & BIOTECHNOLOGY, Issue 8 2008
C M Narayanan
Abstract BACKGROUND: UASB (upflow anaerobic sludge blanket) bioreactors have the distinct advantage that they do not demand support particles and provide a high rate of bioconversion even with high-strength feedstocks. Although these reactors are apparently simple in construction, their performance analysis involves a high degree of mathematical complexity. Most simulation models reported in the literature are rudimentary in nature, as they involve gross approximations. In the present paper, two multiparameter simulation packages are presented that make no simplifying assumptions and hence are more rigorous in nature. RESULTS: The first package assumes the sludge bed to be a plug-flow reactor (PFR) and the sludge blanket to be an ideal continuous stirred tank reactor (CSTR). The second package equates the reactor to a plug flow dispersion reactor (PFDR), with the axial dispersion coefficient, however, being a function of axial distance. The three-phase nature of the sludge blanket has been considered and the variation of gas velocity in the axial direction has been taken into account. Three different kinetic equations have been considered. Resistance to diffusion of substrate into sludge granules has been accounted for by incorporating appropriately defined effectiveness factors. The applicability of the simulation packages developed has been ascertained by comparison with real-life data collected from industrial/pilot plant/laboratory UASB reactors. The maximum deviation observed is ±15%. CONCLUSIONS: Although the software packages developed have a high computational load, their applicability has been successfully ascertained and they may be recommended for the design and installation of industrial UASB reactors and also for the rating of existing installations. Copyright © 2008 Society of Chemical Industry [source]


Life-Cycle Assessment and Temporal Distributions of Emissions: Developing a Fleet-Based Analysis

JOURNAL OF INDUSTRIAL ECOLOGY, Issue 2 2000
Frank Field
Summary Although the product-centered focus of life-cycle assessment has been one of its strengths, this analytical perspective embeds assumptions that may conflict with the realities of environmental problems. This article demonstrates, through a series of mathematical derivations, that all the products in use, rather than a single product, frequently should be the appropriate unit of analysis. Such a "fleet-centered" approach supplies a richer perspective on the comparative emissions burdens generated by alternative products, and it eliminates certain simplifying assumptions imposed upon the analysis by a product-centered approach. A sample numerical case, examining the comparative emissions of steel-intensive and aluminum-intensive automobiles, is presented to contrast the results of the two approaches. The fleet-centered analysis shows that the "crossover time" (i.e., the time required before the fuel economy benefits of the lighter aluminum vehicle offset the energy intensity of the processes used to manufacture the aluminum in the first place) can be dramatically longer than that predicted by a product-centered life-cycle assessment. The fleet-centered perspective explicitly introduces the notion of time as a critical element of comparative life-cycle assessments and raises important questions about the role of the analyst in selecting the appropriate time horizon for analysis. Moreover, with the introduction of time as an appropriate dimension to life-cycle assessment, the influences of effects distributed over time can be more naturally and consistently treated. [source]
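
The crossover-time argument can be made concrete with a small calculation. The numbers below are assumptions for illustration only, not the article's data: a fixed extra production energy per aluminum-intensive car and a fixed annual fuel saving, compared first for a single vehicle and then for a fleet that is turned over gradually, so that production penalties keep accruing while only part of the fleet is saving fuel.

```python
# Illustrative product-centered vs fleet-centered crossover time (assumed numbers).
import numpy as np

extra_prod_energy = 30.0      # GJ extra embodied energy per aluminum-intensive car
use_savings_per_year = 6.0    # GJ/year of fuel saved by one aluminum-intensive car

# Product-centered view: one car repays its production penalty after
t_product = extra_prod_energy / use_savings_per_year
print(f"product-centered crossover: {t_product:.1f} years")

# Fleet-centered view: a 1000-car fleet turned over at 100 cars/year, so new
# production penalties keep accruing while only the cars already replaced save fuel.
fleet_size, new_cars_per_year = 1000, 100
years = np.arange(0, 41)
alu_fleet = np.minimum(new_cars_per_year * years, fleet_size)
cum_penalty = extra_prod_energy * new_cars_per_year * years
cum_savings = np.cumsum(use_savings_per_year * alu_fleet)
crossover = years[1:][np.argmax(cum_savings[1:] >= cum_penalty[1:])]
print(f"fleet-centered crossover: about {crossover} years")
```

With these made-up numbers the fleet-centered crossover arrives several years later than the product-centered one, which is the qualitative effect the article describes.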


Transient Behavior and Gelation of Free Radical Polymerizations in Continuous Stirred Tank Reactors

MACROMOLECULAR THEORY AND SIMULATIONS, Issue 4 2005
Rolando C. S. Dias
Abstract Summary: Using the authors' previously developed method for the general kinetic analysis of non-linear irreversible polymerizations, free radical homogeneous polymerization systems with terminal branching and chain transfer to polymer have been simulated for continuous stirred tank reactors. The method's improved accuracy in the numerical evaluation of generating functions has been exploited to perform their numerical inversion, so that chain length distributions can also be estimated with or without the presence of gel. A comparison with alternative techniques, emphasizing the effect of their simplifying assumptions on the accuracy of the calculations, is also presented. Graphical abstract: predicted CLD before gelation (t = 1 h), after gelation (t = 15 h, steady state), and close to the gel point, for a free radical polymerization with transfer to polymer in a CSTR with a residence time of 60 min. [source]


A Mathematical Model for Photopolymerization From a Stationary Laser Light Source

MACROMOLECULAR THEORY AND SIMULATIONS, Issue 1 2005
Michael F. Perry
Abstract Summary: A mathematical model of photopolymerization is presented for a stationary laser. Termination by radical combination and radical trapping is considered. Using simplifying assumptions, we derive analytical equations for the concentration of photoinitiator and monomer in the system. With these equations, we show that the light intensity and the initial amount of photoinitiator highly influence the polymerization process and determine the shape of the polymer that is formed. We also provide an analytic expression to determine the amount of polymer formed during dark reactions. Graphical abstract: percent conversion of monomer as a function of time at z = 0 and r = 0 (data from Table 1). [source]
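
A common textbook simplification of such a model (assumed here; the authors' equations may differ) treats photoinitiator photolysis as first order in the local light intensity and puts the radicals at a pseudo-steady state with bimolecular termination, which gives a closed system for initiator and monomer concentrations. The sketch below integrates that system with illustrative rate constants.

```python
# Assumed textbook photopolymerization kinetics (not the authors' model):
# first-order photolysis of the initiator plus a pseudo-steady-state radical
# balance with bimolecular termination. All rate constants are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

phi_I = 5e-3            # effective photolysis rate constant at this intensity, 1/s
f = 0.5                 # initiator efficiency
kp, kt = 1.0e2, 1.0e7   # propagation / termination rate constants, L/(mol s)
S0, M0 = 0.02, 5.0      # initial photoinitiator and monomer, mol/L

def rhs(t, y):
    S, M = y
    Ri = 2 * f * phi_I * S                 # initiation rate
    R_rad = np.sqrt(Ri / kt)               # pseudo-steady-state radical concentration
    return [-phi_I * S, -kp * M * R_rad]

sol = solve_ivp(rhs, (0, 600), [S0, M0], max_step=1.0)
S, M = sol.y
print(f"after {sol.t[-1]:.0f} s: {100 * (1 - M[-1] / M0):.1f}% monomer conversion, "
      f"{100 * (1 - S[-1] / S0):.1f}% of initiator consumed")
```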


The statistics of the highest E value

ACTA CRYSTALLOGRAPHICA SECTION A, Issue 4 2007
Grzegorz Chojnowski
In a previous publication, the Gumbel–Fisher–Tippett (GFT) extreme-value analysis has been applied to investigate the statistics of the intensity of the strongest reflection in a thin resolution shell. Here, a similar approach is applied to study the distribution, expectation value and standard deviation of the highest normalized structure-factor amplitude (E value). As before, acentric and centric reflections are treated separately, a random arrangement of scattering atoms is assumed, and E-value correlations are neglected. Under these assumptions, it is deduced that the highest E value is GFT distributed to a good approximation. Moreover, it is shown that the root of the expectation value of the highest 'normalized' intensity is not only an upper limit for the expectation value of the highest E value but also a very good estimate. Qualitatively, this can be attributed to the sharpness of the distribution of the highest E value. Although the formulas were derived with various simplifying assumptions and approximations, they turn out to be useful also for real small-molecule and protein crystal structures, for both thin and thick resolution shells. The only limitation is that low-resolution data (below 2.5 Å) have to be excluded from the analysis. These results have implications for the identification of outliers in experimental diffraction data. [source]
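
The flavour of the result can be reproduced with a quick Monte Carlo check under the same idealizations (Wilson statistics for acentric reflections, independent E values): the empirical mean of the highest E value in a shell of N reflections sits close to both a Gumbel-based approximation and the square root of the expected highest normalized intensity. N and the number of replicates below are arbitrary.

```python
# Monte Carlo sketch of the highest E value in a shell of N acentric reflections.
# Under Wilson statistics, E^2 ~ Exp(1), so E = sqrt(Exp(1)); independence assumed.
import numpy as np

rng = np.random.default_rng(0)
N, trials = 1000, 5000                  # reflections per shell, replicates

E_max = np.sqrt(rng.exponential(1.0, size=(trials, N)).max(axis=1))

# Gumbel (GFT) parameters implied by extreme-value theory for this parent
mu = np.sqrt(np.log(N))                 # location
beta = 1.0 / (2.0 * np.sqrt(np.log(N))) # scale
euler_gamma = 0.5772156649
gumbel_mean = mu + euler_gamma * beta

# Estimate from the expected highest normalized intensity (max of Exp(1) values)
sqrt_mean_max_intensity = np.sqrt(np.log(N) + euler_gamma)

print(f"simulated  <E_max>      = {E_max.mean():.4f}")
print(f"Gumbel approximation    = {gumbel_mean:.4f}")
print(f"sqrt(<max intensity>)   = {sqrt_mean_max_intensity:.4f}  (upper estimate)")
```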


Programmed motion in the presence of homogeneity

ASTRONOMISCHE NACHRICHTEN, Issue 8 2009
G. Bozis
Abstract In the framework of the inverse problem of dynamics, we face the following question with reference to the motion of one material point: given a region Torb of the xy plane, described by the inequality g(x, y) ≤ c0, are there potentials V = V(x, y) which can produce monoparametric families of orbits f(x, y) = c (also to be found) lying exclusively in the region Torb? As the relevant PDEs are nonlinear, an answer to this question (generally affirmative, but not with assurance) can be given by the procedure of determining certain constants specifying the pertinent functions. In this paper we ease the mathematics involved by making certain simplifying assumptions referring to the homogeneity of both the function g(x, y) (describing the boundary of Torb) and the slope function γ(x, y) = fy/fx (representing the required family f(x, y) = c). We develop the method to treat the problem so formulated and we show that, even under these restrictive assumptions, an affirmative answer is guaranteed provided that two algebraic equations have at least one solution in common. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Sex-specific sibling interactions and offspring fitness in vertebrates: patterns and implications for maternal sex ratios

BIOLOGICAL REVIEWS, Issue 2 2006
Tobias Uller
ABSTRACT Vertebrate sex ratios are notorious for their lack of fit to theoretical models, both with respect to the direction and the magnitude of the sex ratio adjustment. The reasons for this are likely to be linked to simplifying assumptions regarding vertebrate life histories. More specifically, if the sex ratio adjustment itself influences offspring fitness, due to sex-specific interactions among offspring, this could affect optimal sex ratios. A review of the literature suggests that sex-specific sibling interactions in vertebrates result from three major causes: (i) sex asymmetries in competitive ability, for example due to sexual dimorphism, (ii) sex-specific cooperation or helping, and (iii) sex asymmetries in non-competitive interactions, for example steroid leakage between fetuses. Incorporating sex-specific sibling interactions into a sex ratio model shows that they will affect maternal sex ratio strategies and, under some conditions, can repress other selection pressures for sex ratio adjustment. Furthermore, sex-specific interactions could also explain patterns of within-brood sex ratio (e.g. in relation to laying order). Failure to take sex-specific sibling interactions into account could partly explain the lack of sex ratio adjustment in accordance with theoretical expectations in vertebrates, and differences among taxa in sex-specific sibling interactions generate predictions for comparative and experimental studies. [source]


Statistical Tests for Clonality

BIOMETRICS, Issue 2 2007
Colin B. Begg
Summary Cancer investigators frequently conduct studies to examine tumor samples from pairs of apparently independent primary tumors with a view to determine whether they share a "clonal" origin. The genetic fingerprints of the tumors are compared using a panel of markers, often representing loss of heterozygosity (LOH) at distinct genetic loci. In this article we evaluate candidate significance tests for this purpose. The relevant information is derived from the observed correlation of the tumors with respect to the occurrence of LOH at individual loci, a phenomenon that can be evaluated using Fisher's exact test. Information is also available from the extent to which losses at the same locus occur on the same parental allele. Data from these combined sources of information can be evaluated using a simple adaptation of Fisher's exact test. The test statistic is the total number of loci at which concordant mutations occur on the same parental allele, with higher values providing more evidence in favor of a clonal origin for the two tumors. The test is shown to have high power for detecting clonality for plausible models of the alternative (clonal) hypothesis, and for reasonable numbers of informative loci, preferably located on distinct chromosomal arms. The method is illustrated using studies to identify clonality in contralateral breast cancer. Interpretation of the results of these tests requires caution due to simplifying assumptions regarding the possible variability in mutation probabilities between loci, and possible imbalances in the mutation probabilities between parental alleles. Nonetheless, we conclude that the method represents a simple, powerful strategy for distinguishing independent tumors from those of clonal origin. [source]
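
A simplified Monte Carlo analogue of such a test (a sketch, not the published procedure) makes the logic explicit: under the null hypothesis of independent tumors, LOH occurs at each locus with its marginal probability and hits either parental allele with probability 1/2, and the reference distribution of the statistic (the number of loci with concordant LOH on the same allele) is generated by simulation. The per-locus LOH rates and the observed count below are hypothetical.

```python
# Simplified Monte Carlo null distribution for a clonality statistic:
# number of loci where both tumors show LOH on the SAME parental allele.
import numpy as np

rng = np.random.default_rng(42)
p = np.array([0.3, 0.5, 0.4, 0.6, 0.35, 0.45, 0.5])   # per-locus LOH rates (hypothetical)

observed_statistic = 4        # hypothetical count of concordant same-allele losses

def simulate_statistic(n_sims=200_000):
    loh1 = rng.random((n_sims, p.size)) < p           # LOH in tumor 1
    loh2 = rng.random((n_sims, p.size)) < p           # LOH in tumor 2
    same_allele = rng.random((n_sims, p.size)) < 0.5  # same parental allele, given LOH in both
    return (loh1 & loh2 & same_allele).sum(axis=1)

null = simulate_statistic()
p_value = (null >= observed_statistic).mean()
print(f"P(statistic >= {observed_statistic} under independence) = {p_value:.4f}")
```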


Development of a Segmented Model for a Continuous Electrophoretic Moving Bed Enantiomer Separation

BIOTECHNOLOGY PROGRESS, Issue 6 2003
Brian M. Thome
With the recent demonstration of a continuous electrophoretic "moving bed" enantiomer separation at mg/h throughputs, interest has now turned to scaling up the process for use as a benchtop pharmaceutical production tool. To scale the method, a steady-state mathematical model was developed that predicts the process response to changes in input feed rate and counterflow or "moving bed" velocities. The vortex-stabilized apparatus used for the separation was modeled using four regions based on the different hydrodynamic flows in each section. Concentration profiles were then derived on the basis of the properties of the Piperoxan-sulfated ,-cyclodextrin system being studied. The effects of different regional flow rates on the concentration profiles were evaluated and used to predict the maximum processing rate and the hydrodynamic profiles required for a separation. Although the model was able to qualitatively predict the shapes of the concentration profiles and show where the theoretical limits of operation existed, it was not able to quantitatively match the data from actual enantiomer separations to better than 50% accuracy. This is believed to be due to the simplifying assumptions involved, namely, the neglect of electric field variations and the lack of a competitive binding isotherm in the analysis. Although the model cannot accurately predict concentrations from a separation, it provides a good theoretical framework for analyzing how the process responds to changes in counterflow rate, feed rate, and the properties of the molecules being separated. [source]