Simple Algorithm (simple + algorithm)

Selected Abstracts


Two-dimensional prediction of time dependent, turbulent flow around a square cylinder confined in a channel

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2010
M. Raisee
Abstract This paper presents two-dimensional, unsteady RANS computations of time-dependent, periodic, turbulent flow around a square block. Two turbulence models are used: the Launder–Sharma low-Reynolds-number k–ε model and a non-linear extension sensitive to the anisotropy of turbulence. The Reynolds number based on the free-stream velocity and obstacle side is Re = 2.2 × 10⁴. The present numerical results have been obtained using a finite volume code that solves the governing equations in a vertical plane located at the lateral mid-point of the channel. The pressure field is obtained with the SIMPLE algorithm. A bounded version of the third-order QUICK scheme is used for the convective terms. Comparisons of the numerical results with the experimental data indicate that a preliminary steady solution of the governing equations using the linear k–ε model does not lead to correct flow field predictions in the wake region downstream of the square cylinder. Consequently, the time derivatives of the dependent variables are included in the transport equations and are discretized using the second-order Crank–Nicolson scheme. The unsteady computations using the linear and non-linear k–ε models significantly improve the velocity field predictions. However, the linear k–ε model shows a number of predictive deficiencies, even in unsteady flow computations, especially in the prediction of the turbulence field. The introduction of the non-linear k–ε model brings the two-dimensional unsteady predictions of the time-averaged velocity and turbulence fields, as well as the predicted values of global parameters such as the Strouhal number and the drag coefficient, into close agreement with the data. Copyright © 2009 John Wiley & Sons, Ltd. [source]
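The abstract above leans on the SIMPLE pressure-correction algorithm and a bounded variant of the QUICK convection scheme. As a minimal, hedged illustration of the latter, the Python sketch below computes a QUICK face value on a uniform grid from the far-upwind, upwind and downwind cell values; the simple min/max clipping stands in for a flux limiter and is only an assumption, since the paper's exact bounding strategy is not given here.

    import numpy as np

    def quick_face_value(phi_U, phi_C, phi_D, bounded=True):
        """Face value by the QUICK scheme on a uniform grid, for flow running from
        cell C (upwind) toward cell D (downwind); U is the far-upwind cell.  The
        'bounded' option clips the quadratic estimate between the neighbouring cell
        values, a crude stand-in for the limiter used in bounded-QUICK variants."""
        phi_f = 0.375 * phi_D + 0.75 * phi_C - 0.125 * phi_U
        if bounded:
            phi_f = float(np.clip(phi_f, min(phi_C, phi_D), max(phi_C, phi_D)))
        return phi_f

    # upwind-biased quadratic interpolation of a smooth profile (returns 2.25)
    print(quick_face_value(phi_U=0.0, phi_C=1.0, phi_D=4.0))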


Performance analysis of IDEAL algorithm for three-dimensional incompressible fluid flow and heat transfer problems

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 10 2009
Dong-Liang Sun
Abstract Recently, an efficient segregated algorithm for incompressible fluid flow and heat transfer problems, called the inner doubly iterative efficient algorithm for linked equations (IDEAL), has been proposed by the present authors. In the algorithm there are inner doubly iterative processes for the pressure equation at each iteration level, which almost completely overcome the two approximations made in the SIMPLE algorithm. Thus, the coupling between velocity and pressure is fully guaranteed, greatly enhancing the convergence rate and stability of the solution process. However, validations have only been conducted for two-dimensional cases. In the present paper the performance of the IDEAL algorithm for three-dimensional incompressible fluid flow and heat transfer problems is analyzed and a systematic comparison is made between this algorithm and the three other most widely used algorithms (SIMPLER, SIMPLEC and PISO). From the comparison of five application examples, the IDEAL algorithm is found to be the most robust and the most efficient of the four algorithms compared. For the five three-dimensional cases studied, when each algorithm works at its own optimal under-relaxation factor, the IDEAL algorithm reduces the computation time by 12.9–52.7% relative to the SIMPLER algorithm, by 45.3–73.4% relative to the SIMPLEC algorithm and by 10.7–53.1% relative to the PISO algorithm. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Momentum/continuity coupling with large non-isotropic momentum source terms

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 9 2009
J. D. Franklin
Abstract Pressure-based methods such as the SIMPLE algorithm are frequently used to determine a coupled solution between the component momentum equations and the continuity equation. This paper presents a colocated variable pressure correction algorithm for control volumes of polyhedral/polygonal cell topologies. The correction method is presented independent of spatial approximation. The presence of non-isotropic momentum source terms is included in the proposed algorithm to ensure its applicability to multi-physics applications such as gas and particulate flows. Two classic validation test cases are included along with a newly proposed test case specific to multiphase flows. The classic validation test cases demonstrate the application of the proposed algorithm on truly arbitrary polygonal/polyhedral cell meshes. A comparison between the current algorithm and commercially available software is made to demonstrate that the proposed algorithm is competitively efficient. The newly proposed test case demonstrates the benefits of the current algorithm when applied to a multiphase flow situation. The numerical results from this case show that the proposed algorithm is more robust than other methods previously proposed. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Numerical simulation of viscous flow interaction with an elastic membrane

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2008
Lisa A. Matthews
Abstract A numerical fluid–structure interaction model is developed for the analysis of viscous flow over elastic membrane structures. The Navier–Stokes equations are discretized on a moving body-fitted unstructured triangular grid using the finite volume method, taking into account grid non-orthogonality, and implementing the SIMPLE algorithm for pressure solution, power law implicit differencing and Rhie–Chow explicit mass flux interpolations. The membrane is discretized as a set of links that coincide with a subset of the fluid mesh edges. A new model is introduced to distribute local and global elastic effects to aid stability of the structure model and damping effects are also included. A pseudo-structural approach using a balance of mesh edge spring tensions and cell internal pressures controls the motion of fluid mesh nodes based on the displacements of the membrane. Following initial validation, the model is applied to the case of a two-dimensional membrane pinned at both ends at an angle of attack of 4° to the oncoming flow, at a Reynolds number based on the chord length of 4 × 10³. A series of tests on membranes of different elastic stiffness investigates their unsteady movements over time. The membranes of higher elastic stiffness adopt a stable equilibrium shape, while the membrane of lowest elastic stiffness demonstrates unstable interactions between its inflated shape and the resulting unsteady wake. These unstable effects are shown to be significantly magnified by the flexible nature of the membrane compared with a rigid surface of the same average shape. Copyright © 2007 John Wiley & Sons, Ltd. [source]


Using vorticity to define conditions at multiple open boundaries for simulating flow in a simplified vortex settling basin

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 1 2007
A. N. Ziaei
Abstract In this paper a method is developed to define multiple open boundary (OB) conditions in a simplified vortex settling basin (VSB). In this method, the normal component of the momentum equation is solved at the OBs, and tangential components of vorticity are calculated by solving vorticity transport equations only at the OBs. Then the tangential vorticity components are used to construct Neumann boundary conditions for tangential velocity components. Pressure is set to its ambient value, and the divergence-free condition is satisfied at these boundaries by employing the divergence as the Neumann condition for the normal-direction momentum equation. The 3-D incompressible Navier–Stokes equations in a primitive-variable form are solved using the SIMPLE algorithm. Grid-function convergence tests are utilized to verify the numerical results. The complicated laminar flow structure in the VSB is investigated, and preliminary assessment of two popular turbulence models, k–ε and k–ω, is conducted. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Simulation and analysis of flow through microchannel

ASIA-PACIFIC JOURNAL OF CHEMICAL ENGINEERING, Issue 4 2009
Madhusree Kundu
Abstract One-dimensional and two-dimensional models for microchannel flow with noncontinuum (slip flow) boundary conditions are presented here. This study presents an efficient numerical procedure using a pressure-correction-based iterative SIMPLE algorithm, with the QUICK scheme for the convective terms, to simulate steady, incompressible, two-dimensional flow through a microchannel. In the present work, the slip flow of liquid through a microchannel has been modeled using a slip-length assumption instead of the conventional Maxwell slip flow model, which essentially utilizes the molecular mean-free-path concept. The models developed following this approach lend insight into the physics of liquid flow through microchannels. Copyright © 2009 Curtin University of Technology and John Wiley & Sons, Ltd. [source]
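Because the abstract models wall slip through a slip length rather than Maxwell's mean-free-path argument, a small illustration may help: the sketch below evaluates the classical fully developed channel-flow profile with a Navier slip-length condition on both walls. It is a closed-form, one-dimensional analogue of the slip boundary condition, not the paper's two-dimensional SIMPLE/QUICK solver, and the numerical values are purely illustrative.

    import numpy as np

    def slip_poiseuille(y, H, G, mu, Ls):
        """Fully developed pressure-driven channel flow with the Navier slip-length
        condition u_wall = Ls * du/dn applied on both walls.  H is the channel
        height, G = -dp/dx the driving pressure gradient, mu the viscosity and Ls
        the slip length; Ls = 0 recovers the classical no-slip parabola."""
        return (G / (2.0 * mu)) * (H * y - y**2 + Ls * H)

    y = np.linspace(0.0, 50e-6, 11)                          # 50-micron channel
    u_noslip = slip_poiseuille(y, 50e-6, 1e4, 1e-3, 0.0)
    u_slip = slip_poiseuille(y, 50e-6, 1e4, 1e-3, 1e-6)      # 1-micron slip length
    print(u_slip[0], u_noslip[0])                            # finite wall velocity under slip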


A Simple Algorithm for Designing Group Sequential Clinical Trials

BIOMETRICS, Issue 3 2001
David A. Schoenfeld
Summary. This article describes a simple algorithm for calculating probabilities associated with group sequential trials. This allows the choice of boundaries that may not be among those implemented in available software. [source]
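The probabilities the article refers to can be reproduced, in outline, by the usual recursion over the continuation region of a group sequential test. The sketch below does this on the partial-sum scale for a one-sided upper boundary; the grid-based numerical integration and the example O'Brien–Fleming-type bounds are assumptions for illustration, not the article's own method or code.

    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import trapezoid

    def crossing_probabilities(z_bounds, info, theta=0.0, grid=800):
        """Probability of first crossing the upper boundary at each analysis of a
        group sequential design, by recursive numerical integration of the
        sub-density over the continuation region (partial-sum scale)."""
        info = np.asarray(info, float)
        s_bounds = np.asarray(z_bounds, float) * np.sqrt(info)
        lo = min(0.0, theta * info[-1]) - 8.0 * np.sqrt(info[-1])
        probs = []
        # stage 1: the partial sum S_1 ~ N(theta*I_1, I_1)
        delta = info[0]
        probs.append(norm.sf(s_bounds[0], loc=theta * delta, scale=np.sqrt(delta)))
        s = np.linspace(lo, s_bounds[0], grid)
        dens = norm.pdf(s, loc=theta * delta, scale=np.sqrt(delta))
        for k in range(1, len(info)):
            delta = info[k] - info[k - 1]
            sd = np.sqrt(delta)
            # crossing at analysis k, having stayed inside all earlier boundaries
            tail = norm.sf(s_bounds[k] - s, loc=theta * delta, scale=sd)
            probs.append(trapezoid(dens * tail, s))
            # propagate the sub-density to the next continuation region
            s_new = np.linspace(lo, s_bounds[k], grid)
            kernel = norm.pdf(s_new[:, None] - s[None, :], loc=theta * delta, scale=sd)
            dens = trapezoid(kernel * dens[None, :], s, axis=1)
            s = s_new
        return np.array(probs)

    # two equally spaced analyses with O'Brien–Fleming-type bounds (overall alpha near 0.025)
    print(crossing_probabilities([2.797, 1.977], info=[0.5, 1.0]))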


Function-based flow modeling and animation

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 4 2001
Ergun Akleman
Abstract This paper summarizes a function-based approach to modeling and animating 2D and 3D flows. We use periodic functions to create cyclical animations that represent 2D and 3D flows. These periodic functions are constructed, with an extremely simple algorithm, from a set of oriented lines: the speed and orientation of the flow are described directly by the lengths and orientations of these lines. The resulting cyclical animations are then obtained by sampling the constructed periodic functions. Our approach is independent of dimension, i.e. the same types of periodic functions are used for 2D and 3D flow. Rendering images for 2D and 3D flows differs slightly: in 2D, function values are mapped directly to color values, whereas in 3D, function values are first mapped to color and opacity and the volume is then rendered by our volume renderer. The modeled and animated flows are used to improve the visualization of the operation of rolling piston and rotary vane compressors. Copyright © 2001 John Wiley & Sons, Ltd. [source]
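Since the construction itself is only described in outline above, the following Python sketch shows one plausible reading of it: each oriented line contributes a sinusoid whose phase advances along the line's direction at a speed given by its length, and the summed field is sampled over a grid and a cyclic time parameter. The specific functional form is an assumption for illustration, not the authors' exact construction.

    import numpy as np

    def flow_function(points, t, lines):
        """One plausible construction of a cyclical flow function from oriented
        lines: each line, given as (direction, speed), contributes a sinusoid whose
        phase advances along that direction at that speed.  points has shape (n, 2)
        for 2D flow; the identical code accepts (n, 3) points for 3D flow."""
        points = np.asarray(points, float)
        value = np.zeros(len(points))
        for direction, speed in lines:
            d = np.asarray(direction, float)
            d = d / np.linalg.norm(d)
            value += np.sin(2.0 * np.pi * (points @ d - speed * t))
        return value

    # two oriented lines driving a 2D flow; sampled values would be mapped to color
    xx, yy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
    grid = np.stack([xx, yy], axis=-1).reshape(-1, 2)
    lines = [((1.0, 0.0), 0.3), ((0.5, 1.0), 0.1)]
    print(flow_function(grid, t=0.25, lines=lines).shape)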


The usage of a simplified self-titration dosing guideline (303 Algorithm) for insulin detemir in patients with type 2 diabetes – results of the randomized, controlled PREDICTIVE™ 303 study

DIABETES OBESITY & METABOLISM, Issue 6 2007
L. Meneghini
The Predictable Results and Experience in Diabetes through Intensification and Control to Target: An International Variability Evaluation 303 (PREDICTIVE™ 303) Study (n = 5604) evaluated the effectiveness of insulin detemir, a long-acting basal insulin analogue, using a simplified patient self-adjusted dosing algorithm (303 Algorithm group) compared with standard-of-care physician-driven adjustments (Standard-of-care group) in a predominantly primary care setting, over a period of 6 months. Insulin detemir was to be started once daily as add-on therapy to any other glucose-lowering regimens or as a replacement of prestudy basal insulin in patients with type 2 diabetes. Investigator sites rather than individual patients were randomized to either the 303 Algorithm group or the Standard-of-care group. Patients from the 303 Algorithm group sites were instructed to adjust their insulin detemir dose every 3 days based on the mean of three 'adjusted' fasting plasma glucose (aFPG) values (capillary blood glucose calibrated to equivalent plasma glucose values) using a simple algorithm: mean aFPG < 80 mg/dl (<4.4 mmol/l), reduce dose by 3 U; aFPG between 80 and 110 mg/dl (4.4–6.1 mmol/l), no change; and aFPG > 110 mg/dl (>6.1 mmol/l), increase dose by 3 U. The insulin detemir dose for patients in the Standard-of-care group was adjusted by the investigator according to the standard of care. Mean A1C decreased from 8.5% at baseline to 7.9% at 26 weeks for the 303 Algorithm group and from 8.5 to 8.0% for the Standard-of-care group (p = 0.0106 for difference in A1C reduction between the two groups). Mean FPG values decreased from 175 mg/dl (9.7 mmol/l) at baseline to 141 mg/dl (7.8 mmol/l) for the 303 Algorithm group and decreased from 174 mg/dl (9.7 mmol/l) to 152 mg/dl (8.4 mmol/l) for the Standard-of-care group (p < 0.0001 for difference in FPG reduction between the two groups). Mean body weight remained the same at 26 weeks in both groups (change from baseline 0.1 and −0.2 kg for the 303 Algorithm group and the Standard-of-care group respectively). At 26 weeks, 91% of the patients in the 303 Algorithm group and 85% of the patients in the Standard-of-care group remained on once-daily insulin detemir administration. The rates of overall hypoglycaemia (events/patient/year) decreased significantly from baseline in both groups [from 9.05 to 6.44 for the 303 Algorithm group (p = 0.0039) and from 9.53 to 4.95 for the Standard-of-care group (p < 0.0001)]. Major hypoglycaemic events were rare in both groups (0.26 events/patient/year for the 303 Algorithm group and 0.20 events/patient/year for the Standard-of-care group; p = 0.2395). In conclusion, patients in the 303 Algorithm group achieved comparable glycaemic control with a higher rate of hypoglycaemia as compared with patients in the Standard-of-care group, possibly because of more aggressive insulin dose adjustments. The vast majority of the patients in both groups were effectively treated with once-daily insulin detemir therapy. The use of insulin detemir in this predominantly primary care setting achieved significant improvements in glycaemic control with minimal risk of hypoglycaemia and no weight gain. [source]
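The dose-adjustment rule quoted above is simple enough to state in code. The sketch below is a direct transcription of the published 3-day titration thresholds and is included purely to make them concrete; it is an illustration of the study's algorithm, not dosing software, and the example numbers are invented.

    def detemir_dose_update(current_dose_units, afpg_mgdl):
        """303 Algorithm self-titration rule: every 3 days the insulin detemir dose
        is adjusted using the mean of three 'adjusted' fasting plasma glucose
        (aFPG) readings in mg/dl: below 80 reduce by 3 U, 80-110 leave unchanged,
        above 110 increase by 3 U."""
        mean_afpg = sum(afpg_mgdl) / len(afpg_mgdl)
        if mean_afpg < 80:              # < 4.4 mmol/l
            return current_dose_units - 3
        if mean_afpg <= 110:            # 4.4-6.1 mmol/l
            return current_dose_units
        return current_dose_units + 3   # > 6.1 mmol/l

    print(detemir_dose_update(20, [132, 118, 127]))   # mean 125.7 mg/dl -> 23 U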


Spectrally based remote sensing of river bathymetry

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 8 2009
Carl J. Legleiter
Abstract This paper evaluates the potential for remote mapping of river bathymetry by (1) examining the theoretical basis of a simple, ratio-based technique for retrieving depth information from passive optical image data; (2) performing radiative transfer simulations to quantify the effects of suspended sediment concentration, bottom reflectance, and water surface state; (3) assessing the accuracy of spectrally based depth retrieval under field conditions via ground-based reflectance measurements; and (4) producing bathymetric maps for a pair of gravel-bed rivers from hyperspectral image data. Consideration of the relative magnitudes of various radiance components allowed us to define the range of conditions under which spectrally based depth retrieval is appropriate: the remotely sensed signal must be dominated by bottom-reflected radiance. We developed a simple algorithm, called optimal band ratio analysis (OBRA), for identifying pairs of wavelengths for which this critical assumption is valid and which yield strong, linear relationships between an image-derived quantity X and flow depth d. OBRA of simulated spectra indicated that water column optical properties were accounted for by a shorter-wavelength numerator band sensitive to scattering by suspended sediment while depth information was provided by a longer-wavelength denominator band subject to strong absorption by pure water. Field spectra suggested that bottom reflectance was fairly homogeneous, isolating the effect of depth, and that radiance measured above the water surface was primarily reflected from the bottom, not the water column. OBRA of these data, 28% of which were collected during a period of high turbidity, yielded strong X versus d relations (R² from 0.792 to 0.976), demonstrating that accurate depth retrieval is feasible under field conditions. Moreover, application of OBRA to hyperspectral image data resulted in spatially coherent, hydraulically reasonable bathymetric maps, though negative depth estimates occurred along channel margins where pixels were mixed. This study indicates that passive optical remote sensing could become a viable tool for measuring river bathymetry. Copyright © 2009 John Wiley & Sons, Ltd. [source]
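As a small, hedged companion to the OBRA procedure described above, the sketch below searches all band pairs for the ratio ln(R(λ1)/R(λ2)) whose linear regression against depth gives the highest R². The exhaustive pair search and ordinary least squares are the obvious reading of the abstract; array shapes and names are assumptions.

    import numpy as np

    def obra(reflectance, wavelengths, depths):
        """Optimal band ratio analysis: for each band pair compute
        X = ln(R(l1) / R(l2)), regress depth on X, and keep the pair giving the
        highest coefficient of determination.  reflectance: (n_obs, n_bands)."""
        reflectance = np.asarray(reflectance, float)
        depths = np.asarray(depths, float)
        best_r2, best_pair, best_fit = -np.inf, None, None
        n_bands = reflectance.shape[1]
        for i in range(n_bands):
            for j in range(i + 1, n_bands):
                X = np.log(reflectance[:, i] / reflectance[:, j])
                slope, intercept = np.polyfit(X, depths, 1)
                resid = depths - (slope * X + intercept)
                r2 = 1.0 - resid.var() / depths.var()
                if r2 > best_r2:
                    best_r2 = r2
                    best_pair = (wavelengths[i], wavelengths[j])
                    best_fit = (slope, intercept)
        return best_r2, best_pair, best_fit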


Removal of DC power-line magnetic-field effects from airborne total magnetic-field measurements

GEOPHYSICAL PROSPECTING, Issue 3 2000
Mehran Gharibi
Power lines carrying DC current can strongly affect total magnetic-field measurements. A simple algorithm using the Biot–Savart law was developed to remove magnetic-field components due to a DC power line from airborne total magnetic-field measurements in the Gävle area, Sweden. The power-line location was estimated from observed data and then split into short line segments. The magnetic-field components due to each segment were calculated and summed to give the total magnetic effect of the power line at each observation point. The corrected total magnetic field was calculated by subtracting the power-line magnetic-field vector, projected on to the direction of the main field, from the measured total field. The results show a successful removal of the power-line magnetic effect from the total magnetic-field measurements. However, an error in the estimation of the power-line location can leave a magnetic-field residual after correction. Non-linear median filtering was used to remove this residual when needed. [source]
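The correction described above, namely summing Biot–Savart contributions from short segments of the estimated power-line path and subtracting their projection onto the main-field direction, can be sketched compactly. The code below is a minimal Python illustration of those stated steps; the closed-form finite-segment field, the variable names and the example geometry are assumptions, and the DC current would in practice have to be estimated.

    import numpy as np

    MU0 = 4e-7 * np.pi

    def segment_field(p, a, b, current):
        """Magnetic flux density at point p due to a straight finite segment a->b
        carrying a steady current (closed-form Biot–Savart result)."""
        t = (b - a) / np.linalg.norm(b - a)
        ap, bp = p - a, p - b
        d_vec = ap - np.dot(ap, t) * t              # perpendicular from the line to p
        d = np.linalg.norm(d_vec)
        if d < 1e-9:
            return np.zeros(3)
        cos1 = np.dot(ap, t) / np.linalg.norm(ap)
        cos2 = np.dot(bp, t) / np.linalg.norm(bp)
        return MU0 * current / (4.0 * np.pi * d) * (cos1 - cos2) * np.cross(t, d_vec / d)

    def corrected_total_field(T_meas, main_field_dir, points, polyline, current):
        """Subtract the power-line field, projected onto the main-field direction,
        from the measured total-field values at the observation points."""
        u = np.asarray(main_field_dir, float)
        u = u / np.linalg.norm(u)
        corrected = []
        for p, T in zip(points, T_meas):
            B = sum(segment_field(p, polyline[k], polyline[k + 1], current)
                    for k in range(len(polyline) - 1))
            corrected.append(T - np.dot(B, u))
        return np.array(corrected)

    # a 1 km straight segment carrying 100 A, observed 50 m above its midpoint:
    # roughly 4e-7 T (about 400 nT), directed perpendicular to the segment
    seg = np.array([[-500.0, 0.0, 0.0], [500.0, 0.0, 0.0]])
    print(segment_field(np.array([0.0, 0.0, 50.0]), seg[0], seg[1], 100.0))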


Three-dimensional transient free-surface flow of viscous fluids inside cavities of arbitrary shape

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 10 2003
Kyu-Tae Kim
Abstract The three-dimensional transient free-surface flow inside cavities of arbitrary shape is examined in this study. An adaptive (Lagrangian) boundary-element approach is proposed for the general three-dimensional simulation of confined free-surface flow of viscous incompressible fluids. The method is stable as it includes remeshing capabilities of the deforming free-surface, and thus can handle large deformations. A simple algorithm is developed for mesh refinement of the deforming free-surface mesh. Smooth transition between large and small elements is achieved without significant degradation of the aspect ratio of the elements in the mesh. The method is used to determine the flow field and free-surface evolution inside cubic, rectangular and cylindrical containers. These problems illustrate the transient nature of the flow during the mixing process. Surface tension effects are also explored. Copyright © 2003 John Wiley & Sons, Ltd. [source]


On the extension of commercial planar circuit CAD packages to the analysis of two-port waveguide components

INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 2 2003
Marco Farina
Abstract We introduce a simple algorithm that enables several commercial codes, based on the method of moments (MoM) and developed for the analysis of planar structures, to address the analysis of a class of two-port waveguide components. Its key step is a numerical calibration that can be easily automated. © 2003 Wiley Periodicals, Inc. Int J RF and Microwave CAE 13: 113,117, 2003. [source]


Collagenolytic (necrobiotic) granulomas: part II – the 'red' granulomas

JOURNAL OF CUTANEOUS PATHOLOGY, Issue 6 2004
Jane M. Lynch
The altered fibers lose their distinct boundaries and exhibit new staining patterns, becoming either more basophilic or eosinophilic. Within the area of altered collagen, there may be deposition of acellular substances such as mucin (blue) or fibrin (red), or there may be neutrophils with nuclear dust (blue), eosinophils (red), or flame figures (red). These color distinctions can be used as a simple algorithm for the diagnosis of collagenolytic granulomas, i.e. 'blue' granulomas vs. 'red' granulomas. Eight diagnoses are included within these two groupings, which are discussed in this two-part article. In the previously published first part, the clinical presentation, pathogenesis and histologic features of the 'blue' collagenolytic granulomas were discussed. These are the lesions of granuloma annulare, Wegener's granulomatosis, and rheumatoid vasculitis. In this second half of the series, the 'red' collagenolytic granulomas are discussed; these are the lesions of necrobiosis lipoidica, necrobiotic xanthogranuloma, rheumatoid nodules, Churg–Strauss syndrome, and eosinophilic cellulitis (Wells' syndrome). [source]


Collagenolytic (necrobiotic) granulomas: part 1 – the 'blue' granulomas

JOURNAL OF CUTANEOUS PATHOLOGY, Issue 5 2004
Jane M. Lynch
A collagenolytic or necrobiotic non-infectious granuloma is one in which a granulomatous infiltrate develops around a central area of altered collagen and elastic fibers. The altered fibers lose their distinct boundaries and exhibit new staining patterns, becoming either more basophilic or eosinophilic. Within the area of altered collagen, there may be deposition of acellular substances such as mucin (blue) or fibrin (red), or there may be neutrophils with nuclear dust (blue), eosinophils (red), or flame figures (red). These color distinctions can be used as a simple algorithm for the diagnosis of collagenolytic granulomas, i.e. 'blue' granulomas vs. 'red' granulomas. Eight diagnoses are included within these two groupings, which are discussed in this two-part article. In this first part, the clinical presentation, pathogenesis, and histologic features of the 'blue' collagenolytic granulomas are discussed. These are the lesions of granuloma annulare, Wegener's granulomatosis, and rheumatoid vasculitis. In the subsequent half of this two-part series, the 'red' collagenolytic granulomas will be discussed; these are the lesions of necrobiosis lipoidica, necrobiotic xanthogranuloma, rheumatoid nodules, Churg–Strauss syndrome, and eosinophilic cellulitis (Wells' syndrome). [source]


An algorithm for thorough background subtraction from high-resolution LC/MS data: application to the detection of troglitazone metabolites in rat plasma, bile, and urine

JOURNAL OF MASS SPECTROMETRY (INCORP BIOLOGICAL MASS SPECTROMETRY), Issue 9 2008
Haiying Zhang
Abstract Interferences from biological matrices remain a major challenge to the in vivo detection of drug metabolites. For the last few decades, predicted metabolite masses and fragmentation patterns have been employed to aid in the detection of drug metabolites in liquid chromatography/mass spectrometry (LC/MS) data. Here we report the application of an accurate mass-based background-subtraction approach for comprehensive detection of metabolites formed in vivo using troglitazone as an example. A novel algorithm was applied to check all ions in the spectra of control scans within a specified time window around an analyte scan for potential background subtraction from that analyte spectrum. In this way, chromatographic fluctuations between control and analyte samples were dealt with, and background and matrix-related signals could be effectively subtracted from the data of the analyte sample. Using this algorithm with a ± 1.0 min control scan time window, a ± 10 ppm mass error tolerance, and respective predose samples as controls, troglitazone metabolites were reliably identified in rat plasma and bile samples. Identified metabolites included those reported in the literature as well as some that had not previously been reported, including a novel sulfate conjugate in bile. In combination with mass defect filtering, this algorithm also allowed for identification of troglitazone metabolites in rat urine samples. With a generic data acquisition method and a simple algorithm that requires no presumptions of metabolite masses or fragmentation patterns, this high-resolution LC/MS-based background-subtraction approach provides an efficient alternative for comprehensive metabolite identification in complex biological matrices. Copyright © 2008 John Wiley & Sons, Ltd. [source]
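The core of the background-subtraction algorithm described above, checking every analyte ion against all control-scan ions that fall within a retention-time window and an accurate-mass tolerance, can be outlined in a few lines. The sketch below is a simplified reading of that idea with invented data structures (lists of (retention time, [(m/z, intensity), ...]) scans); it ignores everything else a production implementation would need.

    def background_subtract(analyte_scans, control_scans, rt_window=1.0, ppm_tol=10.0):
        """Drop from each analyte scan any ion whose m/z matches, within ppm_tol,
        an ion observed in ANY control scan whose retention time lies within
        +/- rt_window minutes of the analyte scan."""
        cleaned = []
        for rt, ions in analyte_scans:
            nearby = [c_ions for c_rt, c_ions in control_scans
                      if abs(c_rt - rt) <= rt_window]
            background_mzs = [mz for c_ions in nearby for mz, _ in c_ions]
            kept = [(mz, inten) for mz, inten in ions
                    if not any(abs(mz - b) / b * 1e6 <= ppm_tol for b in background_mzs)]
            cleaned.append((rt, kept))
        return cleaned

    # toy example: the 455.1521 ion also appears in a nearby control scan and is removed
    analyte = [(12.30, [(441.1365, 8.0e4), (455.1521, 2.0e5)])]
    control = [(12.10, [(455.1519, 1.5e5)])]
    print(background_subtract(analyte, control))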


Additive Outlier Detection Via Extreme-Value Theory

JOURNAL OF TIME SERIES ANALYSIS, Issue 5 2006
Peter Burridge
Abstract. This article is concerned with detecting additive outliers using extreme value methods. The test recently proposed for use with possibly non-stationary time series by Perron and Rodriguez [Journal of Time Series Analysis (2003) vol. 24, pp. 193–220] is, as they point out, extremely sensitive to departures from their assumption of Gaussianity, even asymptotically. As an alternative, we investigate the robustness to distributional form of a test based on weighted spacings of the sample order statistics. Difficulties arising from uncertainty about the number of potential outliers are discussed, and a simple algorithm requiring minimal distributional assumptions is proposed and its performance evaluated. The new algorithm has dramatically lower level inflation in the face of departures from Gaussianity than the Perron–Rodriguez test, yet retains good power in the presence of outliers. [source]


Use of the β-function to estimate the skewness of species responses

JOURNAL OF VEGETATION SCIENCE, Issue 6 2003
Branko Karad
Abstract. The response of a species to an environmental variable may be modeled and predicted using a wide spectrum of different functions. Contrary to other functions (Gaussian, polynomial, etc.), all parameters of the β-function are interpretable in ecological terms. However, computational difficulties in the determination of the β-function parameters initiated controversial debates on the applicability and usefulness of this function in vegetation modelling and gradient analysis. We propose a simple algorithm for fitting the β-function to observed data. Analytic properties of the algorithm (its ability to recover known species responses along gradients) are tested using a series of simulated data. In most cases the algorithm correctly estimated the parameters of the simulated responses. [source]
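For readers unfamiliar with the β-function response model, the sketch below fits one to synthetic data with scipy; to keep the fit well behaved the gradient endpoints a and b are fixed at the observed range of occurrence, and only the height and the two shape (skewness) parameters are estimated. This is a generic illustration of the model, not the authors' fitting algorithm, and every numerical value is invented.

    import numpy as np
    from scipy.optimize import curve_fit

    def beta_response(x, k, a, b, alpha, gamma):
        """Beta-function species response: zero outside the interval (a, b); alpha
        and gamma control the skewness of the curve, k scales its height."""
        y = np.zeros_like(x, dtype=float)
        inside = (x > a) & (x < b)
        y[inside] = k * (x[inside] - a) ** alpha * (b - x[inside]) ** gamma
        return y

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 200)
    y_obs = beta_response(x, 0.05, 1.0, 9.0, 2.0, 1.0) + rng.normal(0.0, 0.01, x.size)

    a_fix, b_fix = 1.0, 9.0                       # endpoints fixed from the data
    model = lambda x, k, alpha, gamma: beta_response(x, k, a_fix, b_fix, alpha, gamma)
    popt, _ = curve_fit(model, x, y_obs, p0=[0.1, 1.0, 1.0], maxfev=20000)
    print(popt)                                   # roughly k=0.05, alpha=2, gamma=1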


Anisotropy in high angular resolution diffusion-weighted MRI

MAGNETIC RESONANCE IN MEDICINE, Issue 6 2001
Lawrence R. Frank
Abstract The diffusion in voxels with multidirectional fibers can be quite complicated and not necessarily well characterized by the standard diffusion tensor model. High angular resolution diffusion-weighted acquisitions have recently been proposed as a method to investigate such voxels, but the reconstruction methods proposed require sophisticated estimation schemes. We present here a simple algorithm for the identification of diffusion anisotropy based upon the variance of the estimated apparent diffusion coefficient (ADC) as a function of measurement direction. The rationale for this method is discussed, and results in normal human subjects acquired with a novel diffusion-weighted stimulated-echo spiral acquisition are presented which distinguish areas of anisotropy that are not apparent in the relative anisotropy maps derived from the standard diffusion tensor model. Magn Reson Med 45:935–939, 2001. Published 2001 Wiley-Liss, Inc. [source]
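The anisotropy measure sketched in the abstract, the variance of the apparent diffusion coefficient over measurement directions, reduces to a few array operations. The code below assumes a monoexponential Stejskal–Tanner signal model per direction and invented array shapes; it illustrates the idea rather than reproducing the authors' processing.

    import numpy as np

    def adc_variance_map(dwi, s0, bvals, eps=1e-6):
        """Directional-variance anisotropy index: estimate the apparent diffusion
        coefficient per measurement direction from S = S0 * exp(-b * ADC), then
        take the variance of the ADC across directions.
        dwi: (n_dirs, ...) diffusion-weighted signals; s0: (...) unweighted signal;
        bvals: (n_dirs,) b-values."""
        dwi = np.asarray(dwi, float)
        bvals = np.asarray(bvals, float)
        s0 = np.maximum(np.asarray(s0, float), eps)
        ratio = np.clip(dwi / s0, eps, None)
        adc = -np.log(ratio) / bvals.reshape((-1,) + (1,) * (dwi.ndim - 1))
        return adc.var(axis=0)

    # toy voxel measured along 6 directions at b = 1000 s/mm^2
    signals = np.array([610.0, 420.0, 655.0, 380.0, 600.0, 410.0])
    print(adc_variance_map(signals, s0=1000.0, bvals=np.full(6, 1000.0)))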


Implementation of wave digital model in analysis of arbitrary nonuniform transmission lines

MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 9 2007
Biljana P. Sto
Abstract An efficient method for the analysis of nonuniform transmission lines (NTLs), based on a wave digital model composed of cascaded unit elements and two-port adaptors, is described. The proposed technique treats an NTL as a cascade connection of equal-length or equal-delay uniform transmission lines. A simple algorithm for calculating the transmission and input reflection coefficients is derived. Two application examples, proving the efficiency and response accuracy of the new technique, are given. © 2007 Wiley Periodicals, Inc. Microwave Opt Technol Lett 49: 2150–2153, 2007; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.22706 [source]
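The underlying idea, approximating a nonuniform line by a cascade of short uniform sections and then reading off the reflection and transmission coefficients, can be illustrated without the wave digital machinery. The sketch below uses a conventional ABCD-matrix cascade of lossless sections rather than the paper's wave digital model of unit elements and adaptors; the impedance profile, section lengths and frequency are made-up values.

    import numpy as np

    def ntl_response(z0_sections, lengths, freq, z_ref=50.0, vp=2.998e8):
        """Input reflection (S11) and transmission (S21) of a nonuniform line
        approximated as a cascade of short uniform, lossless sections with the
        given characteristic impedances and physical lengths, evaluated at one
        frequency via ABCD matrices (reciprocal network assumed)."""
        abcd = np.eye(2, dtype=complex)
        for z0, L in zip(z0_sections, lengths):
            beta_l = 2.0 * np.pi * freq / vp * L
            section = np.array([[np.cos(beta_l), 1j * z0 * np.sin(beta_l)],
                                [1j * np.sin(beta_l) / z0, np.cos(beta_l)]])
            abcd = abcd @ section
        A, B, C, D = abcd.ravel()
        denom = A + B / z_ref + C * z_ref + D
        s11 = (A + B / z_ref - C * z_ref - D) / denom
        s21 = 2.0 / denom
        return s11, s21

    # a 50 -> 100 ohm linear taper split into 20 equal-length sections of 2 mm
    z0s = np.linspace(50.0, 100.0, 20)
    print(ntl_response(z0s, [2e-3] * 20, freq=3e9))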


When to use the combined sensory index

MUSCLE AND NERVE, Issue 8 2001
Matthew P. Kaul MD
Abstract A recently developed electrodiagnostic technique, the combined sensory index (CSI), has been recommended for its greater sensitivity in diagnosing carpal tunnel syndrome (CTS). The CSI requires a greater number of procedures and therefore involves greater time, cost, and patient discomfort than does conventional electrodiagnostic testing. The CSI is composed of three commonly used electrodiagnostic techniques. There is a close correlation between the components of the CSI, and in most cases, all three components of the CSI are in agreement. We performed a study to develop and validate an algorithm that could be used to identify subsets of patients with CTS in whom CSI testing is particularly useful. Subjects were consecutive outpatient veterans referred by a heterogeneous group of specialists and generalists for electrodiagnostic evaluation of paresthesias in a median distribution with nocturnal exacerbation of symptoms. The CSI served as our gold standard. Using our simple algorithm, we found that in approximately 95% of cases, it was unnecessary to perform the CSI. This management strategy improves patient comfort and reduces electrodiagnostic cost while identifying the minority of patients for whom the CSI is indicated. © 2001 John Wiley & Sons, Inc. Muscle Nerve 24: 1078–1082, 2001 [source]


Dynamic inventory management with cash flow constraints

NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 8 2008
Xiuli Chao
Abstract In this article, we consider a classic dynamic inventory control problem of a self-financing retailer who periodically replenishes its stock from a supplier and sells it to the market. The replenishment decisions of the retailer are constrained by cash flow, which is updated periodically following purchasing and sales in each period. Excess demand in each period is lost when insufficient inventory is in stock. The retailer's objective is to maximize its expected terminal wealth at the end of the planning horizon. We characterize the optimal inventory control policy and present a simple algorithm for computing the optimal policies for each period. Conditions are identified under which the optimal control policies are identical across periods. We also present comparative statics results on the optimal control policy. © 2008 Wiley Periodicals, Inc. Naval Research Logistics 2008 [source]
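The control problem described above, a self-financing retailer whose order quantity each period is capped by available cash, lends itself to a compact dynamic-programming sketch. The brute-force recursion below is only a toy statement of that model (discrete demand, zero salvage value for leftover stock, invented numbers); it is not the structured optimal policy or the simple algorithm derived in the article.

    import numpy as np

    def optimal_order(cash, inventory, period, T, price, cost, demand_pmf, memo=None):
        """Expected terminal wealth and the best current order quantity for a
        self-financing retailer: purchases are limited by cash on hand, excess
        demand is lost, and leftover stock is assumed worthless at the horizon."""
        if memo is None:
            memo = {}
        if period == T:
            return cash, 0
        key = (round(cash, 4), inventory, period)
        if key in memo:
            return memo[key]
        best_val, best_q = -np.inf, 0
        for q in range(int(cash // cost) + 1):        # cash-flow constraint: cost*q <= cash
            val = 0.0
            for d, p in demand_pmf:
                sold = min(inventory + q, d)
                val += p * optimal_order(cash - cost * q + price * sold,
                                         inventory + q - sold, period + 1, T,
                                         price, cost, demand_pmf, memo)[0]
            if val > best_val:
                best_val, best_q = val, q
        memo[key] = (best_val, best_q)
        return memo[key]

    # two periods, unit cost 1, selling price 2, demand 0/1/2 equally likely
    print(optimal_order(cash=3.0, inventory=0, period=0, T=2, price=2.0, cost=1.0,
                        demand_pmf=[(0, 1/3), (1, 1/3), (2, 1/3)]))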


A simple algorithm that proves half-integrality of bidirected network programming

NETWORKS: AN INTERNATIONAL JOURNAL, Issue 1 2006
Ethan D. Bolker
Abstract In a bidirected graph, each end of each edge is independently oriented. We show how to express any column of the incidence matrix as a half-integral linear combination of any column basis, through a simplification, based on an idea of Bolker, of a combinatorial algorithm of Appa and Kotnyek. Corollaries are that the inverse of each nonsingular square submatrix has entries 0, ±½ and ±1, and that a bidirected integral linear program has half-integral solutions. © 2006 Wiley Periodicals, Inc. NETWORKS, Vol. 48(1), 36–38, 2006 [source]
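A tiny numerical check makes the half-integrality statement tangible. The sketch below builds the incidence matrix of a small bidirected graph (an all-positive edge, a directed edge, an edge with both ends at one vertex, and a half-edge), takes a column basis, and verifies that solving for another column, and inverting the basis, produces only multiples of one half. The particular graph is an invented example, not one from the paper.

    import numpy as np
    from fractions import Fraction

    # Columns of a bidirected incidence matrix (rows = vertices v1..v3):
    #   e1 joins v1 and v2 with both ends positive, e2 is a directed edge v2 -> v3,
    #   e3 has both ends at v1 (entry 2), and e4 is a half-edge at v1.
    B_basis = np.array([[1.0, 0.0, 2.0],
                        [1.0, 1.0, 0.0],
                        [0.0, -1.0, 0.0]])
    e4 = np.array([1.0, 0.0, 0.0])

    x = np.linalg.solve(B_basis, e4)
    print([Fraction(float(v)).limit_denominator(2) for v in x])   # e.g. [0, 0, 1/2]
    print(np.linalg.inv(B_basis))                                 # entries in {0, ±1/2, ±1}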


The ring grooming problem

NETWORKS: AN INTERNATIONAL JOURNAL, Issue 3 2004
Timothy Y. Chow
Abstract The problem of minimizing the number of bidirectional SONET rings required to support a given traffic demand has been studied by several researchers. Here we study the related ring-grooming problem of minimizing the number of add/drop locations instead of the number of rings; in a number of situations this is a better approximation to the true equipment cost. Our main result is a new lower bound for the case of uniform traffic. This allows us to prove that a certain simple algorithm for uniform traffic is, in fact, a constant-factor approximation algorithm, and it also demonstrates that known lower bounds for the general problem, in particular the linear programming relaxation, are not within a constant factor of the optimum. We also show that our results for uniform traffic extend readily to the more practically important case of quasi-uniform traffic. Finally, we show that if the number of nodes on the ring is fixed, then ring grooming is solvable in polynomial time; however, whether ring grooming is fixed-parameter tractable is still an open question. © 2004 Wiley Periodicals, Inc. NETWORKS, Vol. 44(3), 194–202, 2004 [source]


Efficient communication in unknown networks

NETWORKS: AN INTERNATIONAL JOURNAL, Issue 1 2001
Luisa Gargano
Abstract We consider the problem of disseminating messages in networks. We are interested in information dissemination algorithms in which machines operate independently without any knowledge of the network topology or size. Three communication tasks of increasing difficulty are studied. In blind broadcasting (BB), the goal is to communicate the source message to all nodes. In acknowledged blind broadcasting (ABB), the goal is to achieve BB and inform the source about it. Finally, in full synchronization (FS), all nodes must simultaneously enter the state terminated after receiving the source message. The algorithms should be efficient both in terms of the time required and the communication overhead they put on the network. We limit the latter by allowing every node to send a message to at most one neighbor in each round. We show that BB is achieved in time at most 2n in any n-node network and show networks in which time 2n − o(n) is needed. For ABB, we show algorithms working in time (2 + ε)n, for any fixed positive constant ε and sufficiently large n. Thus, for both BB and ABB, our algorithms are close to optimal. Finally, we show a simple algorithm for FS working in time 3n and a more complicated algorithm which works in time 2.9n. The optimal time of full synchronization remains an open problem. © 2001 John Wiley & Sons, Inc. [source]
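To make the communication model concrete, the simulation below runs one natural blind-broadcasting strategy under the stated constraint that each node sends to at most one neighbor per round: every informed node simply cycles through its ports in a fixed local order, oblivious to the topology. The round-robin strategy and the ring example are assumptions for illustration; the paper's own protocols and bounds are not reproduced here.

    def blind_broadcast_rounds(adj, source):
        """Round-based simulation of blind broadcasting: in each round every
        informed node sends the source message to the next neighbor in its own
        fixed port order, until all nodes are informed.  Returns the round count."""
        informed = {source}
        next_port = {v: 0 for v in adj}
        rounds = 0
        while len(informed) < len(adj):
            rounds += 1
            newly = set()
            for v in list(informed):
                neighbor = adj[v][next_port[v] % len(adj[v])]
                next_port[v] += 1
                newly.add(neighbor)
            informed |= newly
        return rounds

    # a ring of 8 nodes: completion well within the 2n bound quoted above
    ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
    print(blind_broadcast_rounds(ring, source=0))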


Optimal control of a revenue management system with dynamic pricing facing linear demand

OPTIMAL CONTROL APPLICATIONS AND METHODS, Issue 6 2006
Fee-Seng Chou
Abstract This paper considers a dynamic pricing problem over a finite horizon where demand for a product is a time-varying linear function of price. It is assumed that at the start of the horizon there is a fixed amount of the product available. The decision problem is to determine the optimal price at each time period in order to maximize the total revenue generated from the sale of the product. In order to obtain structural results we formulate the decision problem as an optimal control problem and solve it using Pontryagin's principle. For those problems which are not easily solvable when formulated as an optimal control problem, we present a simple convergent algorithm based on Pontryagin's principle that involves solving a sequence of very small quadratic programming (QP) problems. We also consider the case where the initial inventory of the product is a decision variable. We then analyse the two-product version of the problem where the linear demand functions are defined in the sense of Bertrand and we again solve the problem using Pontryagin's principle. A special case of the optimal control problem is solved by transforming it into a linear complementarity problem. For the two-product problem we again present a simple algorithm that involves solving a sequence of small QP problems and also consider the case where the initial inventory levels are decision variables. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Complex quasiperiodic self-similar tilings: their parameterization, boundaries, complexity, growth and symmetry

ACTA CRYSTALLOGRAPHICA SECTION A, Issue 3 2010
A. V. Shutov
A class of quasiperiodic tilings of the complex plane is discussed. These tilings are based on β-expansions corresponding to cubic irrationalities. There are three classes of tilings: Q3, Q4 and Q5. These classes consist of three, four and five pairwise similar prototiles, respectively. A simple algorithm for construction of these tilings is considered. This algorithm uses greedy expansions of natural numbers on some sequence. Weak and strong parameterizations for tilings are obtained. Layerwise growth, the complexity function and the structure of fractal boundaries of tilings are studied. The parameterization of vertices and boundaries of tilings, and also similarity transformations of tilings, are considered. [source]
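The construction algorithm mentioned above relies on greedy expansions of natural numbers over a fixed integer sequence. The helper below shows that greedy step in isolation: subtract the largest usable term of the sequence until the remainder is exhausted. The particular sequence used here (a Tribonacci-like cubic recurrence) and all names are assumptions for illustration, since the paper's sequence is not reproduced in the abstract.

    def greedy_expansion(n, sequence):
        """Greedy expansion of a natural number over a given increasing sequence:
        repeatedly subtract the largest term not exceeding the remainder."""
        terms, remainder = [], n
        for value in sorted(set(sequence), reverse=True):
            while value <= remainder:
                terms.append(value)
                remainder -= value
            if remainder == 0:
                break
        return terms, remainder          # remainder is 0 when the expansion is exact

    # a Tribonacci-like sequence a(k) = a(k-1) + a(k-2) + a(k-3)
    seq = [1, 2, 4, 7, 13, 24, 44, 81]
    print(greedy_expansion(100, seq))    # ([81, 13, 4, 2], 0)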

