Binary Variables (binary + variable)

Selected Abstracts


A National Study of Efficiency for Dialysis Centers: An Examination of Market Competition and Facility Characteristics for Production of Multiple Dialysis Outputs

HEALTH SERVICES RESEARCH, Issue 3 2002
Hacer Ozgen
Objective. To examine market competition and facility characteristics that can be related to technical efficiency in the production of multiple dialysis outputs from the perspective of the industrial organization model. Study Setting. Freestanding dialysis facilities that operated in 1997, submitted cost report forms to the Health Care Financing Administration (HCFA), and offered all three outputs: outpatient dialysis, dialysis training, and home program dialysis. Data Sources. The Independent Renal Facility Cost Report Data file (IRFCRD) from HCFA was utilized to obtain information on output and input variables and market and facility features for 791 multiple-output facilities. Information regarding population characteristics was obtained from the Area Resources File. Study Design. Cross-sectional data for the year 1997 were utilized to obtain facility-specific technical efficiency scores estimated through Data Envelopment Analysis (DEA). A binary variable of efficiency status was then regressed against market and facility characteristics and control factors in a multivariate logistic regression analysis. Principal Findings. The majority of the facilities in the sample operate technically inefficiently. Neither the intensity of market competition nor a policy of dialyzer reuse has a significant effect on the facilities' efficiency. Technical efficiency is significantly associated, however, with type of ownership, with the interaction between the market concentration of for-profits and ownership type, and with affiliation with chains of different sizes. Nonprofit and government-owned facilities are more likely than their for-profit counterparts to be inefficient producers of renal dialysis outputs. That relationship between ownership form and efficiency is reversed, however, as the market concentration of for-profits in a given market increases. Facilities that are members of large chains are more likely to be technically inefficient. Conclusions. Facilities do not appear to benefit from joint production of a variety of dialysis outputs, which may explain the ongoing tendency toward single-output production. Ownership form does make a positive difference in production efficiency, but only in local markets where competition exists between nonprofit and for-profit facilities. The increasing inefficiency associated with membership in large chains suggests that the growing consolidation in the dialysis industry may not, in fact, be the strategy for attaining more technical efficiency in the production of multiple dialysis outputs. [source]
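
A minimal sketch of the two-stage design described above: synthetic DEA-style efficiency scores are dichotomized into an efficiency-status indicator, which is then regressed on facility characteristics with a logit model. The data, the 0.9 cutoff, and all covariate names are illustrative assumptions, not the study's actual variables.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 791  # number of multiple-output facilities in the study

# Stage 1 stand-in: pretend these are DEA technical-efficiency scores in (0, 1].
score = rng.beta(5, 2, size=n)
efficient = (score >= 0.9).astype(int)  # hypothetical cutoff for "efficient" status

# Hypothetical facility characteristics (the study used ownership type,
# chain affiliation, for-profit market concentration, and controls).
for_profit = rng.integers(0, 2, size=n)
large_chain = rng.integers(0, 2, size=n)
fp_concentration = rng.uniform(0, 1, size=n)

X = sm.add_constant(np.column_stack([
    for_profit,
    large_chain,
    fp_concentration,
    for_profit * fp_concentration,  # ownership-by-concentration interaction
]))
fit = sm.Logit(efficient, X).fit(disp=False)
print(fit.summary())
```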


What makes a blockbuster?

MANAGERIAL AND DECISION ECONOMICS, Issue 6 2002
Economic analysis of film success in the United Kingdom
In this paper, we attempt to evaluate whether a film's commercial performance can be forecast. The statistical distribution of film revenues in the UK is examined and found to have unbounded variance. This undermines much of the existing work relating a film's performance to its identifiable attributes within an OLS model. We adopt De Vany and Walls' approach, transforming the revenue data into a binary variable and estimating the probability that a film's revenue will exceed a given threshold value; in other words, the probability of a blockbuster. Furthermore, we provide a sensitivity analysis around these threshold values. Copyright © 2002 John Wiley & Sons, Ltd. [source]
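
A toy version of the threshold approach, assuming a logit link and made-up covariates (log budget and a star-power dummy). The paper's actual specification and data differ; this only illustrates the mechanics of dichotomizing heavy-tailed revenues at several thresholds and checking sensitivity.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
# Heavy-tailed synthetic revenues (Pareto-like), echoing the unbounded-variance finding.
budget = rng.lognormal(2.0, 1.0, n)
star = rng.integers(0, 2, n)
revenue = (budget ** 0.8) * np.exp(0.5 * star) * (rng.pareto(1.5, n) + 1)

X = sm.add_constant(np.column_stack([np.log(budget), star]))
for q in (0.5, 0.75, 0.9):  # sensitivity analysis over threshold choices
    hit = (revenue > np.quantile(revenue, q)).astype(int)  # binary "blockbuster"
    fit = sm.Logit(hit, X).fit(disp=False)
    print(f"threshold = {q:.0%} quantile, coefficients = {fit.params.round(2)}")
```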


Simulating the spatial distribution of clay layer occurrence depth in alluvial soils with a Markov chain geostatistical approach

ENVIRONMETRICS, Issue 1 2010
Weidong Li
Abstract The spatial distribution information of clay layer occurrence depth (CLOD), particularly spatial distribution maps of the occurrence of clay layers at depths less than a certain threshold, in alluvial soils is crucial to designing appropriate plans and measures for precision agriculture and environmental management in alluvial plains. Markov chain geostatistics (MCG), which was proposed recently for simulating categorical spatial variables, can objectively decrease spatial uncertainty, and consequently increase prediction accuracy in simulated results, by using nonlinear estimators and incorporating various interclass relationships. In this paper, an MCG method was suggested to simulate the CLOD in a meso-scale alluvial soil area by encoding the continuous variable with several threshold values into binary variables (for single thresholds) or a multi-class variable (for all thresholds considered together). Related optimal prediction maps, realization maps, and occurrence probability maps for all of these indicator-coded variables were generated. The simulated results displayed the spatial distribution characteristics of CLOD within different soil depths in the study area, which not only help in understanding the spatial heterogeneity of clay layers in alluvial soils but also provide valuable quantitative information for precision agricultural management and environmental study. The study indicated that MCG could be a powerful method for simulating discretized continuous spatial variables. Copyright © 2009 John Wiley & Sons, Ltd. [source]
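
The indicator-coding step can be sketched directly: each depth threshold yields one binary variable, and all thresholds taken together yield a single multi-class code. The depths and cutoffs below are invented stand-ins; the geostatistical simulation itself is not reproduced.

```python
import numpy as np

# Synthetic clay-layer occurrence depths (m) at a handful of sample points.
rng = np.random.default_rng(2)
depth = rng.uniform(0.2, 2.0, size=10)

thresholds = [0.5, 1.0, 1.5]  # illustrative depth cutoffs (m)

# Single-threshold coding: one binary indicator variable per cutoff.
binary_codes = {f"<= {t} m": (depth <= t).astype(int) for t in thresholds}

# All thresholds together: one multi-class code (0..len(thresholds)) per point.
multi_class = np.digitize(depth, thresholds)

print(binary_codes)
print(multi_class)
```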


Topology optimization by a neighbourhood search method based on efficient sensitivity calculations

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 12 2006
K. Svanberg
Abstract This paper deals with topology optimization of discretized load-carrying continuum structures, where the design of the structure is represented by binary design variables indicating material or void in the various finite elements. Efficient exact methods for discrete sensitivity calculations are developed. They utilize the fact that if just one or two binary variables are changed to their opposite binary values, then the new stiffness matrix is essentially just a low-rank modification of the old stiffness matrix, even if some nodes in the structure may disappear or re-enter. As an application of these efficient sensitivity calculations, a new neighbourhood search method is presented, implemented, and applied to some test problems, one of them with 6912 nine-node finite elements where the von Mises stress in each non-void element is considered. Copyright © 2006 John Wiley & Sons, Ltd. [source]
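
The low-rank update idea can be sketched with the Sherman-Morrison-Woodbury identity: rather than refactoring the stiffness matrix after a one- or two-element design flip, reuse the existing factorization. The matrix sizes, the rank r, and the random data below are placeholders, and the paper's element-level bookkeeping (including vanishing and re-entering nodes) is not reproduced.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(3)
n, r = 50, 4                     # r = rank of the element-flip update (assumed)
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)      # stand-in for the global stiffness matrix
f = rng.standard_normal(n)       # load vector

factor = cho_factor(K)           # factor K once, reuse for every neighbour move

# Flipping one or two element densities perturbs K by a low-rank term U @ U.T.
U = rng.standard_normal((n, r))

def solve_updated(rhs):
    """Solve (K + U U^T) x = rhs via Sherman-Morrison-Woodbury,
    reusing the factorization of K instead of refactoring."""
    Kinv_rhs = cho_solve(factor, rhs)
    Kinv_U = cho_solve(factor, U)
    small = np.eye(r) + U.T @ Kinv_U  # r x r capacitance matrix
    return Kinv_rhs - Kinv_U @ np.linalg.solve(small, U.T @ Kinv_rhs)

x_fast = solve_updated(f)
x_ref = np.linalg.solve(K + U @ U.T, f)
print(np.allclose(x_fast, x_ref))  # True: same solution, no refactorization
```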


Dempster-Shafer models for object recognition and classification

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 3 2006
A.P. Dempster
We consider situations in which each individual member of a defined object set is characterized uniquely by a set of variables, and we propose models and associated methods that recognize or classify a newly observed individual. Inputs consist of uncertain observations on the new individual and on a memory bank of previously identified individuals. Outputs consist of uncertain inferences concerning degrees of agreement between the new object and previously identified objects or object classes, with inferences represented by Dempster-Shafer belief functions. We illustrate the approach using models constructed from independent simple support belief functions defined on binary variables. In the case of object recognition, our models lead to marginal belief functions concerning how well the new object matches objects in memory. In the classification model, we compute beliefs and plausibilities that the new object lies in defined subsets of an object set. When regarded as similarity measures, our belief and plausibility functions can be interpreted as candidate membership functions in the terminology of fuzzy logic. © 2006 Wiley Periodicals, Inc. Int J Int Syst 21: 283-297, 2006. [source]
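
A minimal sketch of combining two independent simple support belief functions on a binary frame with Dempster's rule of combination; the evidence sources and mass values are invented for illustration, and none of the paper's recognition models are reproduced.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for mass functions given as dicts
    mapping focal elements (frozensets) to masses."""
    raw, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to the empty set
    return {s: w / (1.0 - conflict) for s, w in raw.items()}

theta = frozenset({"match", "no_match"})  # the binary frame of discernment
# Two independent simple support functions (hypothetical evidence sources):
m_colour = {frozenset({"match"}): 0.7, theta: 0.3}     # evidence for a match
m_shape = {frozenset({"no_match"}): 0.4, theta: 0.6}   # evidence against

m = combine(m_colour, m_shape)
belief = m.get(frozenset({"match"}), 0.0)
plausibility = sum(w for s, w in m.items() if "match" in s)
print(f"Bel(match) = {belief:.3f}, Pl(match) = {plausibility:.3f}")
```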


The effect of misclassification on the estimation of association: a review

INTERNATIONAL JOURNAL OF METHODS IN PSYCHIATRIC RESEARCH, Issue 2 2005
Michael Höfler
Abstract Misclassification, the erroneous measurement of one or several categorical variables, is a major concern in many scientific fields and particularly in psychiatric research. Even in rather simple scenarios, unless the misclassification probabilities are very small, a major bias can arise in estimating the degree of association assessed with common measures like the risk ratio and the odds ratio. Only in very special cases, for example, if misclassification takes place solely in one of two binary variables and is independent of the other variable ('non-differential misclassification'), is it guaranteed that the estimates are biased towards the null value (which is 1 for the risk ratio and the odds ratio). Furthermore, misclassification, if ignored, usually leads to confidence intervals that are too narrow. This paper reviews consequences of misclassification. A numerical example demonstrates the problem's magnitude for the estimation of the risk ratio in the easy case where misclassification takes place in the exposure variable, but not in the outcome. Moreover, uncertainty about misclassification can broaden the confidence intervals dramatically. The best way to overcome misclassification is to avoid it by design, but some statistical methods are useful for reducing bias if misclassification cannot be avoided. Copyright © 2005 Whurr Publishers Ltd. [source]
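
The attenuation described here is easy to reproduce numerically. In the sketch below, a true risk ratio of 2.0 shrinks to roughly 1.6 once non-differential misclassification of the exposure (assumed sensitivity 0.8, specificity 0.9) is applied; all numbers are illustrative, not taken from the paper.

```python
# True structure: exposure prevalence and outcome risks (assumed numbers).
p_exposed = 0.3
risk_exposed, risk_unexposed = 0.20, 0.10   # true risk ratio = 2.0

sens, spec = 0.8, 0.9  # probability an exposed/unexposed subject is coded correctly

# Expected composition of the groups after non-differential exposure
# misclassification: the observed "exposed" group mixes true exposed subjects
# (retained by sens) with false positives (leaked in by 1 - spec).
obs_exp = p_exposed * sens + (1 - p_exposed) * (1 - spec)
risk_obs_exp = (p_exposed * sens * risk_exposed
                + (1 - p_exposed) * (1 - spec) * risk_unexposed) / obs_exp
obs_unexp = 1 - obs_exp
risk_obs_unexp = (p_exposed * (1 - sens) * risk_exposed
                  + (1 - p_exposed) * spec * risk_unexposed) / obs_unexp

print(f"true RR = {risk_exposed / risk_unexposed:.2f}, "
      f"observed RR = {risk_obs_exp / risk_obs_unexp:.2f}")  # biased toward 1
```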


Global optimization of mixed-integer nonlinear problems

AICHE JOURNAL, Issue 9 2000
C. S. Adjiman
Two novel deterministic global optimization algorithms for nonconvex mixed-integer nonlinear problems (MINLPs) are proposed, using the advances of the αBB algorithm for nonconvex NLPs of Adjiman et al. The special structure mixed-integer αBB algorithm (SMIN-αBB) addresses problems with nonconvexities in the continuous variables and linear and mixed-bilinear participation of the binary variables. The general structure mixed-integer αBB algorithm (GMIN-αBB) is applicable to a very general class of problems for which the continuous relaxation is twice continuously differentiable. Both algorithms are developed using the concepts of branch-and-bound, but they differ in their approach to each of the required steps. The SMIN-αBB algorithm is based on the convex underestimation of the continuous functions, while the GMIN-αBB algorithm is centered around the convex relaxation of the entire problem. Both algorithms rely on optimization or interval-based variable-bound updates to enhance efficiency. A series of medium-size engineering applications demonstrates the performance of the algorithms. Finally, a comparison of the two algorithms on the same problems highlights the value of algorithms that can handle binary or integer variables without reformulation. [source]
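
A generic sketch of the shared branch-and-bound skeleton: branch directly on the binary variables (no reformulation) and bound each node with a continuous [0, 1] relaxation. A convex quadratic objective stands in for the convex underestimators that αBB would construct for a nonconvex function; none of the SMIN-/GMIN-specific machinery is reproduced.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 6
A = rng.standard_normal((n, n))
Q = A @ A.T + np.eye(n)  # positive definite, so the relaxation bound is valid
c = rng.standard_normal(n)

def f(x):
    return x @ Q @ x + c @ x

best = {"val": np.inf, "x": None}

def bnb(fixed):
    """Branch on the binaries; bound with the continuous [0,1] relaxation."""
    lo = np.array([fixed.get(i, 0.0) for i in range(n)])
    hi = np.array([fixed.get(i, 1.0) for i in range(n)])
    res = minimize(f, x0=(lo + hi) / 2, bounds=list(zip(lo, hi)))
    if res.fun >= best["val"] - 1e-9:  # prune: bound cannot beat the incumbent
        return
    free = [i for i in range(n) if i not in fixed]
    if not free:  # all binaries fixed: new incumbent
        best["val"], best["x"] = res.fun, np.round(res.x).astype(int)
        return
    for v in (0.0, 1.0):  # branch on the first free binary variable
        bnb({**fixed, free[0]: v})

bnb({})
print(best["x"], round(best["val"], 4))
```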


Binary models for marginal independence

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 2 2008
Mathias Drton
Summary. Log-linear models are a classical tool for the analysis of contingency tables. In particular, the subclass of graphical log-linear models provides a general framework for modelling conditional independences. However, with the exception of special structures, marginal independence hypotheses cannot be accommodated by these traditional models. Focusing on binary variables, we present a model class that provides a framework for modelling marginal independences in contingency tables. The approach that is taken is graphical and draws on analogies with multivariate Gaussian models for marginal independence. For the graphical model representation we use bidirected graphs, which are in the tradition of path diagrams. We show how the models can be parameterized in a simple fashion, and how maximum likelihood estimation can be performed by using a version of the iterated conditional fitting algorithm. Finally we consider combining these models with symmetry restrictions. [source]
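
To see what a marginal (rather than conditional) independence hypothesis constrains, here is a small sketch with an invented 2x2x2 count table: the hypothesis concerns the 2x2 margin obtained by summing out the third variable. This tests one such hypothesis only; the paper's bidirected-graph models and the iterated conditional fitting algorithm are not reproduced.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Invented 2x2x2 contingency table of counts for binary variables (X1, X2, X3).
table = np.array([[[30, 10], [20, 15]],
                  [[25, 12], [18, 40]]])

# Marginal independence X1 _||_ X2 constrains the 2x2 table after summing
# out X3, not the X3-specific slices that conditional models constrain.
margin = table.sum(axis=2)
chi2, p, dof, expected = chi2_contingency(margin, correction=False)
print(margin)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```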


The one-warehouse multiretailer problem with an order-up-to level inventory policy

NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 7 2010
Oğuz Solyalı
Abstract We consider a two-level system in which a warehouse manages the inventories of multiple retailers. Each retailer employs an order-up-to level inventory policy over T periods and faces an external demand which is dynamic and known. A retailer's inventory should be raised to its maximum limit when replenished. The problem is to jointly decide on replenishment times and quantities of warehouse and retailers so as to minimize the total costs in the system. Unlike the case in the single-level lot-sizing problem, we cannot assume that the initial inventory will be zero without loss of generality. We propose a strong mixed integer program formulation for the problem with zero and nonzero initial inventories at the warehouse. The strong formulation for the zero initial inventory case has only T binary variables and represents the convex hull of the feasible region of the problem when there is only one retailer. Computational results with a state-of-the-art solver reveal that our formulations are very effective in solving large-size instances to optimality. © 2010 Wiley Periodicals, Inc. Naval Research Logistics, 2010 [source]
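
A sketch of the order-up-to logic as a small mixed integer program, using PuLP with the bundled CBC solver as a stand-in for the paper's formulation and solver. It covers a single retailer with zero initial inventory and T binary order variables; the demand data, cost values, and big-M linearization are assumptions for illustration, not the paper's strong formulation.

```python
import pulp

T = 6
demand = [40, 60, 30, 80, 20, 50]  # known dynamic demand (illustrative)
S = 120                            # retailer's order-up-to (maximum) level
K, h = 100.0, 1.0                  # fixed order cost, unit holding cost
M = sum(demand)                    # big-M for the linearization

prob = pulp.LpProblem("order_up_to_lot_sizing", pulp.LpMinimize)
y = pulp.LpVariable.dicts("order", range(T), cat="Binary")  # the T binaries
q = pulp.LpVariable.dicts("qty", range(T), lowBound=0)
inv = pulp.LpVariable.dicts("inv", range(T), lowBound=0)

prob += pulp.lpSum(K * y[t] + h * inv[t] for t in range(T))

for t in range(T):
    prev = inv[t - 1] if t > 0 else 0           # zero initial inventory case
    prob += inv[t] == prev + q[t] - demand[t]   # inventory balance
    prob += q[t] <= M * y[t]                    # can order only when set up
    prob += prev + q[t] <= S                    # never exceed the maximum level
    prob += prev + q[t] >= S - M * (1 - y[t])   # if ordering, raise stock to S

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("order periods:", [t for t in range(T) if pulp.value(y[t]) > 0.5])
print("total cost:", pulp.value(prob.objective))
```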


Capacitated lot-sizing and scheduling with parallel machines, back-orders, and setup carry-over

NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 4 2009
Daniel Quadt
Abstract We address the capacitated lot-sizing and scheduling problem with setup times, setup carry-over, back-orders, and parallel machines as it appears in a semiconductor assembly facility. The problem can be formulated as an extension of the capacitated lot-sizing problem with linked lot-sizes (CLSPL). We present a mixed-integer programming (MIP) formulation of the problem and a new solution procedure. The solution procedure is based on a novel "aggregate model," which uses integer instead of binary variables. The model is embedded in a period-by-period heuristic and is solved to optimality or near-optimality in each iteration using standard procedures (CPLEX). A subsequent scheduling routine loads and sequences the products on the parallel machines. Six variants of the heuristic are presented and tested in an extensive computational study. © 2009 Wiley Periodicals, Inc. Naval Research Logistics 2009 [source]
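
The "aggregate model" idea, integer machine-count setup variables in place of machine-indexed binaries, can be sketched as below (PuLP/CBC as stand-ins; the demand data, costs, and identical-machine assumption are invented, and the paper's period-by-period heuristic and scheduling routine are not reproduced).

```python
import pulp

T, J, m = 4, 2, 3         # periods, products, identical parallel machines
cap = 100                 # per-machine capacity in one period
d = [[80, 120, 60, 150],  # demand of product 0 over the 4 periods
     [90, 40, 110, 70]]   # demand of product 1
setup_c, hold_c, back_c = 50.0, 1.0, 5.0

prob = pulp.LpProblem("aggregate_clsp", pulp.LpMinimize)
# Integer setup counts replace machine-indexed binaries:
# z[j][t] = number of machines set up for product j in period t.
z = pulp.LpVariable.dicts("setups", (range(J), range(T)),
                          lowBound=0, upBound=m, cat="Integer")
x = pulp.LpVariable.dicts("prod", (range(J), range(T)), lowBound=0)
inv = pulp.LpVariable.dicts("inv", (range(J), range(T)), lowBound=0)
bko = pulp.LpVariable.dicts("back", (range(J), range(T)), lowBound=0)

prob += pulp.lpSum(setup_c * z[j][t] + hold_c * inv[j][t] + back_c * bko[j][t]
                   for j in range(J) for t in range(T))

for t in range(T):
    prob += pulp.lpSum(z[j][t] for j in range(J)) <= m  # machine pool limit
    for j in range(J):
        prev_i = inv[j][t - 1] if t > 0 else 0
        prev_b = bko[j][t - 1] if t > 0 else 0
        # Inventory balance with back-orders allowed at a cost.
        prob += inv[j][t] - bko[j][t] == prev_i - prev_b + x[j][t] - d[j][t]
        prob += x[j][t] <= cap * z[j][t]  # production needs set-up machines
for j in range(J):
    prob += bko[j][T - 1] == 0            # all demand met by the horizon end

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("total cost:", pulp.value(prob.objective))
```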


A branch and bound algorithm for computing optimal replacement policies in consecutive k-out-of-n systems

NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 3 2002
James Flynn
Abstract This paper presents a branch and bound algorithm for computing optimal replacement policies in a discrete-time, infinite-horizon, dynamic programming model of a binary coherent system with n statistically independent components, and then specializes the algorithm to consecutive k-out-of-n systems. The objective is to minimize the long-run expected average undiscounted cost per period. (Costs arise when the system fails and when failed components are replaced.) An earlier paper established the optimality of following a critical component policy (CCP), i.e., a policy specified by a critical component set and the rule: Replace a component if and only if it is failed and in the critical component set. Computing an optimal CCP is an optimization problem with n binary variables and a nonlinear objective function. Our branch and bound algorithm for solving this problem has memory storage requirement O(n) for consecutive k-out-of-n systems. Extensive computational experiments on such systems involving over 350,000 test problems with n ranging from 10 to 150 find this algorithm to be effective when n ≤ 40 or k is near n. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 288-302, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10017 [source]
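
For intuition about the search space, a brute-force sketch for a tiny instance: enumerate all 2^n critical component sets and estimate each CCP's long-run average cost by simulation. The failure probability and cost values are invented, and the per-period cost accounting is a simplified stand-in for the paper's dynamic programming model; the branch and bound algorithm itself is not reproduced.

```python
import itertools
import numpy as np

def system_failed(working, k):
    """A consecutive k-out-of-n:F system fails iff k consecutive components are down."""
    run = 0
    for up in working:
        run = 0 if up else run + 1
        if run >= k:
            return True
    return False

def avg_cost(critical, n, k, p_fail=0.1, c_rep=1.0, c_sys=25.0,
             periods=4000, seed=42):
    """Crude simulation estimate of the long-run average cost per period of
    the CCP: replace a component iff it is failed and in the critical set."""
    rng = np.random.default_rng(seed)
    working = np.ones(n, dtype=bool)
    total = 0.0
    for _ in range(periods):
        working &= rng.random(n) >= p_fail  # independent component failures
        if system_failed(working, k):
            total += c_sys                   # system-failure cost this period
        replace = ~working & critical        # failed AND in the critical set
        total += c_rep * replace.sum()       # per-component replacement cost
        working |= replace
    return total / periods

n, k = 6, 2
costs = {bits: avg_cost(np.array(bits), n, k)
         for bits in itertools.product([False, True], repeat=n)}
best = min(costs, key=costs.get)
print(best, round(costs[best], 3))
```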


Path inequalities for the vehicle routing problem with time windows

NETWORKS: AN INTERNATIONAL JOURNAL, Issue 4 2007
Brian Kallehauge
Abstract In this paper we introduce a new formulation of the vehicle routing problem with time windows (VRPTW) involving only binary variables. The new formulation is based on the formulation of the asymmetric traveling salesman problem with time windows by Ascheuer et al. (Networks 36 (2000) 69-79) and has the advantage of avoiding additional variables and linking constraints. In the new formulation, time windows are modeled using path inequalities that eliminate time and capacity infeasible paths. We present a new class of strengthened path inequalities based on the polyhedral results obtained by Mak (Ph.D. Thesis, 2001) for a variant of the TSP. We study the VRPTW polytope and determine its dimension. We show that the lifted path inequalities are facet defining under certain assumptions. We also introduce precedence constraints in the context of the VRPTW. Computational experiments are performed with a branch and cut algorithm on the Solomon test problems with wide time windows. Based on results on 25-node problems, the outcome is promising compared to leading algorithms in the literature. In particular, we report a solution to a previously unsolved 50-node Solomon test problem, R208. The conclusion is therefore that a polyhedral approach to the VRPTW is a viable alternative to the path formulation of Desrochers et al. (Oper Res 40 (1992), 342-354). © 2007 Wiley Periodicals, Inc. NETWORKS, Vol. 49(4), 273-293, 2007 [source]
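
The feasibility notion behind the path inequalities can be sketched directly: a customer sequence is time-infeasible if forward-propagating earliest service times ever overshoots a window, and such paths can then be cut off. The network data below are invented; the generation and lifting of the actual inequalities inside branch and cut is beyond this sketch.

```python
def time_infeasible(path, travel, windows, service=0.0):
    """Return True if the customer sequence cannot satisfy its time windows.
    Forward-propagates the earliest feasible service start along the path."""
    t = windows[path[0]][0]
    for u, v in zip(path, path[1:]):
        t = max(t + service + travel[u, v], windows[v][0])  # wait if early
        if t > windows[v][1]:
            return True
    return False

# Illustrative data: depot 0 and three customers with (earliest, latest) windows.
windows = {0: (0, 100), 1: (10, 20), 2: (30, 40), 3: (15, 25)}
travel = {(0, 1): 8, (1, 2): 25, (2, 3): 5, (1, 3): 4, (0, 3): 12, (3, 2): 10}

print(time_infeasible([0, 1, 2, 3], travel, windows))  # True: customer 3 missed
print(time_infeasible([0, 1, 3, 2], travel, windows))  # False: reordering fits
```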