Feasible Solution
Selected Abstracts

Preference-Based Constrained Optimization with CP-Nets
COMPUTATIONAL INTELLIGENCE, Issue 2 2004, Craig Boutilier

Many artificial intelligence (AI) tasks, such as product configuration, decision support, and the construction of autonomous agents, involve a process of constrained optimization, that is, optimization of behavior or choices subject to given constraints. In this paper we present an approach for constrained optimization based on a set of hard constraints and a preference ordering represented using a CP-network, a graphical model for representing qualitative preference information. This approach offers both pragmatic and computational advantages. First, it provides a convenient and intuitive tool for specifying the problem, and in particular, the decision maker's preferences. Second, it admits an algorithm for finding the most preferred feasible (Pareto-optimal) outcomes that has the following anytime property: the set of preferred feasible outcomes is enumerated without backtracking. In particular, the first feasible solution generated by this algorithm is Pareto optimal. [source]

Mobile Agent Computing Paradigm for Building a Flexible Structural Health Monitoring Sensor Network
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 7 2010, Bo Chen

While the sensor network approach is a feasible solution for structural health monitoring, the design of wireless sensor networks presents a number of challenges, such as adaptability and the limited communication bandwidth. To address these challenges, we explore the mobile agent approach to enhance the flexibility and reduce raw data transmission in wireless structural health monitoring sensor networks. An integrated wireless sensor network consisting of a mobile agent-based network middleware and distributed high computational power sensor nodes is developed.
These embedded computer-based high computational power sensor nodes run the Linux operating system, integrate open source numerical libraries, and connect to multimodality sensors to support both active and passive sensing. The mobile agent middleware is built on a mobile agent system called Mobile-C. The middleware allows a sensor network to move computational programs to the data source. With mobile agent middleware, a sensor network is able to adopt newly developed diagnosis algorithms and make adjustments in response to operational or task changes. The presented mobile agent approach has been validated for structural damage diagnosis using a scaled steel bridge. [source]

Resource reservations with fuzzy requests
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2006, T. Röblitz

We present a scheme for reserving job resources with imprecise requests. Typical parameters such as the estimated runtime, the start time, or the type or number of required CPUs need not be fixed at submission time but can be kept fuzzy in some aspects. Users may specify a list of preferences which guide the system in determining the best matching resources for the given job. Originally, the impetus for our work came from the need for efficient co-reservation mechanisms in the Grid, where rigid constraints on multiple job components often make it difficult to find a feasible solution. Our method for handling fuzzy reservation requests gives the users more freedom to specify the requirements and gives the Grid Reservation Service more flexibility to find optimal solutions. In the future, we will extend our methods to process co-reservations. We evaluated our algorithms with real workload traces from a large supercomputer site. The results indicate that our scheme greatly improves the flexibility of the solution process without having much effect on the overall workload of a site.
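The preference-guided matching that this scheme describes can be sketched in a few lines; the slot fields, scoring rule, and example data below are hypothetical illustrations, not details from the paper:

```python
# Hypothetical sketch of fuzzy-request matching: the start time is kept fuzzy
# as an acceptable window, and an ordered preference list ranks the feasible
# candidate slots lexicographically. Field names and data are illustrative.

def score(slot, request):
    """Rank a slot by distance from the user's ideal value for each
    criterion, in the user's stated preference order (lower is better)."""
    return tuple(abs(slot[criterion] - ideal)
                 for criterion, ideal in request["preferences"])

def best_slot(slots, request):
    """Return the best feasible slot for a fuzzy request, or None."""
    lo, hi = request["start_window"]
    feasible = [s for s in slots
                if lo <= s["start"] <= hi and s["cpus"] >= request["min_cpus"]]
    if not feasible:
        return None
    return min(feasible, key=lambda s: score(s, request))

slots = [
    {"start": 8, "cpus": 64},
    {"start": 10, "cpus": 32},
    {"start": 14, "cpus": 128},
]
request = {
    "start_window": (9, 15),                      # fuzzy start: anywhere 9..15
    "min_cpus": 32,
    "preferences": [("start", 9), ("cpus", 64)],  # earlier start matters most
}
print(best_slot(slots, request))
```

Here a fuzzy start time is a window rather than a point, and the ordered preference list is applied lexicographically, so the first-listed criterion dominates.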
From a user's perspective, only about 10% of the non-reservation jobs have a longer response time, and from a site administrator's view, the makespan of the original workload is extended by only 8% in the worst case. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Genome-wide DNA methylation profile of tissue-dependent and differentially methylated regions (T-DMRs) residing in mouse pluripotent stem cells
GENES TO CELLS, Issue 6 2010, Shinya Sato

The DNA methylation profile, consisting of tissue-dependent and differentially methylated regions (T-DMRs), has elucidated tissue-specific gene function in mouse tissues. Here, we identified and profiled thousands of T-DMRs in embryonic stem cells (ESCs), embryonic germ cells (EGCs) and induced pluripotent stem cells (iPSCs). T-DMRs of ESCs compared with somatic tissues well illustrated the gene function of ESCs, by hypomethylation at genes associated with CpG islands and nuclear events including the transcriptional regulation network of ESCs, and by hypermethylation at genes for tissue-specific function. These T-DMRs in EGCs and iPSCs showed DNA methylation similar to ESCs. iPSCs, however, showed hypomethylation at a considerable number of T-DMRs that were hypermethylated in ESCs, suggesting the existence of traceable progenitor epigenetic information. Thus, the DNA methylation profile of T-DMRs contributes to the mechanism of pluripotency, and can be a feasible solution for identification and evaluation of pluripotent cells. [source]

Fast ping-pong arbitration for input-output queued packet switches
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 7 2001, H. Jonathan Chao

Input-output queued switches have been widely considered the most feasible solution for large capacity packet switches and IP routers. In this paper, we propose a ping-pong arbitration scheme (PPA) for output contention resolution in input-output queued switches.
The challenge is to develop a high-speed and cost-effective arbitration scheme in order to maximize the switch throughput and delay performance for supporting multimedia services with various quality-of-service (QoS) requirements. The basic idea is to divide the inputs into groups and apply arbitration recursively. Our recursive arbiter is hierarchically structured, consisting of multiple small-size arbiters at each layer. The arbitration time of an n-input switch is proportional to log4 n/2 when we group every two inputs or every two input groups at each layer. We present a 256×256 terabit crossbar multicast packet switch using the PPA. The design shows that our scheme can reduce the arbitration time of the 256×256 switch to 11 gate delays, demonstrating that arbitration is no longer the bottleneck limiting the switch capacity. The priority handling in arbitration is also addressed. Copyright © 2001 John Wiley & Sons, Ltd. [source]

A review on coal-to-liquid fuels and its coal consumption
INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 10 2010, Mikael Höök

Continued reliance on oil is unsustainable, and this has resulted in interest in alternative fuels. Coal-to-liquids (CTL) can supply liquid fuels and has been successfully used in several cases, particularly in South Africa. This article reviews CTL theory and technology. Understanding the fundamental aspects of coal liquefaction technologies is vital for planning and policy-making, as future CTL systems will be integrated in a much larger global energy and fuel utilization system. Conversion ratios for CTL are generally estimated to be between 1 and 2 barrels per ton of coal. This puts a strict limitation on future CTL capacity imposed by future coal production volumes, regardless of other factors such as economics, emissions or environmental concerns. Assuming that 10% of world coal production can be diverted to CTL, the contribution to liquid fuel supply will be limited to only a few million barrels per day.
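The scale of that limit is easy to check with back-of-envelope arithmetic; the world coal production figure below is an outside assumption (roughly 7 billion tonnes per year around 2010), not a number from the review:

```python
# Rough check of the CTL supply ceiling. Assumes world coal production of
# about 7 billion tonnes per year (an outside estimate, not from the review)
# and the review's conversion ratio of 1-2 barrels per tonne.

WORLD_COAL_TONNES_PER_YEAR = 7e9   # assumption, roughly the 2010 level
DIVERTED_FRACTION = 0.10           # the review's 10% scenario

for barrels_per_tonne in (1.0, 1.5, 2.0):
    barrels_per_year = (WORLD_COAL_TONNES_PER_YEAR * DIVERTED_FRACTION
                        * barrels_per_tonne)
    mb_per_day = barrels_per_year / 365 / 1e6   # million barrels per day
    print(f"{barrels_per_tonne} bbl/t -> {mb_per_day:.1f} Mb/d")
```

Even at the optimistic 2 barrels per tonne, the 10% scenario yields under 4 million barrels per day, small against a world oil demand of tens of millions of barrels per day.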
This prevents CTL from becoming a viable mitigation plan for liquid fuel shortage on a global scale. However, it is still possible for individual nations to derive significant shares of their fuel supply from CTL, but those nations must also have access to equally significant coal production capacities. It is unrealistic to claim that CTL provides a feasible solution to liquid fuels shortages created by peak oil. For the most part, it can only be a minor contributor and must be combined with other strategies. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Observer design with guaranteed RMS gain for discrete-time LPV systems with Markovian jumps
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 6 2009, Giuseppe C. Calafiore

In this paper we consider the problem of designing state observers with guaranteed power-to-power (RMS) gain for a class of stochastic discrete-time linear systems that possess both measurable parameter variations and Markovian jumps in their dynamics. It is shown in the paper that an upper bound on the RMS gain of the observer can be characterized in terms of feasibility of a family of parameter-dependent linear matrix inequalities (LMIs). Any feasible solution to these LMIs can then be used to explicitly construct a parameter-varying jump observer that guarantees the desired performance level. This design framework is then specialized to a problem of state estimation for a linear parameter-varying plant whose state measurements are available through a lossy Bernoulli channel. Two numerical examples illustrate the results. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Robust fault estimation of uncertain systems using an LMI-based approach
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 18 2008, Euripedes G. Nobrega

Recent general techniques in fault detection and isolation (FDI) are based on H∞ optimization methods to address the issue of robustness in the presence of disturbances, uncertainties and modeling errors. Recently developed linear matrix inequality (LMI) optimization methods are currently used to design controllers and filters, which present several advantages over Riccati equation-based design methods. This article presents an LMI formulation to design full-order and reduced-order robust H∞ FDI filters to estimate the faulty input signals in the presence of uncertainty and model errors. Several cases are examined for nominal and uncertain plants, which consider a weight function for the disturbance and a reference model for the faults. The FDI LMI synthesis conditions are obtained based on the bounded real lemma for the nominal case and on a sufficient extension for the uncertain case. The conditions for the existence of a feasible solution form a convex problem for the full-order filter, which may be solved via recently developed LMI optimization techniques. For the reduced-order FDI filter, the inequalities include a non-convex constraint, and an alternating projections method is presented to address this case. The examples presented in this paper compare the simulated results of a structural model for the nominal and uncertain cases and show that a degree of conservatism exists in the robust fault estimation; however, more reliable solutions are achieved than with the nominal design. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Tree search algorithm for assigning cooperating UAVs to multiple tasks
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 2 2008, Steven J. Rasmussen

This paper describes a tree search algorithm for assigning cooperating homogeneous uninhabited aerial vehicles to multiple tasks.
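The LMI feasibility underlying the two designs above can be illustrated with the simplest member of that family, a discrete-time Lyapunov inequality: any P with AᵀPA − P ≺ 0 and P ≻ 0 certifies stability, and for a Schur-stable A a feasible P can be constructed by series summation. This is a generic pure-Python sketch with an arbitrary 2×2 example, not the parameter-dependent LMIs of either paper:

```python
# Constructing a feasible solution to a discrete-time Lyapunov LMI by series
# summation: for Schur-stable A, P = sum_k (A^T)^k Q A^k satisfies
# A^T P A - P = -Q with P positive definite. Toy 2x2 example in pure Python;
# the matrix A below is an arbitrary illustrative choice.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

A = [[0.5, 0.1],
     [0.0, 0.3]]          # Schur stable: eigenvalues 0.5 and 0.3
Q = [[1.0, 0.0],
     [0.0, 1.0]]

# P = sum over k of (A^T)^k Q A^k, truncated once the terms are negligible.
P = [[0.0, 0.0], [0.0, 0.0]]
term = Q
for _ in range(200):
    P = mat_add(P, term)
    term = mat_mul(mat_mul(transpose(A), term), A)

# Verify the Lyapunov identity A^T P A - P = -Q (so A^T P A - P < 0 holds).
residual = mat_add(mat_mul(mat_mul(transpose(A), P), A),
                   [[-P[i][j] for j in range(2)] for i in range(2)])
print(residual[0][0] + Q[0][0])   # ~0
# P is positive definite: both leading minors are positive.
print(P[0][0] > 0 and P[0][0] * P[1][1] - P[0][1] * P[1][0] > 0)
```

In the papers above the same feasibility question is posed for richer, parameter-dependent inequalities and handed to an LMI solver; the point here is only that a feasible P is itself the design certificate.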
The combinatorial optimization problem is posed in the form of a decision tree, the structure of which enforces the required group coordination and precedence for cooperatively performing the multiple tasks. For path planning, a Dubins car model is used so that the vehicles' minimum-turning-radius constraint is taken into account. Due to the prohibitive computational complexity of the problem, exhaustive enumeration of all the assignments encoded in the tree is not feasible. The proposed optimization algorithm is initialized by a best-first search, and candidate optimal solutions serve as a monotonically decreasing upper bound for the assignment cost. Euclidean distances are used for estimating the path length encoded in branches of the tree that have not yet been evaluated by the computationally intensive Dubins optimization subroutine. This provides a lower bound for the cost of unevaluated assignments. We apply these upper and lower bounding procedures iteratively on active subsets within the feasible set, enabling efficient pruning of the solution tree. Using Monte Carlo simulations, the performance of the search algorithm is analyzed for two different cost functions and different limits on the vehicles' minimum turn radius. It is shown that the selection of the cost function and the limit have a considerable effect on the level of cooperation between the vehicles. The proposed deterministic search method can be applied online to different-sized problems. For small-sized problems, it provides the optimal solution. For large-sized problems, it provides an immediate feasible solution that improves over the algorithm's run time. When the proposed method is applied offline, it can be used to obtain the optimal solution, which can be used to evaluate the performance of other sub-optimal search methods. Copyright © 2007 John Wiley & Sons, Ltd.
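The upper/lower bounding scheme of this abstract can be sketched on a one-vehicle toy problem; the fixed turn penalty below is a hypothetical stand-in for the Dubins path computation, and the instance data are illustrative:

```python
# One-vehicle sketch of the bounding scheme: the first complete assignments
# found give a monotonically decreasing upper bound (the incumbent), while
# Euclidean distances underestimate true path costs and give a lower bound
# used to prune subtrees. A fixed per-leg turn penalty is a hypothetical
# stand-in for the expensive Dubins path computation; data are illustrative.

from math import dist

TURN_PENALTY = 0.5  # hypothetical surcharge; Dubins paths are >= straight lines

START = (0.0, 0.0)
TASKS = [(1.0, 0.0), (2.0, 0.0), (0.0, 1.0)]

def true_cost(order, start, tasks):
    """'Exact' path cost: Euclidean legs plus the per-leg turn penalty."""
    pts = [start] + [tasks[i] for i in order]
    return sum(dist(pts[i], pts[i + 1]) + TURN_PENALTY
               for i in range(len(pts) - 1))

def lower_bound(partial, remaining, start, tasks):
    """Optimistic cost: Euclidean prefix plus, for each unvisited task,
    its cheapest possible straight-line approach."""
    pts = [start] + [tasks[i] for i in partial]
    cost = sum(dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
    for r in remaining:
        others = ([start] + [tasks[i] for i in partial]
                  + [tasks[j] for j in remaining if j != r])
        cost += min(dist(tasks[r], o) for o in others)
    return cost

def search(start, tasks):
    """Depth-first tree search with incumbent-based pruning."""
    best = {"cost": float("inf"), "order": None}

    def recurse(partial, remaining):
        if not remaining:
            c = true_cost(partial, start, tasks)
            if c < best["cost"]:
                best["cost"], best["order"] = c, list(partial)
            return
        if lower_bound(partial, remaining, start, tasks) >= best["cost"]:
            return  # even an optimistic completion cannot beat the incumbent
        for t in remaining:
            recurse(partial + [t], [r for r in remaining if r != t])

    recurse([], list(range(len(tasks))))
    return best["order"], best["cost"]

print(search(START, TASKS))
```

Because the lower bound never overestimates the true completion cost, pruning on it cannot discard the optimum, which is the same guarantee the paper's Euclidean estimates provide for the Dubins costs.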
[source]

An annotated bibliography of GRASP, Part II: Applications
INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 2 2009, Paola Festa

A greedy randomized adaptive search procedure (GRASP) is a metaheuristic for combinatorial optimization. It is a multi-start or iterative process, in which each GRASP iteration consists of two phases: a construction phase, in which a feasible solution is produced, and a local search phase, in which a local optimum in the neighborhood of the constructed solution is sought. Since 1989, numerous papers on the basic aspects of GRASP, as well as enhancements to the basic metaheuristic, have appeared in the literature. GRASP has been applied to a wide range of combinatorial optimization problems, ranging from scheduling and routing to drawing and turbine balancing. This is the second of two papers with an annotated bibliography of the GRASP literature from 1989 to 2008. In the companion paper, algorithmic aspects of GRASP are surveyed. In this paper, we cover the literature where GRASP is applied to scheduling, routing, logic, partitioning, location, graph theory, assignment, manufacturing, transportation, telecommunications, biology and related fields, automatic drawing, power systems, and VLSI design. [source]

An annotated bibliography of GRASP, Part I: Algorithms
INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 1 2009, Paola Festa

A greedy randomized adaptive search procedure (GRASP) is a metaheuristic for combinatorial optimization. It is a multi-start or iterative process, in which each GRASP iteration consists of two phases: a construction phase, in which a feasible solution is produced, and a local search phase, in which a local optimum in the neighborhood of the constructed solution is sought. Since 1989, numerous papers on the basic aspects of GRASP, as well as enhancements to the basic metaheuristic, have appeared in the literature.
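The two-phase GRASP iteration just described can be sketched on a toy set-cover instance; the instance data, RCL rule, and parameter values are illustrative, not from either bibliography:

```python
# Minimal two-phase GRASP sketch on a toy set-cover instance: construction
# builds a feasible cover by picking at random from a restricted candidate
# list (RCL) of the greediest sets; local search drops redundant sets.
# Instance data and the alpha parameter are illustrative.

import random

UNIVERSE = set(range(10))
SETS = {
    "a": {0, 1, 2, 3},
    "b": {3, 4, 5},
    "c": {5, 6, 7},
    "d": {7, 8, 9},
    "e": {0, 4, 8},
    "f": {1, 6, 9},
}

def construct(alpha=0.5, rng=random):
    """Greedy randomized construction: always returns a feasible cover."""
    uncovered, cover = set(UNIVERSE), []
    while uncovered:
        gains = {name: len(elems & uncovered) for name, elems in SETS.items()
                 if name not in cover and elems & uncovered}
        best = max(gains.values())
        rcl = [n for n, g in gains.items() if g >= alpha * best]
        choice = rng.choice(rcl)          # randomized greedy step
        cover.append(choice)
        uncovered -= SETS[choice]
    return cover

def local_search(cover):
    """Drop any set whose elements are covered by the rest."""
    for name in list(cover):
        rest = (set().union(*(SETS[n] for n in cover if n != name))
                if len(cover) > 1 else set())
        if UNIVERSE <= rest:
            cover.remove(name)
    return cover

def grasp(iterations=30, seed=1):
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        sol = local_search(construct(rng=rng))
        if best is None or len(sol) < len(best):
            best = sol
    return best

print(grasp())
```

Each iteration is independent, which is what makes GRASP a multi-start method: the best solution over all iterations is returned.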
GRASP has been applied to a wide range of combinatorial optimization problems, ranging from scheduling and routing to drawing and turbine balancing. This is the first of two papers with an annotated bibliography of the GRASP literature from 1989 to 2008. This paper covers algorithmic aspects of GRASP. [source]

Comparison of linear-scaling semiempirical methods and combined quantum mechanical/molecular mechanical methods for enzymic reactions
JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 14 2002

QM/MM methods have been developed as a computationally feasible solution to QM simulation of chemical processes, such as enzyme-catalyzed reactions, within a more approximate MM representation of the condensed-phase environment. However, there has been no independent method for checking the quality of this representation, especially for highly nonisotropic protein environments such as those surrounding enzyme active sites. Hence, the validity of QM/MM methods is largely untested. Here we use the possibility of performing all-QM calculations at the semiempirical PM3 level with a linear-scaling method (MOZYME) to assess the performance of a QM/MM method (PM3/AMBER94 force field). Using two model pathways for the hydride-ion transfer reaction of the enzyme dihydrofolate reductase studied previously (Titmuss et al., Chem Phys Lett 2000, 320, 169-176), we have analyzed the reaction energy contributions (QM, QM/MM, and MM) from the QM/MM results and compared them with analogous-region components calculated via an energy partitioning scheme implemented in MOZYME. This analysis further divided the MOZYME components into Coulomb, resonance and exchange energy terms. For the model in which the MM coordinates are kept fixed during the reaction, we find that the MOZYME and QM/MM total energy profiles agree very well, but that there are significant differences in the energy components.
Most significantly, there is a large change (~16 kcal/mol) in the MOZYME MM component due to polarization of the MM region surrounding the active site, which arises mostly from MM atoms close to (<10 Å) the active-site QM region and which is not modelled explicitly by our QM/MM method. However, for the model where the MM coordinates are allowed to vary during the reaction, we find large differences in the MOZYME and QM/MM total energy profiles, with a discrepancy of 52 kcal/mol between the relative reaction (product minus reactant) energies. This is largely due to a difference in the MM energies of 58 kcal/mol, of which we can attribute ~40 kcal/mol to geometry effects in the MM region and the remainder, as before, to MM region polarization. Contrary to the fixed-geometry model, there is no correlation of the MM energy changes with distance from the QM region, nor are they contributed by only a few residues. Overall, the results suggest that merely extending the size of the QM region in the QM/MM calculation is not a universal solution to the differences between the MOZYME and QM/MM methods. They also suggest that attaching physical significance to MOZYME Coulomb, resonance and exchange components is problematic. Although we conclude that it would be possible to reparameterize the QM/MM force field to reproduce MOZYME energies, a better way to account for both the effects of the protein environment and known deficiencies in semiempirical methods would be to parameterize the force field based on data from DFT or ab initio QM linear-scaling calculations. Such a force field could be used efficiently in MD simulations to calculate free energies. © 2002 Wiley Periodicals, Inc. J Comput Chem 23: 1314-1322, 2002 [source]

Tool coatings for dry machining (Werkzeugbeschichtungen für die Trockenbearbeitung)
MATERIALWISSENSCHAFT UND WERKSTOFFTECHNIK, Issue 10 2006, E. Abele

Keywords: PVD coating; tribology; dry machining; tool wear

During dry machining, a strain collective consisting of mechanical, thermal, and chemical loads is imposed upon the cutting edge. Compared to conventional machining using cooling lubrication fluids, the loads are increased in dry cutting. A feasible solution to protect the cutting edge from thermal wear, abrasion, and tribo-oxidation is the application of hard PVD coatings. Newly developed CrxAlyYzN, CrxAlyBzN and CrxAlySizN PVD coatings were evaluated both in tribological model tests and in machining tests concerning their suitability for dry cutting applications. Herein, the coating technology used and the coating properties are described in detail. The measured tool wear and the process forces give further hints for the optimization of the coating systems. [source]

Extreme point characterizations for infinite network flow problems
NETWORKS: AN INTERNATIONAL JOURNAL, Issue 4 2006, H. Edwin Romeijn

We study capacitated network flow problems with demands defined on a countably infinite collection of nodes having finite degree. This class of network flow models includes, for example, all infinite horizon deterministic dynamic programs with finite action sets, because these are equivalent to the problem of finding a shortest path in an infinite directed network. We derive necessary and sufficient conditions for flows to be extreme points of the set of feasible flows. Under an additional regularity condition met by all such problems with integer data, we show that a feasible solution is an extreme point if and only if it contains neither a cycle nor a doubly-infinite path consisting of free arcs (an arc is free if its flow is strictly between its upper and lower bounds). We employ this result to show that the extreme points can be characterized by specifying a basis. Moreover, we establish the integrality of extreme point flows whenever node demands and arc capacities are integer valued. We illustrate our results with an application to an infinite horizon economic lot-sizing problem. © 2006 Wiley Periodicals, Inc. NETWORKS, Vol. 48(4), 209-222, 2006 [source]

The revisit of QoS routing based on non-linear Lagrange relaxation
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 1 2007, Gang Feng

The development of efficient quality of service (QoS) routing algorithms in a high-speed networking or next-generation IP networking environment is a very important and at the same time very difficult task, due to the need to provide divergent services with multiple QoS requirements. Recently, a heuristic algorithm, H_MCOP, which is based on a non-linear Lagrange relaxation (NLR) technique, has been proposed to resolve the contradiction between time complexity and the quality of the solution.
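The nonlinear-aggregate idea behind H_MCOP-style heuristics can be illustrated on a toy two-constraint instance: normalize each path weight by its bound and take the maximum, so a path is feasible exactly when the aggregate is at most 1. The graph below is small enough to enumerate, sidestepping the heuristic search itself; all data are illustrative:

```python
# Multi-constrained path (MCP) feasibility via a nonlinear aggregate, in the
# spirit of H_MCOP: each directed link carries two additive weights (say,
# delay and cost); a path is feasible when every total stays within its
# bound, i.e. when the max normalized weight is <= 1. Illustrative data.

# edges: (u, v) -> (delay, cost)
EDGES = {
    ("s", "a"): (1.0, 4.0), ("a", "t"): (1.0, 4.0),
    ("s", "b"): (3.0, 1.0), ("b", "t"): (3.0, 1.0),
    ("a", "b"): (1.0, 1.0),
}
BOUNDS = (5.0, 6.0)   # (delay bound, cost bound)

def neighbours(u):
    return [v for (x, v) in EDGES if x == u]

def path_weights(path):
    legs = [EDGES[(path[i], path[i + 1])] for i in range(len(path) - 1)]
    return sum(w[0] for w in legs), sum(w[1] for w in legs)

def g(path):
    """Nonlinear aggregate: the max normalized weight. Feasible iff <= 1."""
    d, c = path_weights(path)
    return max(d / BOUNDS[0], c / BOUNDS[1])

def all_simple_paths(u, t, prefix=None):
    prefix = (prefix or []) + [u]
    if u == t:
        yield prefix
        return
    for v in neighbours(u):
        if v not in prefix:
            yield from all_simple_paths(v, t, prefix)

best = min(all_simple_paths("s", "t"), key=g)
print(best, g(best))
```

On this instance both direct routes violate one bound each, and only the mixed route s-a-b-t satisfies both; minimizing g steers the search toward exactly such balanced paths, which is the intuition the NLR-based heuristics exploit.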
Even though H_MCOP has demonstrated an outstanding capability of finding feasible solutions to the multi-constrained path (MCP) problem, it has not exploited the full capability that an NLR-based technique could offer. In this paper, we propose a new NLR-based heuristic called NLR_MCP, in which the search process is interpreted from a probability perspective. Simulation results indicate that NLR_MCP can achieve a higher probability of finding feasible solutions than H_MCOP. We also verify that the performance improvement of an MCP heuristic has a tremendous impact on the performance of a higher-level heuristic that uses an MCP heuristic as the basic step. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Highway alignment optimization through feasible gates
JOURNAL OF ADVANCED TRANSPORTATION, Issue 2 2007, Min Wook Kang

An efficient optimization approach, called feasible gate (FG), is developed to enhance the computational efficiency and solution quality of the previously developed highway alignment optimization (HAO) model. This approach seeks to realistically represent various user preferences and environmentally sensitive areas and consider them along with geometric design constraints in the optimization process. This is done by avoiding the generation of infeasible solutions that violate various constraints and thus focusing the search on the feasible solutions. The proposed method is simple, but improves significantly the model's computation time and solution quality. Such improvements are demonstrated with two test examples from a real road project. [source]

Rates of Acute Care Admissions for Frail Older People Living with Met Versus Unmet Activity of Daily Living Needs
JOURNAL OF AMERICAN GERIATRICS SOCIETY, Issue 2 2006, Laura P. Sands PhD

OBJECTIVES: To determine whether older people who do not have help for their activity of daily living (ADL) disabilities are at higher risk for acute care admissions and whether entry into a program that provides for these needs decreases this risk. DESIGN: A longitudinal cohort study. SETTING: Thirteen nationwide sites of the Program of All-inclusive Care for the Elderly (PACE). PACE provides comprehensive medical and long-term care to community-living older adults. PARTICIPANTS: Two thousand nine hundred forty-three PACE enrollees with one or more ADL dependencies. MEASUREMENTS: Unmet needs were defined as the absence of paid or unpaid assistance for ADL disabilities before PACE enrollment. Hospital admissions in the 6 months before PACE enrollment and acute admissions in the first 6 weeks and the 7th through 12th weeks after enrollment were determined. RESULTS: Those who lived with unmet ADL needs before enrollment were more likely to have a hospital admission before PACE enrollment (odds ratio (OR)=1.28, 95% confidence interval (CI)=1.01-1.63) and an acute admission in the first 6 weeks after enrollment (OR=1.45, 95% CI=1.00-2.09), but not after 6 weeks of receiving PACE services (OR=0.86, 95% CI=0.53-1.40). CONCLUSION: Frail older people who live without needed help for their ADL disabilities have higher rates of admissions while they are living with unmet ADL needs, but not after their needs are met. With state governments under increasing pressure to develop fiscally feasible solutions for caring for disabled older people, it is important that they be aware of the potential health consequences of older adults living without needed ADL assistance. [source]

A novel search framework for multi-stage process scheduling with tight due dates
AICHE JOURNAL, Issue 8 2010, Yaohua He

This article improves the original genetic algorithm developed by He and Hui (Chem Eng Sci 2007; 62:1504-1527) and proposes a novel global search framework (GSF) for large-size multi-stage process scheduling problems. This work first constructs a comprehensive set of position selection rules according to the impact factor analysis presented by He and Hui (in this publication in 2007), and then selects suitable rules for schedule synthesis. To cope with infeasibility emerging during the search, a penalty function is adopted to force the algorithm to approach the feasible solutions. Large-size problems with tight due dates are challenging for current solution techniques. Inspired by the gradient used in numerical analysis, we treat the deviation existing among computational tests of the algorithm as an evolutionary gradient. Based on this concept, a GSF is laid out to fully utilize the search ability of the current algorithm. Numerical experiments indicate that the proposed search framework solves such problems with satisfactory solutions. © 2010 American Institute of Chemical Engineers AIChE J, 2010 [source]

Efficient optimization strategies with constraint programming
AICHE JOURNAL, Issue 2 2010, Prakash R. Kotecha

In this article, we propose novel strategies for the efficient determination of multiple solutions for single-objective, as well as globally optimal Pareto fronts for multiobjective, optimization problems using constraint programming (CP). In particular, we propose strategies to determine (i) all the multiple (globally) optimal solutions of a single-objective optimization problem, (ii) the K best feasible solutions of a single-objective optimization problem, and (iii) globally optimal Pareto fronts (including nonconvex Pareto fronts) along with their multiple realizations for multiobjective optimization problems. It is shown here that the proposed strategy for determining the K best feasible solutions can be tuned as per the requirement of the user to determine either the K best distinct or nondistinct solutions.
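One plausible reading of the distinct/nondistinct distinction can be sketched by brute force on a toy 0/1 knapsack (a CP solver achieves this without full enumeration); the instance data are illustrative:

```python
# Sketch of the K-best idea on a toy 0/1 knapsack: enumerate the feasible
# solutions and keep the K best, either as the K best solutions (objective
# values may tie) or as solutions with K best *distinct* objective values.
# This brute force stands in for the CP search strategies; data are
# illustrative.

from itertools import product

WEIGHTS = [3, 3, 4, 5]
VALUES = [4, 4, 5, 6]
CAPACITY = 8

def feasible():
    for x in product((0, 1), repeat=len(WEIGHTS)):
        if sum(w * xi for w, xi in zip(WEIGHTS, x)) <= CAPACITY:
            yield x

def value(x):
    return sum(v * xi for v, xi in zip(VALUES, x))

def k_best(k, distinct=False):
    sols = sorted(feasible(), key=value, reverse=True)
    if not distinct:
        return sols[:k]
    out, seen = [], set()
    for x in sols:
        if value(x) not in seen:
            seen.add(value(x))
            out.append(x)
        if len(out) == k:
            break
    return out

print([(x, value(x)) for x in k_best(3)])
```

On this instance k_best(3) returns solutions with objective values 10, 10, 9 (two optima tie), while k_best(3, distinct=True) returns values 10, 9, 8, which also illustrates why recovering all realizations of each objective value can matter.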
Similarly, the strategy for determining globally optimal Pareto fronts can also be modified as per the requirement of the user to determine either only the distinct set of Pareto points or the Pareto points along with all their multiple realizations. All the proposed techniques involve appropriately modifying the search techniques and are shown to be computationally efficient in that they do not require successive re-solving of the problem to obtain the required solutions. This work therefore convincingly addresses the issue of efficiently determining globally optimal Pareto fronts; in addition, it also guarantees the determination of all the possible realizations associated with each Pareto point. The uncovering of such solutions can greatly aid the designer in making informed decisions. The proposed approaches are demonstrated via two case studies, which are nonlinear combinatorial optimization problems taken from the area of sensor network design. © 2009 American Institute of Chemical Engineers AIChE J, 2010 [source]

Employment, social inclusion and mental health
JOURNAL OF PSYCHIATRIC & MENTAL HEALTH NURSING, Issue 1 2000, J. Evans BSc Econ (Hons) RMN Dip (Psychosocial Management of Psychosis)

Whereas unemployment is clearly linked to mental health problems, employment can improve quality of life, mental health, social networks and social inclusion. Yet in the UK only 15% of people with serious mental health problems are employed, despite an overwhelming consensus from surveys, case studies and personal accounts that users want to work. This paper aims to challenge common misconceptions surrounding employment, work and mental health problems. Drawing on a range of research evidence and legislative guidance, it discusses significant barriers to work and proposes feasible solutions.
The need for mental health staff and services to become involved in the provision of work opportunities is considered, as is the vital role they can play in changing communities. The potency of work as a vehicle for improving the social inclusion and community tenure of people with mental health problems is highlighted. [source]

Effective and tolerant wide-range solutions for an ultra-wideband indented-disk monopole antenna
MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 4 2006, Chin-Ju Pan

Solutions obtained from a wide range of parameters for the optimized design of the antenna are presented. Wide-bandwidth impedance matching can be achieved using a simply designed indented-disk monopole antenna for ultra-wideband (UWB) applications. The design is based on a refined genetic algorithm (GA)-based method to improve the solution to the UWB antenna design problem. A new optimization process is used to determine a wide range of feasible solutions rather than just the few solutions that can be obtained by trial and error. The wide range of solutions accommodates manufacturing tolerances effectively. © 2006 Wiley Periodicals, Inc. Microwave Opt Technol Lett 48: 693-695, 2006; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.21445 [source]

Toward an improved legislative framework for China's land degradation control
NATURAL RESOURCES FORUM, Issue 1 2008, Zhou Ke

The Chinese government has recently been attaching increasing importance to the application of effective legal tools to tackle land degradation (LD) issues. Based on the concept of sustainable development, China began developing, and reaping the benefits of, environmental and natural resources legislation, including LD control regulations, in the 1990s.
In the past three years, some central-western provinces in China have been implementing a "People's Republic of China/Global Environment Facility (PRC/GEF) Partnership on LD Control of Dryland Ecosystems", which is based on an integrated ecosystem management (IEM) approach. IEM is designed to achieve a balanced, scientific and participatory approach to natural resources management, which creates the potential to improve the quality of Chinese environmental law and policy procedures. The paper examines in detail the existing Chinese national laws and regulations pertinent to LD control across nine areas (land, desertification, soil erosion, grassland, forest, water, agriculture, wild animals and plants, and environmental protection) against IEM principles and basic legal elements. The main objective is to identify problems and provide feasible solutions and recommendations for the improvement of the existing laws and regulations. The authors conclude that the development of an improved national legislative framework is essential if LD control is to be successfully achieved. The paper is partly based on Component 1, Improving Policies, Laws and Regulations for Land Degradation Control, under the PRC/GEF Partnership on Land Degradation in Dryland Ecosystems (TA 4357). [source]

Next-Generation Architecture to Support Simulation-Based Acquisition
NAVAL ENGINEERS JOURNAL, Issue 4 2000
Dr. B. Chadha
ABSTRACT The ability to make good design decisions early is a significant driver for simulation-based acquisition to effectively lower life-cycle cost and cycle time. Effective simulation-based acquisition processes are achieved by building virtual prototypes that enable one to analyze the impact of decisions. Virtual prototypes need to support a comprehensive set of analyses that will be performed on the product; hence, all aspects of product data and behavior need to be represented.
Building virtual prototypes of complex systems designed by a multi-organizational team requires new architectural concepts and redesigned processes. Implementation of these new architectures is complex, and leveraging commercial technologies is necessary to achieve feasible solutions. One must also carefully consider the state of current commercial technologies and frameworks, as well as the organizational and cultural aspects of the organizations that use these systems. This paper describes key architectural principles that one must address for a cost-effective implementation. The paper then discusses key architectural concepts and trade-offs that are necessary to support virtual prototypes of complex systems. [source]

A branch-and-price algorithm for parallel machine scheduling with time windows and job priorities
NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 1 2006
Jonathan F. Bard
Abstract This paper presents a branch-and-price algorithm for scheduling n jobs on m nonhomogeneous parallel machines with multiple time windows. An additional feature of the problem is that each job falls into one of several priority classes and may require two operations. The objective is to maximize the weighted number of jobs scheduled, where a job in a higher priority class has "infinitely" more weight or value than a job in a lower priority class. The methodology makes use of a greedy randomized adaptive search procedure (GRASP) to find feasible solutions during implicit enumeration and a two-cycle elimination heuristic when solving the pricing subproblems. Extensive computational results are presented based on data from an application involving the use of communications relay satellites. Many 100-job instances that were believed to be beyond the capability of exact methods were solved within minutes. © 2005 Wiley Periodicals, Inc.
Naval Research Logistics, 2006 [source]

Cycle-based algorithms for multicommodity network flow problems with separable piecewise convex costs
NETWORKS: AN INTERNATIONAL JOURNAL, Issue 2 2008
Mauricio C. de Souza
Abstract We present cycle-based algorithmic approaches to find local minima of a nonconvex and nonsmooth model for capacity expansion of a network supporting multicommodity flows. By exploiting complete optimality conditions for local minima, we give a convergence analysis of the negative-cost cycle canceling method. The cycle canceling method is embedded in a tabu search strategy to explore the solution space beyond the first local optimum. Upon reaching a local optimum, the idea is to accept a cost-increasing solution by pushing flow around a positive-cost cycle, and then to apply the cycle canceling method, incorporating tabu search memory structures, to find high-quality local optima. Computational experiments on instances from the literature show that the tabu search algorithm can significantly improve feasible solutions obtained by the local optimization procedure, and that it outperforms the capacity and flow assignment heuristic in terms of solution quality. © 2007 Wiley Periodicals, Inc. NETWORKS, 2008 [source]

Upper bounds for single-source uncapacitated concave minimum-cost network flow problems
NETWORKS: AN INTERNATIONAL JOURNAL, Issue 4 2003
Dalila B. M. M. Fontes
Abstract In this paper, we describe a heuristic algorithm based on local search for the Single-Source Uncapacitated (SSU) concave Minimum-Cost Network Flow Problem (MCNFP). We present a new technique for creating different and informed initial solutions to restart the local search, thereby improving the quality of the resulting feasible solutions (upper bounds). Computational results on different classes of test problems indicate the effectiveness of the proposed method in generating basic feasible solutions for the SSU concave MCNFP that are very near a global optimum.
A maximum upper-bound percentage error of 0.07% is reported for all problem instances for which an optimal solution has been found by a branch-and-bound method. © 2003 Wiley Periodicals, Inc. [source]

ROBUST OUTPUT FEEDBACK CONTROLLER DESIGN WITH COVARIANCE AND DISC CLOSED-LOOP POLE CONSTRAINTS
ASIAN JOURNAL OF CONTROL, Issue 3 2005
Li Yu
ABSTRACT This paper is concerned with the problem of robust output feedback controller design for a class of linear discrete-time systems with norm-bounded uncertainty. The objective is to design a controller such that the closed-loop poles are assigned within a specified disc and the steady-state regulated output covariance is guaranteed to be less than a given upper bound. Using a linear matrix inequality (LMI) approach, the existence conditions for such controllers are derived, and a parametrized characterization of a set of desired controllers (if they exist) is presented in terms of the feasible solutions to a set of LMIs. A procedure is given to select a suitable output feedback controller that minimizes the desired control effort. [source]
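The disc pole constraint in the last abstract above has a simple numerical characterization: the poles of a closed-loop matrix A_cl all lie in the disc D(q, r) exactly when the shifted and scaled matrix (A_cl - q*I)/r is Schur stable (spectral radius less than 1). The sketch below illustrates that condition with a hypothetical `poles_in_disc` helper and an invented example matrix; it is only an eigenvalue check, not the paper's LMI-based synthesis, which certifies the constraint without computing eigenvalues.

```python
import numpy as np

def poles_in_disc(A_cl, center=0.0, radius=1.0):
    """True iff every eigenvalue (pole) of A_cl lies strictly
    inside the disc with the given center and radius."""
    # Poles of A_cl lie in D(center, radius) exactly when
    # (A_cl - center*I)/radius has spectral radius < 1.
    n = A_cl.shape[0]
    M = (A_cl - center * np.eye(n)) / radius
    return bool(np.max(np.abs(np.linalg.eigvals(M))) < 1.0)

# Hypothetical closed-loop matrix with poles 0.5 and 0.6.
A_cl = np.array([[0.5, 0.1],
                 [0.0, 0.6]])
print(poles_in_disc(A_cl, center=0.4, radius=0.3))  # True
print(poles_in_disc(A_cl, center=0.0, radius=0.4))  # False
```

In LMI form the same condition becomes the existence of a symmetric P > 0 with (1/r²)(A_cl - qI)ᵀP(A_cl - qI) - P < 0, which is what makes the constraint tractable inside a controller-synthesis program.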