Algorithms


Kinds of Algorithms

  • adaptive algorithms
  • allocation algorithms
  • approximation algorithms
  • assignment algorithms
  • carlo algorithms
  • classification algorithms
  • clinical algorithms
  • complex algorithms
  • computational algorithms
  • computer algorithms
  • control algorithms
  • correction algorithms
  • design algorithms
  • detection algorithms
  • developed algorithms
  • diagnostic algorithms
  • different algorithms
  • efficient algorithms
  • estimation algorithms
  • evolutionary algorithms
  • exact algorithms
  • existing algorithms
  • fast algorithms
  • fusion algorithms
  • genetic algorithms
  • heuristic algorithms
  • identification algorithms
  • image processing algorithms
  • integration algorithms
  • iterative algorithms
  • learning algorithms
  • machine learning algorithms
  • monte carlo algorithms
  • new algorithms
  • novel algorithms
  • numerical algorithms
  • optimal algorithms
  • optimization algorithms
  • other algorithms
  • parallel algorithms
  • practical algorithms
  • processing algorithms
  • programming algorithms
  • proposed algorithms
  • published algorithms
  • reconstruction algorithms
  • recursive algorithms
  • resource allocation algorithms
  • routing algorithms
  • scheduling algorithms
  • scoring algorithms
  • search algorithms
  • selection algorithms
  • several algorithms
  • solution algorithms
  • standard algorithms
  • time algorithms
  • tracking algorithms
  • training algorithms
  • treatment algorithms
  • used algorithms
  • well-known algorithms

  • Terms modified by Algorithms

  • algorithms used

  • Selected Abstracts


    APPLYING MACHINE LEARNING TO LOW-KNOWLEDGE CONTROL OF OPTIMIZATION ALGORITHMS

    COMPUTATIONAL INTELLIGENCE, Issue 4 2005
    Tom Carchrae
    This paper addresses the question of allocating computational resources among a set of algorithms to achieve the best performance on scheduling problems. Our primary motivation in addressing this problem is to reduce the expertise needed to apply optimization technology. Therefore, we investigate algorithm control techniques that make decisions based only on observations of the improvement in solution quality achieved by each algorithm. We call our approach "low knowledge" since it does not rely on complex prediction models, either of the problem domain or of algorithm behavior. We show that a low-knowledge approach results in a system that achieves significantly better performance than all of the pure algorithms without requiring additional human expertise. Furthermore, the low-knowledge approach achieves performance equivalent to a perfect high-knowledge classification approach. [source]
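
    To make the low-knowledge idea concrete, here is a minimal sketch (not the authors' controller) of a slice-based scheduler that feeds computation to whichever black-box optimizer has shown the largest recent improvement in solution quality. The function names, the greedy allocation rule, and the toy random-search optimizers are illustrative assumptions.

```python
import random

def low_knowledge_control(algorithms, total_slices, slice_iters=100):
    """Allocate compute among black-box optimizers using only the observed
    improvement in solution quality -- no model of the problem or algorithms.
    `algorithms` maps a name to a callable step(iters) -> best cost so far."""
    best = {name: step(slice_iters) for name, step in algorithms.items()}  # warm-up slice each
    improvement = {name: 0.0 for name in algorithms}
    for _ in range(total_slices - len(algorithms)):
        # Favour the algorithm with the largest recent improvement; random tie-break.
        name = max(algorithms, key=lambda n: (improvement[n], random.random()))
        new_best = algorithms[name](slice_iters)
        improvement[name] = best[name] - new_best
        best[name] = new_best
    winner = min(best, key=best.get)
    return winner, best[winner]

# Toy usage: two random-search "algorithms" minimising x**2 with different step sizes.
def make_random_search(step):
    state = {"x": 10.0, "best": 100.0}
    def run(iters):
        for _ in range(iters):
            cand = state["x"] + random.uniform(-step, step)
            if cand * cand < state["best"]:
                state["x"], state["best"] = cand, cand * cand
        return state["best"]
    return run

print(low_knowledge_control(
    {"coarse": make_random_search(1.0), "fine": make_random_search(0.1)},
    total_slices=20))
```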


    EVALUATION OF NUMERICAL ALGORITHMS FOR THE INSTRUMENTAL MEASUREMENT OF BOWL-LIFE AND CHANGES IN TEXTURE OVER TIME FOR READY-TO-EAT BREAKFAST CEREALS

    JOURNAL OF TEXTURE STUDIES, Issue 6 2002
    C. M. GREGSON
    ABSTRACT Cornflakes were immersed in milk, rapidly drained and compressed in a TA.XT2i texture analyser (Stable Micro Systems, UK) fitted with an Ottawa Cell. The data were analyzed numerically, yielding nine instrumental crispness parameters. Bowl-life was determined using an untrained sensory panel. Three models (Weibull, exponential and modified exponential) successfully modeled the change in mechanical properties as a function of immersion time. An instrumental method of measuring bowl-life is described that measures peak force at a range of immersion times and models the data with the Weibull equation. This method may be a valuable asset to the breakfast cereals industry. [source]
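
    As a rough illustration of the Weibull-based bowl-life measurement, the sketch below fits a Weibull-type softening curve to peak force versus immersion time. The parameterisation, the synthetic data, and the half-softening definition of bowl-life are assumptions for illustration; the paper's exact equation and fitted values are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_decay(t, f0, f_inf, tau, beta):
    """Peak force vs. immersion time; f0 = dry force, f_inf = fully soaked plateau."""
    return f_inf + (f0 - f_inf) * np.exp(-(t / tau) ** beta)

# Hypothetical peak-force readings (N) at several immersion times (s).
t = np.array([0.0, 30, 60, 120, 240, 480], dtype=float)
force = np.array([52.0, 38.5, 27.0, 15.2, 8.1, 6.0])

params, _ = curve_fit(weibull_decay, t, force, p0=[50, 5, 100, 1.0], maxfev=10000)
f0, f_inf, tau, beta = params
# Illustrative "bowl-life": time at which force falls halfway between f0 and f_inf.
bowl_life = tau * np.log(2.0) ** (1.0 / beta)
print(f"tau={tau:.1f} s  beta={beta:.2f}  bowl-life ~ {bowl_life:.0f} s")
```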


    REVERSIBLE JUMP MARKOV CHAIN MONTE CARLO METHODS AND SEGMENTATION ALGORITHMS IN HIDDEN MARKOV MODELS

    AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 2 2010
    R. Paroli
    Summary We consider hidden Markov models with an unknown number of regimes for the segmentation of the pixel intensities of digital images that consist of a small set of colours. New reversible jump Markov chain Monte Carlo algorithms to estimate both the dimension and the unknown parameters of the model are introduced. Parameters are updated by random walk Metropolis–Hastings moves, without updating the sequence of the hidden Markov chain. The segmentation (i.e. the estimation of the hidden regimes) is a further aim and is performed by means of a number of competing algorithms. We apply our Bayesian inference and segmentation tools to digital images, which are linearized through the Peano–Hilbert scan, and perform experiments and comparisons on both synthetic images and a real brain magnetic resonance image. [source]
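
    The dimension-changing (reversible jump) moves are the technical heart of the paper and are not reproduced here; the sketch below only illustrates the fixed-dimension random walk Metropolis–Hastings update mentioned in the abstract, applied to a generic log-posterior. The toy bivariate normal target is an assumption for demonstration.

```python
import numpy as np

def random_walk_mh(log_post, theta0, n_iter=5000, step=0.1, seed=0):
    """Fixed-dimension random-walk Metropolis-Hastings over a parameter vector.
    (The paper's reversible-jump sampler additionally proposes changes of model
    dimension, which are omitted here.)"""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
            theta, lp = prop, lp_prop
        samples[i] = theta
    return samples

# Toy target: standard bivariate normal log-density (up to a constant).
draws = random_walk_mh(lambda th: -0.5 * np.sum(th**2), theta0=[2.0, -2.0])
print(draws.mean(axis=0), draws.std(axis=0))
```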


    Human motion reconstruction from monocular images using genetic algorithms

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2004
    Jianhui Zhao
    Abstract This paper proposes an optimization approach for human motion recovery from uncalibrated monocular images containing unrestricted human movements. A 3D skeleton human model based on anatomical knowledge is employed, with biomechanical constraints encoded for the joints. An energy function is defined to represent the deviations between projection features and extracted image features. A reconstruction procedure is developed to adjust the joints and segments of the human body into their proper positions. Genetic algorithms are adopted to search the high-dimensional parameter space effectively by simultaneously considering all the parameters of the human model. The experimental results are analysed using the Deviation Penalty. Copyright © 2004 John Wiley & Sons, Ltd. [source]
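
    A generic real-coded genetic algorithm of the kind the abstract describes might look like the sketch below: candidate poses are parameter vectors constrained to joint-angle bounds, and fitness is an energy function measuring deviation between projected and extracted features. The operators, constants, and toy energy function are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def genetic_minimise(energy, bounds, pop_size=60, generations=200, seed=0):
    """Real-coded GA: tournament selection, blend crossover, Gaussian mutation.
    `bounds` is an (n, 2) array of per-parameter limits (e.g. joint-angle ranges)."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(generations):
        fit = np.array([energy(ind) for ind in pop])
        new = [pop[fit.argmin()].copy()]                       # elitism: keep the best pose
        while len(new) < pop_size:
            # Tournament selection of two parents (best of three random picks each).
            a, b = (pop[min(rng.integers(pop_size, size=3), key=lambda i: fit[i])]
                    for _ in range(2))
            child = a + rng.uniform(-0.25, 1.25) * (b - a)     # blend crossover
            child += rng.normal(0, 0.01, size=child.size) * (hi - lo)  # mutation
            new.append(np.clip(child, lo, hi))                 # enforce joint limits
        pop = np.array(new)
    fit = np.array([energy(ind) for ind in pop])
    return pop[fit.argmin()], fit.min()

# Toy "energy": squared distance of a 3-parameter pose from a target feature vector.
target = np.array([0.3, -0.7, 1.2])
best, e = genetic_minimise(lambda p: np.sum((p - target) ** 2),
                           bounds=[(-np.pi, np.pi)] * 3)
print(best, e)
```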


    Application of Visual Analytics for Thermal State Management in Large Data Centres

    COMPUTER GRAPHICS FORUM, Issue 6 2010
    M. C. Hao
    I.3.3 [Computer Graphics]: Picture/Image Generation - Display Algorithms; H.5.0 [Information Systems]: Information Interfaces and Presentation - General. Abstract Today's large data centres are the computational hubs of the next generation of IT services. With the advent of dynamic smart cooling and rack-level sensing, the need for visual data exploration is growing. If administrators know the rack-level thermal state changes and catch problems in real time, energy consumption can be greatly reduced. In this paper, we apply a cell-based spatio-temporal overall view with high-resolution time series to simultaneously analyze complex thermal state changes over time across hundreds of racks. We employ cell-based visualization techniques for troubleshooting and abnormal state detection. These techniques are based on the detection of sensor temperature relations and events to help identify the root causes of problems. In order to optimize the data centre cooling system performance, we derive new non-overlapped scatter plots to visualize the correlations between the temperatures and chiller utilization. All these techniques have been used successfully to monitor various time-critical thermal states in real-world large-scale production data centres and to derive cooling policies. We are starting to embed these visualization techniques into a handheld device to add mobile monitoring capability. [source]


    Projective Texture Mapping with Full Panorama

    COMPUTER GRAPHICS FORUM, Issue 3 2002
    Dongho Kim
    Projective texture mapping is used to project a texture map onto scene geometry. It has been used in many applications, since it eliminates the assignment of fixed texture coordinates and provides a good method of representing synthetic images or photographs in image-based rendering. But conventional projective texture mapping has limitations in the field of view and the degree of navigation because only simple rectangular texture maps can be used. In this work, we propose the concept of panoramic projective texture mapping (PPTM). It projects cubic or cylindrical panoramas onto the scene geometry. With this scheme, any polygonal geometry can receive the projection of a panoramic texture map, without using fixed texture coordinates or modeling many projective texture mappings. For fast real-time rendering, a hardware-based rendering method is also presented. Applications of PPTM include a panorama viewer similar to QuickTime VR and navigation in the panoramic scene, which can be created by image-based modeling techniques. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Viewing Algorithms; I.3.7 [Computer Graphics]: Color, Shading, Shadowing, and Texture [source]


    Incremental Updates for Rapid Glossy Global Illumination

    COMPUTER GRAPHICS FORUM, Issue 3 2001
    Xavier Granier
    We present an integrated global illumination algorithm including non-diffuse light transport which can handle complex scenes and enables rapid incremental updates. We build on a unified algorithm which uses hierarchical radiosity with clustering and particle tracing for diffuse and non-diffuse transport respectively. We present a new algorithm which chooses between reconstructing specular effects such as caustics on the diffuse radiosity mesh, or special-purpose caustic textures, when high frequencies are present. Algorithms are presented to choose the resolution of these textures and to reconstruct the high-frequency non-diffuse lighting effects. We use a dynamic spatial data structure to restrict the number of particles re-emitted during the local modifications of the scene. By combining this incremental particle trace with a line-space hierarchy for incremental update of diffuse illumination, we can locally modify complex scenes rapidly. We also develop an algorithm which, by permitting slight quality degradation during motion, achieves quasi-interactive updates. We present an implementation of our new method and its application to indoor and outdoor scenes. [source]


    Integrating Messy Genetic Algorithms and Simulation to Optimize Resource Utilization

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 6 2009
    Tao-ming Cheng
    Various resource distribution modeling scenarios were tested in simulation to determine their system performances. Messy genetic algorithm (MGA) operations were then applied in the selection of the best resource utilization schemes based on those performances. A case study showed that this new modeling mechanism, along with the implemented computer program, could not only ease the process of developing optimal resource utilization, but could also improve the system performance of the simulation model. [source]


    Comparison of Two Evolutionary Algorithms for Optimization of Bridge Deck Repairs

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 8 2006
    Hatem Elbehairy
    Decisions about bridge deck maintenance and repair represent complex optimization problems that traditional optimization techniques are often unable to solve. This article introduces an integrated model for bridge deck repairs with detailed life cycle costs of network-level and project-level decisions. Two evolutionary-based optimization techniques that are capable of handling large-size problems, namely Genetic Algorithms and Shuffled Frog Leaping, are then applied on the model to optimize maintenance and repair decisions. Results of both techniques are compared on case study problems with different numbers of bridges. Based on the results, the benefits of the bridge deck management system are illustrated along with various strategies to improve optimization performance. [source]


    Dynamic Optimal Traffic Assignment and Signal Time Optimization Using Genetic Algorithms

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 4 2004
    H. R. Varia
    A simulation-based approach is employed for the case of multiple-origin-multiple-destination traffic flows. The artificial intelligence technique of genetic algorithms (GAs) is used to minimize the overall travel cost in the network with fixed signal timings and optimization of signal timings. The proposed method is applied to the example network and results are discussed. It is concluded that GAs allow the relaxation of many of the assumptions that may be needed to solve the problem analytically by traditional methods. [source]


    Genetic Algorithms for Optimal Urban Transit Network Design

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 3 2003
    Partha Chakroborty
    This article attempts to highlight the effectiveness of genetic algorithm (GA)-based procedures in solving the urban transit network design problem (UTNDP). The article analyzes why traditional methods have problems in solving the UTNDP. The article also suggests procedures to alleviate these problems using a GA-based optimization technique. The thrust of the article is three-fold: (1) to show the effectiveness of GAs in solving the UTNDP, (2) to identify features of the UTNDP that make it a difficult problem for traditional techniques, and (3) to suggest directions, through the presentation of GA-based methodologies for the UTNDP, for the development of GA-based procedures for solving other optimization problems having features similar to the UTNDP. [source]


    Scene Graph and Frame Update Algorithms for Smooth and Scalable 3D Visualization of Simulated Construction Operations

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 4 2002
    Vineet R. Kamat
    One of the prime reasons inhibiting the widespread use of discrete-event simulation in construction planning is the absence of appropriate visual communication tools. Visualizing modeled operations in 3D is arguably the best form of communicating the logic and the inner working of simulation models and can be of immense help in establishing the credibility of analyses. New software development technologies emerge at incredible rates that allow engineers and scientists to create novel, domain-specific applications. The authors capitalized on a computer graphics technology based on the concept of the scene graph to design and implement a general-purpose 3D visualization system that is simulation and CAD-software independent. This system, the Dynamic Construction Visualizer, enables realistic visualization of modeled construction operations and the resulting products and can be used in conjunction with a wide variety of simulation tools. This paper describes the scene graph architecture and the frame updating algorithms used in designing the Dynamic Construction Visualizer. [source]


    Using GIS, Genetic Algorithms, and Visualization in Highway Development

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 6 2001
    Manoj K. Jha
    A model for highway development is presented, which uses geographic information systems (GIS), genetic algorithms (GA), and computer visualization (CV). GIS serves as a repository of geographic information and enables spatial manipulations and database management. GAs are used to optimize highway alignments in a complex search space. CV is a technique used to convey the characteristics of alternative solutions, which can be the basis of decisions. The proposed model implements GIS and GA to find an optimized alignment based on the minimization of highway costs. CV is implemented to investigate the effects of intangible parameters, such as unusual land and environmental characteristics not considered in optimization. Constrained optimization using GAs may be performed at subsequent stages if necessary using feedback received from CVs. Implementation of the model in a real highway project from Maryland indicates that integration of GIS, GAs, and CV greatly enhances the highway development process. [source]


    Enhancing Neural Network Traffic Incident-Detection Algorithms Using Wavelets

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 4 2001
    A. Samant
    Researchers have presented freeway traffic incident-detection algorithms by combining the adaptive learning capability of neural networks with the imprecision-modeling capability of fuzzy logic. In this article it is shown that the performance of a fuzzy neural network algorithm can be improved through preprocessing of data using a wavelet-based feature-extraction model. In particular, the discrete wavelet transform (DWT) denoising and feature-extraction model proposed by Samant and Adeli (2000) is combined with the fuzzy neural network approach presented by Hsiao et al. (1994). It is shown that substantial improvement can be achieved using the data filtered by DWT. Use of the wavelet theory to denoise the traffic data increases the incident-detection rate, reduces the false-alarm rate and the incident-detection time, and improves the convergence of the neural network training algorithm substantially. [source]
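
    The Samant and Adeli (2000) feature-extraction model itself is not reproduced here; the sketch below shows only the basic idea of wavelet denoising of a noisy traffic trace using PyWavelets, with a universal soft threshold. The wavelet choice, decomposition level, and synthetic detector trace are assumptions.

```python
import numpy as np
import pywt

def dwt_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients (universal threshold) and reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise scale estimated from the finest detail level via the median rule.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Hypothetical loop-detector trace: slow oscillation plus white measurement noise.
t = np.linspace(0, 1, 512)
clean = 20 + 10 * np.sin(2 * np.pi * 3 * t)
noisy = clean + np.random.normal(0, 2.0, t.size)
denoised = dwt_denoise(noisy)
print("RMSE before:", np.sqrt(np.mean((noisy - clean) ** 2)),
      "after:", np.sqrt(np.mean((denoised - clean) ** 2)))
```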


    Preliminary Highway Design with Genetic Algorithms and Geographic Information Systems

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 4 2000
    Jyh-Cherng Jong
    A method that integrates geographic information systems (GIS) with genetic algorithms (GAs) for optimizing horizontal highway alignments between two given end points is presented in this article. The proposed approach can be used to optimize alignments in highly irregular geographic spaces. The resulting alignments are smooth and satisfy minimum-radius constraints, as required by highway design standards. The objective function in the proposed model considers land-acquisition cost, environmental impacts such as wetlands and flood plains, length-dependent costs (which are proportional to the alignment length), and user costs. A numerical example based on a real map is employed to demonstrate application of the proposed model to the preliminary design of horizontal alignments. [source]


    An algorithm for the pharmacological treatment of depression

    ACTA PSYCHIATRICA SCANDINAVICA, Issue 3 2010
    J. Spijker
    Objective: Non-response to treatment with antidepressants (AD) is a clinical problem. Method: The algorithm for pharmacological treatment of the Dutch multidisciplinary guideline for depression is compared with four other algorithms. Results: The Dutch algorithm consists of five subsequent steps. Treatment is started with one out of many optional ADs (step 1); in case of non-response after 4–10 weeks, the best evidence is for switching to another AD (step 2); the next step is augmentation with lithium as the best option (step 3); the next step is a monoamine oxidase inhibitor (MAOI) (step 4); and finally electroconvulsive therapy (step 5). There are major differences with other algorithms regarding the timing of the augmentation step, the best agents for augmentation and the role of MAOIs. Conclusion: Algorithms for AD treatment vary according to national and local preferences. Although the evidence for most of the treatment strategies is rather meagre, an AD algorithm appears to be a useful instrument in clinical practice. [source]


    Algorithms for time synchronization of wireless structural monitoring sensors

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 6 2005
    Ying Lei
    Abstract Dense networks of wireless structural health monitoring systems can effectively remove the disadvantages associated with current wire-based sparse sensing systems. However, recorded data sets may have relative time-delays due to interference in radio transmission or inherent internal sensor clock errors. For structural system identification and damage detection purposes, sensor data must be time synchronized. The need for time synchronization of sensor data is illustrated through a series of tests on asynchronous data sets. Results from the identification of structural modal parameters show that frequencies and damping ratios are not influenced by the asynchronous data; however, the error in identifying structural mode shapes can be significant. The results from these tests are summarized in Appendix A. The objective of this paper is to present algorithms for measurement data synchronization. Two algorithms are proposed for this purpose. The first algorithm is applicable when the input signal to a structure can be measured. The time-delay between an output measurement and the input is identified based on an ARX (auto-regressive model with exogenous input) model for the input–output pair recordings. The second algorithm can be used for a structure subject to ambient excitation, where the excitation cannot be measured. An ARMAV (auto-regressive moving average vector) model is constructed from two output signals and the time-delay between them is evaluated. The proposed algorithms are verified with simulation data and recorded seismic response data from multi-story buildings. The influence of noise on the time-delay estimates is also assessed. Copyright © 2004 John Wiley & Sons, Ltd. [source]
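
    The paper identifies time-delays by fitting ARX and ARMAV models; as a much simpler stand-in, the sketch below estimates the delay between two records from the peak of their cross-correlation. The sampling rate, synthetic records, and the cross-correlation shortcut are assumptions for illustration only.

```python
import numpy as np

def estimate_delay(reference, delayed, dt):
    """Estimate the lag (in seconds) of `delayed` relative to `reference` from the
    peak of their cross-correlation. (The paper fits ARX/ARMAV models instead;
    cross-correlation is a simpler stand-in.)"""
    ref = reference - np.mean(reference)
    sig = delayed - np.mean(delayed)
    corr = np.correlate(sig, ref, mode="full")
    lag = np.argmax(corr) - (len(ref) - 1)
    return lag * dt

# Synthetic check: a sine record shifted by 12 samples at 100 Hz sampling.
dt, n, shift = 0.01, 1000, 12
t = np.arange(n) * dt
x = np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.random.randn(n)
y = np.roll(x, shift)
print(estimate_delay(x, y, dt))   # approximately 0.12 s
```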


    Validity of Revised Doppler Echocardiographic Algorithms and Composite Clinical and Angiographic Data in Diagnosis of Diastolic Dysfunction

    ECHOCARDIOGRAPHY, Issue 10 2006
    Kofo O. Ogunyankin M.D.
    Background: Commonly used echocardiographic indices for grading diastolic function predicated on mitral inflow Doppler analysis have a poor diagnostic concordance and discriminatory value. Even when combined with other indices, significant overlap prevents a single group assignment for many subjects. We tested the relative validity of echocardiographic and clinical algorithms for grading diastolic function in patients undergoing cardiac catheterization. Method: Patients (n = 115) had echocardiograms immediately prior to measuring left ventricular (LV) diastolic (pre-A, mean, end-diastolic) pressures. Diastolic function was classified into the traditional four stages, and into three stages using a new classification that obviates the pseudonormal class. Summative clinical and angiographic data were used in a standardized fashion to classify each patient according to the probability of abnormal diastolic function. Measured LV diastolic pressure in each patient was compared with expected diastolic pressures based on the clinical and echocardiographic classifications. Result: The group means of the diastolic pressures were identical in patients stratified by the four-stage or three-stage echocardiographic classification, indicating that the two classification schemes are interchangeable. When severe diastolic dysfunction was diagnosed by the three-stage classification, 88% and 12% of patients, respectively, were clinically classified as high and intermediate probability, and the mean LV pre-A pressure was >12 mmHg (P < 0.005). Conversely, the mean LV pre-A pressure in the clinical low-probability or echocardiographic normal groups was <11 mmHg. Conclusion: Use of a standardized clinical algorithm to define the probability of diastolic dysfunction identifies patients with elevated LV filling pressure to the same extent as echocardiographic methods. [source]


    The Evaluation Method of Smoothing Algorithms in Voltammetry

    ELECTROANALYSIS, Issue 22 2003
    Malgorzata Jakubowska
    Abstract A criterion for testing the influence of smoothing algorithms on the relevant parameters considered in an analytical experiment is presented. The proposed approach assumes that the improvement of the whole set of measured curves should be considered. The calibration curve parameters with confidence intervals, the correlation coefficient, the detection limit, the signal-to-noise ratio and the parameters of the recovery function are used for the evaluation. The performance of the evaluation method is demonstrated for several kinds of experimental noise. [source]
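
    The full evaluation criterion operates on a whole set of calibration curves; the sketch below illustrates only the signal-to-noise-ratio ingredient on a single synthetic voltammetric peak, smoothed here with a Savitzky-Golay filter. The choice of smoother, peak shape, and noise level are assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

def snr(signal, noise_estimate):
    """Peak height over the standard deviation of the noise estimate."""
    return np.max(signal) / np.std(noise_estimate)

# Synthetic voltammetric peak (Gaussian) on a linear baseline with white noise.
E = np.linspace(-0.6, 0.0, 400)                       # potential axis, V (illustrative)
clean = 1e-6 * np.exp(-((E + 0.3) / 0.03) ** 2) + 2e-8 * (E + 0.6)
noisy = clean + np.random.normal(0, 5e-8, E.size)

smoothed = savgol_filter(noisy, window_length=21, polyorder=3)
print("SNR raw:     ", snr(clean, noisy - clean))
print("SNR smoothed:", snr(clean, smoothed - clean))
print("Peak distortion:", abs(smoothed.max() - clean.max()) / clean.max())
```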


    Synergistic interaction of endocrine-disrupting chemicals: Model development using an ecdysone receptor antagonist and a hormone synthesis inhibitor

    ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 4 2004
    Xueyan Mu
    Abstract Endocrine toxicants can interfere with hormone signaling through various mechanisms. Some of these mechanisms are interrelated in a manner that might result in synergistic interactions. Here we tested the hypothesis that combined exposure to chemicals that inhibit hormone synthesis and that function as hormone receptor antagonists would result in greater-than-additive toxicity. This hypothesis was tested by assessing the effects of the ecdysteroid-synthesis inhibitor fenarimol and the ecdysteroid receptor antagonist testosterone on ecdysteroid-regulated development in the crustacean Daphnia magna. Both compounds were individually characterized for effects on the development of isolated embryos. Fenarimol caused late developmental abnormalities, consistent with its effect on offspring-derived ecdysone in the maturing embryo. Testosterone interfered with both early and late development of embryos, consistent with its ability to inhibit ecdysone provided by maternal transfer (responsible for early developmental events) or de novo ecdysone synthesis (responsible for late developmental events). We predicted that, by decreasing endogenous levels of hormone, fenarimol would enhance the likelihood of testosterone binding to and inhibiting the ecdysone receptor. Indeed, fenarimol enhanced the toxicity of testosterone, while testosterone had no effect on the toxicity of fenarimol. Algorithms were developed to predict the toxicity of combinations of these two compounds based on independent joint action (IJA) alone as well as IJA with fenarimol-on-testosterone synergy (IJA+SYN). The IJA+SYN model was highly predictive of the experimentally determined combined effects of the two compounds. These results demonstrate that some endocrine toxicants can synergize, and this synergy can be accurately predicted. [source]
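
    The independent joint action (response addition) part of the prediction can be written in one line; the sketch below shows that baseline combination rule with hypothetical single-compound effects. The paper's IJA+SYN model additionally adjusts for the fenarimol-on-testosterone synergy, which is not modeled here.

```python
def ija_combined_effect(effect_a, effect_b):
    """Independent joint action (response addition): probability that a subject
    responds to at least one of two independently acting toxicants.
    Effects are fractions in [0, 1]."""
    return 1.0 - (1.0 - effect_a) * (1.0 - effect_b)

# Hypothetical single-compound effects at the tested concentrations.
print(ija_combined_effect(0.30, 0.20))   # 0.44 expected if there is no interaction
```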


    Dipole Tracing Examination for the Electric Source of Photoparoxysmal Response Provoked by Half-Field Stimulation

    EPILEPSIA, Issue 2000
    Kazuhiko Kobayashi
    Purpose: Dipole tracing (DT) is a computer-aided noninvasive method used to estimate the location of epileptic discharges from the scalp EEG. In DT, equivalent current dipoles (ECDs), which reflect the electric source in the brain, are responsible for the potential distribution on the scalp EEG. Therefore, the DT method is useful to estimate focal paroxysmal discharges. In this study we examined the location of the electric source of the photoparoxysmal response (PPR) using scalp-skull-brain dipole tracing (SSB-DT) after half-field stimulation, which produced focal PPR on the scalp EEG. Methods: We studied 4 cases of photosensitive epilepsy. We performed 20 Hz red flicker and flickering dot pattern half-field stimulation to provoke PPR. In this method, the loci of generators corresponding to the paroxysmal discharges were estimated as ECDs by 1- and 2-dipole analyses. Each location of the ECDs was estimated by iterative calculation. Algorithms minimizing the squared difference between the electrical potentials recorded from the scalp EEG and those calculated theoretically from the voluntary dipoles were used. In the SSB model, the scalp shell was reconstructed from the helmet measurements, and the shape of the skull and brain was 3-dimensionally reconstructed from CT images. A dipolarity larger than 98% was taken as the criterion for the accuracy of the estimation. We recorded the 21-channel monopolar scalp EEG of each patient. Each spike was sampled and analyzed at 10 points around the peaks of at least 10 spikes in each patient using the SSB-DT method. The ECDs were then superimposed on the MRI of each patient to identify the more exact anatomical region. Results: This study showed the location of each focus and a dipolarity of greater than 98% in all cases, although the results from the 2-dipole method showed scattered locations. We considered that the analyzed signals were generated from a single source. PPR was elicited cross-lateral to the field stimulated. By red flicker half-field stimulation, EEG revealed either focal spikes and waves in the contralateral occipital or temporo-occipital region, or diffuse spike-and-wave complex bursts, seen dominantly at the contralateral hemisphere. The superimposed ECDs on MRI were located at the occipital or inferior temporal lobe. PPRs provoked by flickering dot pattern half-field stimulation were focal spikes and waves, mainly in the occipital or parieto-occipital region, or diffuse spike-and-wave complex bursts, seen dominantly at the contralateral hemisphere. The ECDs of these PPRs were located in the occipital, inferior temporal, or inferior parietal lobules on MRI. Conclusion: Our findings suggest that the inferior temporal and inferior parietal lobules, which are important for the processing sequence of the visual system, in addition to the occipital lobe, might be responsible for the mechanism of PPR by half-field stimulation, especially for electric source expansion. [source]


    Self-Regulated Learning in a TELE at the Université de Technologie de Compiègne: an analysis from multiple perspectives

    EUROPEAN JOURNAL OF EDUCATION, Issue 3-4 2006
    PHILIPPE TRIGANO
    Self-regulation has become a very important topic in the field of learning and instruction. At the same time, the introduction of new technologies in the field of Information and Communication Technologies (ICT) has made it possible to create rich Technology-Enhanced Learning Environments (TELEs) with multiple affordances for supporting self-regulated learning (SRL). This study was conducted within the framework of the TELEPEERS project where we wanted to identify TELEs that seemed to have a potential for supporting SRL. For the last ten years, our University has been deeply involved in research, innovation, and exploration of digital technologies for training (initial and continuous). Local, regional, national, European and international projects were conceived and developed, so that a very significant knowledge base exists today. Our study focuses on a course called 'Introduction to Algorithms and Programming' (NF01) which our University is offering and on the perception of different stakeholders (experts and students) of its affordances for supporting SRL. [source]


    Simple On-Line Scheduling Algorithms for All-Optical Broadcast-and-Select Networks

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 1 2000
    Marco Ajmone Marsan
    This paper considers all-optical broadcast networks providing a number of slotted WDM channels for packet communications. Each network user is equipped with one tunable transmitter and one fixed receiver, so that full connectivity can be achieved by tuning transmitters to the different wavelengths. Tuning times are not negligible with respect to the slot time. A centralized network controller allocates slots in a TDWDM frame according to (long-term) bandwidth requests issued by users. Simple on-line transparent scheduling strategies are proposed, which accommodate bandwidth requests when they are received (on-line approach), with the constraint of not affecting existing allocations when a new request is served (transparency). Strategies that attempt to allocate in contiguous slots all the transmissions of each source on one wavelength reduce overheads, are simple, and provide good performance. Even better performance can be achieved, at the cost of a modest complexity increase, when the transparency constraint is not strictly imposed, i.e., when a full re-allocation of existing connections is performed once in a while. [source]
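
    A minimal sketch of the transparent, on-line allocation idea is given below: a new request is served by a first-fit search for contiguous free slots on the destination's wavelength, skipping slots where the source already transmits, and existing allocations are never moved. The data structures, the single-frame scope, and the omission of tuning-gap slots are simplifying assumptions.

```python
def allocate_contiguous(frame, source_busy, wavelength, source, n_slots):
    """First-fit: find `n_slots` contiguous free slots on `wavelength` during which
    `source` is not already transmitting on another wavelength.
    `frame[w]` is a list of slot owners (None = free); `source_busy[s]` is the set
    of slot indices where source s already transmits.
    Existing allocations are never moved (transparency constraint)."""
    slots = frame[wavelength]
    run = 0
    for i, owner in enumerate(slots):
        if owner is None and i not in source_busy[source]:
            run += 1
            if run == n_slots:
                start = i - n_slots + 1
                for j in range(start, i + 1):
                    slots[j] = source
                    source_busy[source].add(j)
                return start
        else:
            run = 0
    return None   # request cannot be served transparently in this frame

# Toy frame: 2 wavelengths x 8 slots, one pre-existing slot already used by source "A".
frame = {0: [None] * 8, 1: [None] * 8}
frame[1][2] = "A"
source_busy = {"A": {2}, "B": set()}
print(allocate_contiguous(frame, source_busy, 0, "A", 3))   # 3: slots 3-5 (slot 2 skipped, A busy)
print(allocate_contiguous(frame, source_busy, 0, "B", 4))   # None: no 4-slot gap fits transparently
```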


    Development of an Expert System Shell Based on Genetic Algorithms for the Selection of the Energy Best Available Technologies and their Optimal Operating Conditions for the Process Industry

    EXPERT SYSTEMS, Issue 3 2001
    D.A. Manolas
    The development of genetic algorithms started almost three decades ago in an attempt to imitate the mechanics of natural systems. Since their inception, they have been applied successfully as optimization methods, and as expert systems, in many diverse applications. In this paper, a genetic-algorithm-based expert system shell is presented that, when combined with a proper database comprising the available energy-saving technologies for the process industry, is able to perform the following tasks: (a) identify the best available technologies (BATs) among the available ones for a given process industry, and (b) calculate their optimal design parameters in such a way that they comply with the energy requirements of the process. The term BAT here denotes the energy-saving technology, among those available on the market, that is best suited to the case at hand. [source]


    The Dialoguer: An Interactive Bilingual Interface to a Network Operating System

    EXPERT SYSTEMS, Issue 3 2001
    Emad Al-Shawakfa
    We have developed a bilingual interface to the Novell network operating system, called the Dialoguer. This system carries on a conversation with the user in Arabic or English or a combination of the two and attempts to help the user use the Novell network operating system. Learning to use an operating system is a major barrier in starting to use computers. There is no single standard for operating systems, which makes it difficult for novice users to learn a new operating system. With the proliferation of client–server environments, users will eventually end up using one network operating system or another. These problems motivated our choice of an area to work in and they have made it easy to find real users to test our system. This system is both an expert system and a natural language interface. The system embodies expert knowledge of the operating system commands and of a large variety of plans that the user may want to carry out. The system also contains a natural language understanding component and a response generation component. The Dialoguer makes extensive use of case frame tables in both components. Algorithms for handling a bilingual dialogue are one of the important contributions of this paper, along with the Arabic case frames. [source]


    Equations of state for basin geofluids: algorithm review and intercomparison for brines

    GEOFLUIDS (ELECTRONIC), Issue 4 2002
    J. J. Adams
    ABSTRACT Physical properties of formation waters in sedimentary basins can vary by more than 25% for density and by one order of magnitude for viscosity. Density differences may enhance or retard flow driven by other mechanisms and can initiate buoyancy-driven flow. For a given driving force, the flow rate and injectivity depend on viscosity and permeability. Thus, variations in the density and viscosity of formation waters may have, or may have had, a significant effect on the flow pattern in a sedimentary basin, with consequences for various basin processes. Therefore, it is critical to correctly estimate water properties at formation conditions for proper representation and interpretation of present flow systems, and for numerical simulations of basin evolution, hydrocarbon migration, ore genesis, and the fate of injected fluids in sedimentary basins. Algorithms published over the years to calculate water density and viscosity as a function of temperature, pressure and salinity are based on empirical fitting of laboratory-measured properties of predominantly NaCl solutions, but also field brines. A review and comparison of various algorithms are presented here, both in terms of applicability range and estimates of density and viscosity. The paucity of measured formation-water properties at in situ conditions hinders a definitive conclusion regarding the validity of any of these algorithms. However, the comparison indicates the versatility of the various algorithms in various ranges of conditions found in sedimentary basins. The applicability of these algorithms to the density of formation waters in the Alberta Basin is also examined using a high-quality database of 4854 water analyses. Consideration is also given to the percentage of cations that are heavier than Na in the waters. [source]


    Evaluation of Three Algorithms to Identify Incident Breast Cancer in Medicare Claims Data

    HEALTH SERVICES RESEARCH, Issue 5 2007
    Heather T. Gold
    Objective. To test the validity of three published algorithms designed to identify incident breast cancer cases using recent inpatient, outpatient, and physician insurance claims data. Data. The Surveillance, Epidemiology, and End Results (SEER) registry data linked with Medicare physician, hospital, and outpatient claims data for breast cancer cases diagnosed from 1995 to 1998 and a 5 percent control sample of Medicare beneficiaries in SEER areas. Study Design. We evaluate the sensitivity and specificity of three algorithms applied to new data compared with original reported results. Algorithms use health insurance diagnosis and procedure claims codes to classify breast cancer cases, with SEER as the reference standard. We compare algorithms by age, stage, race, and SEER region, and explore via logistic regression whether adding demographic variables improves algorithm performance. Principal Findings. The sensitivity of two of three algorithms is significantly lower when applied to newer data, compared with sensitivity calculated during algorithm development (59 and 77.4 percent versus 90 and 80.2 percent, p<.00001). Sensitivity decreases as age increases, and false negative rates are higher for cases with in situ, metastatic, and unknown stage disease compared with localized or regional breast cancer. Substantial variation also exists by SEER registry. There was potential for improvement in algorithm performance when adding age, region, and race to an indicator variable for whether the algorithm determined a subject to be a breast cancer case (p<.00001). Conclusions. Differential sensitivity of the algorithms by SEER region and age likely reflects variation in practice patterns, because the algorithms rely on administrative procedure codes. Depending on the algorithm, 3–5 percent of subjects overall are misclassified in 1998. Misclassification disproportionately affects older women and those diagnosed with in situ, metastatic, or unknown-stage disease. Algorithms should be applied cautiously to insurance claims databases to assess health care utilization outside SEER-Medicare populations because of uneven misclassification of subgroups that may be understudied already. [source]
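
    Validation of a claims-based case-finding algorithm against the registry reduces to a confusion matrix; the sketch below computes sensitivity and specificity for a hypothetical toy cohort. The flags are made up, and the real study additionally stratifies by age, stage, race, and registry.

```python
def sensitivity_specificity(algorithm_flags, registry_flags):
    """Compare a claims-based case-finding algorithm against the registry reference
    standard (both are parallel lists of booleans, one entry per subject)."""
    pairs = list(zip(algorithm_flags, registry_flags))
    tp = sum(a and r for a, r in pairs)
    fn = sum((not a) and r for a, r in pairs)
    tn = sum((not a) and (not r) for a, r in pairs)
    fp = sum(a and (not r) for a, r in pairs)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical toy cohort: algorithm flags vs. registry case status.
alg = [True, True, False, False, True, False, False, False]
registry = [True, False, False, True, True, False, False, False]
sens, spec = sensitivity_specificity(alg, registry)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```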


    Nonoperative imaging techniques in suspected biliary tract obstruction

    HPB, Issue 6 2006
    Frances Tse
    Abstract Evaluation of suspected biliary tract obstruction is a common clinical problem. Clinical data such as history, physical examination, and laboratory tests can accurately identify up to 90% of patients whose jaundice is caused by extrahepatic obstruction. However, complete assessment of extrahepatic obstruction often requires the use of various imaging modalities to confirm the presence, level, and cause of obstruction, and to aid in treatment planning. In the present summary, the literature on competing technologies including endoscopic retrograde cholangiopancreatography (ERCP), percutaneous transhepatic cholangiopancreatography (PTC), endoscopic ultrasound (EUS), intraductal ultrasonography (IDUS), magnetic resonance cholangiopancreatography (MRCP), helical CT (hCT) and helical CT cholangiography (hCTC) with regard to diagnostic performance characteristics, technical success, safety, and cost-effectiveness is reviewed. Patients with obstructive jaundice secondary to choledocholithiasis or pancreaticobiliary malignancies are the primary focus of this review. Algorithms for the management of suspected obstructive jaundice are put forward based on current evidence. Published data suggest an increasing role for EUS and other noninvasive imaging techniques such as MRCP and hCT following an initial transabdominal ultrasound in the assessment of patients with suspected biliary obstruction to select candidates for surgery or therapeutic ERCP. The management of patients with a suspected pancreaticobiliary condition ultimately is dependent on local expertise, availability, cost, and the multidisciplinary collaboration between radiologists, surgeons, and gastroenterologists. [source]


    Simulation Monte Carlo methods in extended stochastic volatility models

    INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 2 2002
    Miroslav Šimandl
    A new technique for nonlinear state and parameter estimation of discrete-time stochastic volatility models is developed. Gibbs sampler and simulation filter algorithms are used to construct a simulation tool that reflects both inherent model variability and parameter uncertainty. The proposed chain converges to equilibrium, enabling the estimation of unobserved volatilities and unknown model parameter distributions. The estimation algorithm is illustrated using numerical examples. Copyright © 2002 John Wiley & Sons, Ltd. [source]
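
    The paper's full scheme learns the parameters by MCMC; the sketch below shows only a simulation-filter ingredient: a bootstrap particle filter that tracks the unobserved log-volatility of a basic stochastic volatility model with the parameters held fixed. The model parameterisation, parameter values, and particle count are assumptions for illustration.

```python
import numpy as np

def sv_particle_filter(returns, mu, phi, sigma_eta, n_particles=2000, seed=0):
    """Bootstrap particle filter for the basic stochastic volatility model
        h_t = mu + phi*(h_{t-1} - mu) + sigma_eta*eta_t,   y_t = exp(h_t/2)*eps_t.
    Returns the filtered mean of the log-volatility h_t (parameters fixed here;
    the paper estimates them jointly via MCMC)."""
    rng = np.random.default_rng(seed)
    h = rng.normal(mu, sigma_eta / np.sqrt(1 - phi**2), n_particles)  # stationary draw
    filtered = np.empty(len(returns))
    for t, y in enumerate(returns):
        h = mu + phi * (h - mu) + sigma_eta * rng.standard_normal(n_particles)
        logw = -0.5 * (h + y**2 * np.exp(-h))      # N(0, exp(h)) measurement density
        w = np.exp(logw - logw.max())
        w /= w.sum()
        filtered[t] = np.sum(w * h)
        h = rng.choice(h, size=n_particles, replace=True, p=w)        # resample
    return filtered

# Simulate a short series from the same model and filter it back.
rng = np.random.default_rng(1)
T, mu, phi, sig = 300, -1.0, 0.97, 0.15
h_true = np.empty(T)
h_true[0] = mu
for t in range(1, T):
    h_true[t] = mu + phi * (h_true[t - 1] - mu) + sig * rng.standard_normal()
y = np.exp(h_true / 2) * rng.standard_normal(T)
h_hat = sv_particle_filter(y, mu, phi, sig)
print("correlation of true and filtered log-volatility:",
      round(float(np.corrcoef(h_true, h_hat)[0, 1]), 2))
```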


    Broken symmetry approach and chemical susceptibility of carbon nanotubes

    INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 8 2010
    Elena F. Sheka
    Abstract Constituting a part of odd electrons that are excluded from the covalent bonding, effectively unpaired electrons are posed by the singlet instability of the single-determinant broken spin-symmetry unrestricted Hartree,Fock (UBS HF) SCF solution. The correct determination of the total number of effectively unpaired electrons ND and its fraction on each NDĄ atom is well provided by the UBS HF solution. The NDĄ value is offered to be a quantifier of atomic chemical susceptibility (or equivalently, reactivity) thus highlighting targets that are the most favorable for addition reactions of any type. The approach is illustrated for two families involving fragments of arm-chair (n,n) and zigzag (m,0) single-walled nanotubes different by the length and end structure. Short and long tubes as well as tubes with capped end and open end, in the latter case, both hydrogen terminated and empty, are considered. Algorithms of the quantitative description of any length tubes are suggested. © 2009 Wiley Periodicals, Inc. Int J Quantum Chem, 2010 [source]