Commercial Software (commercial + software)



Selected Abstracts


Explicit calculation of smoothed sensitivity coefficients for linear problems

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 2 2003
R. A. Białecki
Abstract A technique for explicit calculation of sensitivity coefficients, based on approximating the retrieved function by a linear combination of trial functions of compact support, is presented. The method is applicable to steady-state and transient linear inverse problems in which unknown distributions of boundary fluxes, temperatures, initial conditions or source terms are retrieved. The sensitivity coefficients are obtained by solving a sequence of boundary value problems whose boundary conditions and source term are homogeneous except for one term, which is taken as each successive trial function. Depending on the type of the retrieved function, this inhomogeneous term may appear in the boundary conditions (Dirichlet or Neumann), the initial conditions or the source term. Commercial software and analytic techniques can be used to solve this sequence of boundary value problems, producing the required sensitivity coefficients. The choice of the approximating functions guarantees filtering of high-frequency errors. Several numerical examples are included in which the sensitivity coefficients are used to retrieve unknown values of transient boundary fluxes and volumetric sources. Analytic, boundary-element and finite-element techniques are employed in the study. Copyright © 2003 John Wiley & Sons, Ltd. [source]
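
To make the superposition structure concrete, the following sketch is a minimal 1D analogue of the technique described above, under assumptions of our own (not the paper's code): a steady heat equation discretized by finite differences, hat functions as the compact-support trial basis, and a volumetric source retrieved from sparse temperature measurements. One forward solve per trial function yields the sensitivity coefficients, and the low-dimensional basis itself supplies the high-frequency filtering the abstract mentions.

```python
import numpy as np

# Minimal 1D analogue: -u'' = f on (0, 1), u(0) = u(1) = 0; retrieve the
# source f from sparse interior temperature "measurements".
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# Finite-difference Laplacian on interior nodes (homogeneous Dirichlet BCs)
L = (np.diag(2.0 * np.ones(n - 2)) +
     np.diag(-np.ones(n - 3), 1) +
     np.diag(-np.ones(n - 3), -1)) / h**2

solve = lambda f: np.linalg.solve(L, f)

# Compact-support (hat function) trial basis for the unknown source
m = 8
nodes = np.linspace(0.0, 1.0, m)
hat = lambda j, xs: np.interp(xs, nodes, np.eye(m)[j])

# One homogeneous-except-one-term solve per trial function -> sensitivity fields
S = np.column_stack([solve(hat(j, x[1:-1])) for j in range(m)])

# Synthetic measurements from a known source, with noise
f_true = np.exp(-30.0 * (x[1:-1] - 0.4)**2)
rng = np.random.default_rng(0)
sensors = np.arange(5, n - 2, 9)
d = solve(f_true)[sensors] + 1e-5 * rng.standard_normal(sensors.size)

# Least-squares retrieval of the trial-function coefficients
c, *_ = np.linalg.lstsq(S[sensors, :], d, rcond=None)
f_retrieved = sum(c[j] * hat(j, x) for j in range(m))
print("retrieved coefficients:", np.round(c, 3))
```

Because the unknown lives in an 8-dimensional basis rather than on the full grid, measurement noise cannot excite grid-scale oscillations in the retrieved source, which is the filtering property claimed above.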


Branch-and-Price Methods for Prescribing Profitable Upgrades of High-Technology Products with Stochastic Demands*

DECISION SCIENCES, Issue 1 2004
Purushothaman Damodaran
ABSTRACT This paper develops a model that can be used as a decision support aid, helping manufacturers make profitable decisions in upgrading the features of a family of high-technology products over its life cycle. The model integrates various organizations in the enterprise: product design, marketing, manufacturing, production planning, and supply chain management. Customer demand is assumed random and this uncertainty is addressed using scenario analysis. A branch-and-price (B&P) solution approach is devised to optimize the stochastic problem effectively. Sets of random instances are generated to evaluate the effectiveness of our solution approach in comparison with that of commercial software on the basis of run time. Computational results indicate that our approach outperforms commercial software on all of our test problems and is capable of solving practical problems in reasonable run time. We present several examples to demonstrate how managers can use our models to answer "what if" questions. [source]
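
The branch-and-price machinery itself is beyond a short example, but the scenario-analysis treatment of random demand can be sketched. In the toy fragment below, which is our illustration with hypothetical names and figures, a candidate upgrade plan is scored by its expected profit across discrete demand scenarios; in a B&P formulation, scenario-weighted profits of roughly this kind enter the objective while column generation prices out candidate plans.

```python
# Hypothetical scenario analysis for one upgrade decision: demand uncertainty
# is captured by discrete scenarios with probabilities, and each candidate
# plan is scored by its expected profit across scenarios.
scenarios = {"low": 0.3, "base": 0.5, "high": 0.2}          # P(scenario)
profit = {                                                   # profit (k$)
    "keep_current":  {"low": 40, "base": 55, "high": 60},
    "minor_upgrade": {"low": 30, "base": 65, "high": 85},
    "major_upgrade": {"low": 5,  "base": 60, "high": 120},
}

def expected_profit(plan: str) -> float:
    return sum(p * profit[plan][s] for s, p in scenarios.items())

for plan in profit:
    print(f"{plan:14s} E[profit] = {expected_profit(plan):6.1f}")
print("choose:", max(profit, key=expected_profit))
```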


The perturbation method and the extended finite element method: an application to fracture mechanics problems

FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 8 2006
ABSTRACT The extended finite element method has been successful in the numerical simulation of fracture mechanics problems. With this methodology, unlike the conventional finite element method, discretization of the domain with a mesh adapted to the geometry of the discontinuity is not required. On the other hand, in traditional fracture mechanics all variables are considered deterministic (uniquely defined by a given numerical value). However, the uncertainty associated with these variables (external loads, geometry and material properties, among others) is well known. This paper presents a novel application of the perturbation method along with the extended finite element method to treat these uncertainties. The methodology has been implemented in commercial software and the results are compared with those obtained by means of a Monte Carlo simulation. [source]
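
The comparison the abstract describes can be miniaturized on a scalar response. Assuming, purely for illustration, the centre-crack stress intensity factor K = σ√(πa) with random load σ and crack half-length a (our surrogate, not the paper's model), the sketch propagates mean and variance by a first-order perturbation expansion and checks the result against Monte Carlo sampling.

```python
import numpy as np

# Surrogate response (illustrative, not the paper's model): centre-crack
# stress intensity factor K = sigma * sqrt(pi * a).
K = lambda s, a: s * np.sqrt(np.pi * a)

mu_s, sd_s = 100.0, 10.0      # load mean / std [MPa]            (hypothetical)
mu_a, sd_a = 0.010, 0.001     # crack half-length mean / std [m] (hypothetical)

# First-order perturbation: mean at the mean point, variance from gradients
dK_ds = np.sqrt(np.pi * mu_a)
dK_da = mu_s * np.sqrt(np.pi) / (2.0 * np.sqrt(mu_a))
mean_pert = K(mu_s, mu_a)
std_pert = np.hypot(dK_ds * sd_s, dK_da * sd_a)

# Monte Carlo reference, mirroring the paper's comparison
rng = np.random.default_rng(1)
mc = K(rng.normal(mu_s, sd_s, 200_000), rng.normal(mu_a, sd_a, 200_000))
print(f"perturbation: mean = {mean_pert:.2f}, std = {std_pert:.2f}")
print(f"Monte Carlo : mean = {mc.mean():.2f},  std = {mc.std():.2f}")
```

The perturbation route costs one response evaluation plus gradients, against hundreds of thousands of samples for Monte Carlo, which is the trade-off motivating the paper.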


Addressing non-uniqueness in linearized multichannel surface wave inversion

GEOPHYSICAL PROSPECTING, Issue 1 2009
Michele Cercato
ABSTRACT The multichannel analysis of the surface waves method is based on the inversion of observed Rayleigh-wave phase-velocity dispersion curves to estimate the shear-wave velocity profile of the site under investigation. This inverse problem is nonlinear and it is often solved using 'local' or linearized inversion strategies. Among linearized inversion algorithms, least-squares methods are widely used in research and prevailing in commercial software; the main drawback of this class of methods is their limited capability to explore the model parameter space. The possibility for the estimated solution to be trapped in local minima of the objective function strongly depends on the degree of nonuniqueness of the problem, which can be reduced by an adequate model parameterization and/or imposing constraints on the solution. In this article, a linearized algorithm based on inequality constraints is introduced for the inversion of observed dispersion curves; this provides a flexible way to insert a priori information as well as physical constraints into the inversion process. As linearized inversion methods are strongly dependent on the choice of the initial model and on the accuracy of partial derivative calculations, these factors are carefully reviewed. Attention is also focused on the appraisal of the inverted solution, using resolution analysis and uncertainty estimation together with a posteriori effective-velocity modelling. Efficiency and stability of the proposed approach are demonstrated using both synthetic and real data; in the latter case, cross-hole S-wave velocity measurements are blind-compared with the results of the inversion process. [source]
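
One linearized iteration with inequality constraints reduces to a bounded least-squares problem, which a stock solver handles directly. The sketch below is our illustration, not the authors' algorithm: G stands in for the matrix of partial derivatives of phase velocity with respect to each layer's shear-wave velocity, and the bounds encode the a priori constraints.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(2)

# Synthetic linearized problem d = G m.  In a real inversion, G would hold
# the partial derivatives of Rayleigh phase velocity w.r.t. each layer's Vs.
n_freq, n_layer = 20, 5
G = rng.uniform(0.0, 1.0, (n_freq, n_layer))
m_true = np.array([180.0, 240.0, 320.0, 420.0, 600.0])    # layer Vs [m/s]
d = G @ m_true + rng.normal(0.0, 1.0, n_freq)             # noisy data

# A priori inequality constraints on each layer's shear-wave velocity
lower = np.array([100.0, 150.0, 200.0, 300.0, 450.0])
upper = np.array([250.0, 350.0, 450.0, 600.0, 800.0])

result = lsq_linear(G, d, bounds=(lower, upper))
print("recovered Vs [m/s]:", np.round(result.x, 1))
```

In the full algorithm, a bounded solve of this kind replaces the unconstrained least-squares update at every linearized step, which is one way the solution is kept physically plausible and steered away from spurious minima.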


Design spaces, measures and metrics for evaluating quality of time operators and consequences leading to improved algorithms by design: illustration to structural dynamics

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 14 2005
X. Zhou
Abstract For the first time, for time discretized operators, we describe and articulate the importance and notion of design spaces and algorithmic measures that not only provide new avenues for improved algorithms by design, but also distinguish, in general, the quality of computational algorithms for time-dependent problems; the particular emphasis is on structural dynamics applications for the purpose of illustrating and demonstrating the basic concepts (the underlying concepts can be extended to other disciplines as well). For further developments in time discretized operators and/or for evaluating existing methods, the established measures for computational algorithms lead to the conclusion that the most effective computational algorithm (in the sense of convergence, namely stability and accuracy, and complexity, namely algorithmic formulation and algorithmic structure) should appear in a certain algorithmic structure of the design space amongst comparable algorithms. With this conclusion, and also with the notion of providing new avenues leading to improved algorithms by design, as an illustration, a novel computational algorithm which departs from the traditional paradigm (in the sense of the linear multi-step (LMS) methods with which we are most familiar and which are widely used in commercial software) is designed within the perspective design space representation of comparable algorithms; it is termed here the forward displacement non-linearly explicit L-stable (FDEL) algorithm, which is unconditionally consistent and does not require non-linear iterations within each time step. From the established measures for comparable algorithms, simply for illustration purposes, the resulting FDEL formulation is then compared with the commonly advocated explicit central difference method and the implicit Newmark average acceleration method (alternately, the same conclusion holds true against controllable numerically dissipative algorithms), which pertain to the class of LMS methods, for both linear and non-linear dynamic cases. The conclusion that the proposed FDEL algorithm, which is a direct consequence of the present notion of design spaces and measures, is the most effective algorithm to date, to our knowledge, in comparison to the class of second-order accurate algorithms pertaining to LMS methods for routine and general non-linear dynamic situations, is finally drawn through rigorous numerical experiments. Copyright © 2005 John Wiley & Sons, Ltd. [source]
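
For orientation, the explicit central difference method named as a baseline above can be written in a few lines. The sketch below is ours, for a single-degree-of-freedom oscillator with assumed parameters; it shows the conditionally stable LMS scheme the FDEL design is compared against. The FDEL algorithm itself is not reproduced here.

```python
import numpy as np

# SDOF oscillator m u'' + c u' + k u = 0, explicit central difference.
m, c, k = 1.0, 0.1, 100.0               # assumed mass, damping, stiffness
omega_n = np.sqrt(k / m)
dt = 0.01
assert dt < 2.0 / omega_n, "central difference is only conditionally stable"

nsteps = 1000
u = np.zeros(nsteps)
u[0] = 1.0                         # initial displacement, zero initial velocity
a0 = -k * u[0] / m                 # initial acceleration (no load, v0 = 0)
u_prev = u[0] + 0.5 * dt**2 * a0   # fictitious start-up value u_{-1}

A = m / dt**2 + c / (2.0 * dt)
for i in range(nsteps - 1):
    rhs = -(k - 2.0 * m / dt**2) * u[i] - (m / dt**2 - c / (2.0 * dt)) * u_prev
    u_prev, u[i + 1] = u[i], rhs / A

print("displacement after", nsteps * dt, "s:", u[-1])
```

The assert line flags the conditional stability (dt < 2/ωn) that an L-stable design such as FDEL is intended to escape.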


Numerical simulation of dense gas flows on unstructured grids with an implicit high resolution upwind Euler solver

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 7 2004
P. Colonna
Abstract The study of the dense gas flows which occur in many technological applications demands fluid dynamic simulation tools incorporating complex thermodynamic models that are not usually available in commercial software. Moreover, such tools can be used to study very interesting phenomena that usually go under the name of 'non-classical gasdynamics', which are theoretically predicted for high-molecular-weight fluids in the superheated region, close to saturation. This paper presents the numerical methods and models implemented in a computer code named zFlow which is capable of simulating inviscid dense gas flows in complex geometries. A detailed description of the space discretization method used to approximate the Euler equations on unstructured grids and for general equations of state, and a summary of the thermodynamic functions required by the mentioned formulation, are also given. The performance of the code is demonstrated by presenting two applications: the calculation of the transonic flow around an airfoil computed with both the ideal gas and a complex equation of state, and the simulation of the non-classical phenomena occurring in a supersonic flow between two staggered sinusoidal blades. Non-classical effects are simulated in a supersonic flow of a siloxane using a Peng-Robinson-type equation of state. Siloxanes are a class of substances used as working fluids in organic Rankine cycle turbines. Copyright © 2004 John Wiley & Sons, Ltd. [source]
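
To make the "complex equation of state" concrete, here is a minimal Peng-Robinson pressure evaluation in its generic textbook form, not zFlow's implementation; the critical constants below are placeholders, so real siloxane properties would have to be substituted.

```python
import numpy as np

R = 8.314462618  # universal gas constant [J mol^-1 K^-1]

def peng_robinson_pressure(T, v, Tc, pc, omega):
    """p(T, v) from the Peng-Robinson equation of state (molar volume v)."""
    a = 0.45724 * R**2 * Tc**2 / pc
    b = 0.07780 * R * Tc / pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    # p = RT/(v - b) - a*alpha / (v^2 + 2 b v - b^2)
    return R * T / (v - b) - a * alpha / (v * (v + b) + b * (v - b))

# Placeholder critical constants (NOT real siloxane data)
Tc, pc, omega = 520.0, 1.9e6, 0.45
print(peng_robinson_pressure(T=500.0, v=2.0e-3, Tc=Tc, pc=pc, omega=omega))
```

In a dense gas solver this p(T, v) relation, rather than the ideal gas law, closes the Euler equations, which is why the thermodynamic function library the abstract mentions is needed.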


Cell proliferation and cell cycle control: a mini review

INTERNATIONAL JOURNAL OF CLINICAL PRACTICE, Issue 12 2004
C.H. Golias
Summary Tumourigenesis is the result of cell cycle disorganisation, leading to uncontrolled cellular proliferation. Specific cellular processes and mechanisms that control cell cycle progression and checkpoint traversal through the intermitotic phases are deregulated. Normally, these events are highly conserved due to the existence of conservatory mechanisms and molecules such as cell cycle genes and their products: cyclins, cyclin-dependent kinases (Cdks), Cdk inhibitors (CKIs) and extracellular factors (i.e. growth factors). Revolutionary techniques using laser cytometry and commercial software are available to quantify and evaluate cell cycle processes and cellular growth. S-phase fraction measurements, including ploidy values obtained from histograms, and the estimation of indices such as the mitotic index and tumour doubling time provide adequate information for the clinician to evaluate tumour aggressiveness and prognosis and to plan strategies for radiotherapy and chemotherapy in experimental research. [source]


Transient thermal modelling of heat recovery steam generators in combined cycle power plants

INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 11 2007
Sepehr Sanaye
Abstract A heat recovery steam generator (HRSG) is a major component of a combined cycle power plant (CCPP). This equipment is subject to severe thermal stress, especially during the cold start-up period. Hence, it is important to predict the operational parameters of HRSGs, such as the temperatures of steam, water, hot gas and heating-element tube metal, as well as the pressure change in drums, during transient and steady-state operation. These parameters may be used for estimating thermal and mechanical stresses, which are important in HRSG design and operation. In this paper, the results of a developed thermal model for predicting the working conditions of HRSG elements during transient and steady-state operations are reported. The model is capable of analysing an arbitrary number of pressure levels and any number of elements such as superheater, evaporator, economizer, deaerator, desuperheater and reheater, as well as duct burners. To assess the performance of the developed model, two kinds of data verification were performed. In the first, the program output was compared with measured data collected from a cold start-up of an HRSG at the Tehran CCPP. The variations of gas, water/steam and metal temperatures at various sections of the HRSG, and the pressure in drums, were among the studied parameters. Mean differences of about 3.8% for temperature and about 9.2% for pressure were observed in this data comparison. In the second, the steady-state numerical output of the model was checked against the output of well-known commercial software. An average difference of about 1.5% was found between the two latter groups of data. Copyright © 2007 John Wiley & Sons, Ltd. [source]
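
The flavour of the transient element equations can be conveyed with a lumped-capacitance sketch: a single ODE for the metal temperature of one tube bank warmed by hot gas during a cold start-up. This is our simplification with invented parameters; the paper's model couples many elements and tracks drum pressure as well.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lumped metal temperature of one tube bank during a cold start-up.
# Hypothetical parameters: UA [W/K]; metal mass times specific heat [J/K].
UA, m_cp = 5.0e4, 2.0e4 * 0.5e3

def gas_temperature(t):
    """Gas-side temperature ramp during start-up (assumed profile)."""
    return 300.0 + min(t / 600.0, 1.0) * 450.0   # 300 K -> 750 K over 10 min

def rhs(t, T):
    # Energy balance on the metal: m*cp*dT/dt = UA*(T_gas - T_metal)
    return UA * (gas_temperature(t) - T) / m_cp

sol = solve_ivp(rhs, (0.0, 3600.0), [300.0], dense_output=True)
for t in (0, 600, 1800, 3600):
    print(f"t = {t:5d} s  T_metal = {sol.sol(t)[0]:6.1f} K")
```

The metal lags the gas ramp by the time constant m_cp/UA (200 s here); it is this lag, element by element, that drives the start-up thermal stresses the abstract refers to.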


A decision support tool for irrigation infrastructure investments

IRRIGATION AND DRAINAGE, Issue 4 2010
Shahbaz Khan
decision support tool; water management; seasonal and long-term investments; optimisation; simulation; cost-benefit analysis; whole farm; water trading; water saving

Abstract Increasing water scarcity, climate change and pressure to provide water for environmental flows urge irrigators to be more efficient. In Australia, ongoing water reforms and most recently the National Water Security Plan offer incentives to irrigators to adjust their farming practices by adopting water-saving irrigation infrastructure to match soil, crop and climatic conditions. WaterWorks is a decision support tool that helps irrigators make long- and short-term irrigation infrastructure investment decisions at the farm level. It helps them improve the economic efficiency, water use efficiency and environmental performance of their farm businesses. WaterWorks has been tested, validated and accepted by the irrigation community and researchers in NSW, Australia. The interface of WaterWorks is user-friendly and flexible. The simulation and optimisation module in WaterWorks provides an opportunity to evaluate infrastructure investment decisions against seasonal or long-term water availability. The sensitivity analysis allows the impact of the major variables to be substantiated. Net present value, internal rate of return, benefit-cost ratio and payback period are used to analyse the costs and benefits of modern irrigation technology. Application of WaterWorks in a whole-farm case study demonstrates its effectiveness for long- and short-term investment decisions. WaterWorks can be easily integrated with commercial software such as spreadsheets, GIS, and real-time data acquisition and control systems to further enhance its usability. WaterWorks can also be used in regional development planning. Copyright © 2009 John Wiley & Sons, Ltd. [source]
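
The four investment criteria named in the abstract are standard; a compact sketch (ours, with made-up cash flows for a hypothetical drip-irrigation upgrade) shows how each is computed from a yearly cash-flow series.

```python
import numpy as np

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the year-0 (investment) flow."""
    return sum(cf / (1 + rate)**t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-8):
    """Internal rate of return by bisection, assuming NPV changes sign
    between lo and hi (true for an investment followed by net inflows)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def payback_period(cashflows):
    """First year in which cumulative (undiscounted) cash flow turns positive."""
    cum = np.cumsum(cashflows)
    years = np.nonzero(cum >= 0)[0]
    return int(years[0]) if years.size else None

# Hypothetical farm-level cash flows: upgrade cost, then annual water savings
flows = [-120_000] + [28_000] * 8
rate = 0.07
benefits = npv(rate, [0] + flows[1:])      # PV of the positive flows only
costs = -flows[0]
print(f"NPV     = {npv(rate, flows):10.0f}")
print(f"IRR     = {irr(flows):10.2%}")
print(f"B/C     = {benefits / costs:10.2f}")
print(f"Payback = year {payback_period(flows)}")
```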


MONTE CARLO SIMULATION OF FAR INFRARED RADIATION HEAT TRANSFER: THEORETICAL APPROACH

JOURNAL OF FOOD PROCESS ENGINEERING, Issue 4 2006
F. TANAKA
ABSTRACT We developed radiation heat transfer models combining the Monte Carlo (MC) method with a computational fluid dynamics approach, together with two-dimensional heat transfer models based on the fundamental quantum physics of radiation and fluid dynamics. We investigated far infrared radiation (FIR) heating in laminar and buoyant airflow. A simple prediction model for laminar airflow was tested against an analytical solution and commercial software (CFX 4). The adequate number of photon tracks for the MC simulation was established. For the complex design model, the predicted results agreed well with the experimental data, with a root mean square error of 3.8 K. Because public concern about food safety is increasing, we applied this model to the prediction of thermal inactivation levels by coupling it with a microbial kinetics model. Under buoyant airflow conditions, the uniformity of FIR heating was improved by selecting an adequate wall temperature and emissivity. [source]
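
The Monte Carlo core of such an FIR model is photon-bundle ray tracing. As a minimal, self-contained analogue (ours, not the paper's code), the sketch below estimates the view factor from a strip heater to a parallel target by emitting diffuse photons and counting hits; sweeping the photon count is also how one checks for an "adequate number of photon tracks".

```python
import numpy as np

rng = np.random.default_rng(3)

def view_factor_mc(n_photons):
    """View factor from the strip y = 0, 0 <= x <= 1 to the parallel strip
    y = 1, 0 <= x <= 1, by diffuse photon emission (2D analogue)."""
    x0 = rng.uniform(0.0, 1.0, n_photons)                 # emission points
    theta = np.arcsin(rng.uniform(-1.0, 1.0, n_photons))  # cosine-weighted angle
    x_hit = x0 + np.tan(theta)                            # crossing of y = 1
    return np.mean((x_hit >= 0.0) & (x_hit <= 1.0))

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{n:>9,d} photon tracks -> F = {view_factor_mc(n):.4f}")
```

Against the exact crossed-strings value for this geometry, √2 − 1 ≈ 0.414, the printed estimates show the statistical error shrinking as the number of photon tracks grows.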


In vivo measurements of T1 relaxation times in mouse brain associated with different modes of systemic administration of manganese chloride

JOURNAL OF MAGNETIC RESONANCE IMAGING, Issue 4 2005
Yu-Ting Kuo MD
Abstract Purpose To measure regional T1 and T2 values for normal C57Bl/6 mouse brain and changes in T1 after systemic administration of manganese chloride (MnCl2) at 9.4 T. Materials and Methods C57Bl/6 mice were anesthetized, and baseline T1 and T2 measurements were obtained prior to measuring T1 after administration of MnCl2 at 9.4 T. MnCl2 was administered systemically by the intravenous (IV), intraperitoneal (IP), or subcutaneous (SC) route. T1 and T2 maps for each MRI transverse slice were generated using commercial software, and T1 and T2 values of white matter (WM), gray matter (GM), pituitary gland, and lateral ventricle were obtained. Results When compared with baseline values at low field, significant lengthening of the T1 values was shown at 9.4 T, while no significant change was seen for T2 values. Significant T1 shortening of the normal mouse brain was observed following IV, IP, and SC administration of MnCl2, with IV and IP showing similar acute effects. Significant decreases in T1 values were seen for the pituitary gland and the ventricles 15 minutes after either IV or IP injection. GM showed greater uptake of the contrast agent than WM at 15 and 45 minutes after either IV or IP injection. Although both structures are within the blood-brain barrier (BBB), GM and WM revealed a steady decrease in T1 values at 24 and 72 hours after MnCl2 injection regardless of the route of administration. Conclusion Systemic administration of MnCl2 by IV and IP routes induced a similar time course of T1 changes in different regions of the mouse brain. Acute effects of MnCl2 administration were mainly influenced by either the presence or absence of the BBB. SC injection also provided a significant T1 change at the subacute stage after MnCl2 administration. J. Magn. Reson. Imaging 2005;21:334-339. © 2005 Wiley-Liss, Inc. [source]
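
T1 maps of the kind described are produced by fitting a relaxation model voxel by voxel. The sketch below is our stand-in for the unnamed commercial software: it fits an assumed saturation-recovery model S(TR) = S0·(1 − exp(−TR/T1)) to synthetic multi-TR signals for one voxel; a full map repeats the fit per voxel.

```python
import numpy as np
from scipy.optimize import curve_fit

def sat_recovery(TR, S0, T1):
    """Saturation-recovery signal model (assumed acquisition scheme)."""
    return S0 * (1.0 - np.exp(-TR / T1))

# Synthetic acquisition: signals at several repetition times (TR, in ms)
TR = np.array([100.0, 300.0, 600.0, 1200.0, 2500.0, 5000.0])
rng = np.random.default_rng(4)
T1_true, S0_true = 1900.0, 1.0        # illustrative high-field tissue values
signal = sat_recovery(TR, S0_true, T1_true) + 0.01 * rng.standard_normal(TR.size)

popt, _ = curve_fit(sat_recovery, TR, signal, p0=(signal.max(), 1000.0))
print(f"fitted S0 = {popt[0]:.3f}, fitted T1 = {popt[1]:.0f} ms")
```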


Validation of theory of n-column separations with gas chromatograms predicted by commercial software

JOURNAL OF SEPARATION SCIENCE, JSS, Issue 1 2007
Yuping Williamson
Abstract A probability theory for the average number of compounds resolved by the partial separation of complex mixtures on n columns was tested using commercial-software predictions of gas chromatograms. Such n-column separations are traditional means for addressing peak overlap, in which one chooses additional columns of different selectivity to separate compounds that cannot be separated by a single column. Gas chromatograms of five types of complex mixtures containing from 99 to 283 compounds were predicted for eight stationary phases using both optimized and other temperature programs. The number n of columns for different mixtures varied from 2 to 5. The numbers of compounds separated as singlet peaks at different resolution thresholds were compared to predictions, as evaluated with point-process statistical-overlap theory based on a Poisson distribution. A good agreement between theory and results was found in all cases corresponding to low saturation. Both good and poor agreements were found for cases corresponding to high saturation. A good agreement also was found for results based on resolving complex mixtures by a single column subject to two temperature programs. The moments and distribution of the number of resolved compounds were computed by Monte Carlo simulation, thus gauging the significance of departures between results and theory. The potential of such simulations to explore the limitations of theory was briefly investigated. [source]
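
The point-process prediction being tested can itself be simulated in a few lines. Under the Poisson model, m peak positions fall at random on the separation axis, and a compound is resolved as a singlet when no neighbour lies within a critical spacing set by the resolution threshold. The sketch below is our illustration of the standard Davis-Giddings result, not the authors' software; it compares the simulated singlet count against the theoretical m·exp(−2α), with saturation α = m·x0/X, which holds approximately for large m.

```python
import numpy as np

rng = np.random.default_rng(5)

def count_singlets(m, X, x0, trials=2000):
    """Average number of peaks whose nearest neighbour is farther than x0."""
    counts = []
    for _ in range(trials):
        pos = np.sort(rng.uniform(0.0, X, m))      # random peak positions
        gaps = np.diff(pos)
        left = np.concatenate(([np.inf], gaps))    # gap to left neighbour
        right = np.concatenate((gaps, [np.inf]))   # gap to right neighbour
        counts.append(np.sum((left > x0) & (right > x0)))
    return np.mean(counts)

m, X, x0 = 150, 100.0, 0.2           # components, axis length, critical spacing
alpha = m * x0 / X                   # saturation
print(f"simulated singlets  : {count_singlets(m, X, x0):6.1f}")
print(f"theory  m*exp(-2a)  : {m * np.exp(-2 * alpha):6.1f}")
```

Raising the saturation α in this toy model reproduces the regime where, as the abstract reports, agreement between theory and chromatogram-based counts starts to break down.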


Kinetics and Molecular Weight Development of Dithiolactone-Mediated Radical Polymerization of Styrene

MACROMOLECULAR REACTION ENGINEERING, Issue 4 2009
Jesús Guillermo Soriano-Moro
Abstract Calculations of polymerization kinetics and molecular weight development in the dithiolactone-mediated polymerization of styrene at 60 °C, using 2,2′-azobisisobutyronitrile (AIBN) as initiator and γ-phenyl-γ-butyrodithiolactone (DTL1) as controller, are presented. The calculations were based on a polymerization mechanism involving the persistent radical effect, considering reverse addition only, implemented in the PREDICI® commercial software. Kinetic rate constants for the reverse addition step were estimated. The equilibrium constant (K = k_add/k_-add) fell into the range of 10⁵-10⁶ L·mol⁻¹. Fairly good agreement between model calculations and experimental data was obtained. [source]


Dual-frequency and dual-polarization multilayer microstrip antenna element

MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 4 2004
Georgios Ch.
Abstract A dual-frequency and dual-linear polarization multilayer microstrip antenna element for array-antenna applications is presented. The performance of the element was computed using commercial software based on the finite element method (FEM). A prototype with dimensions based on the simulations was built and tested. Good agreement between the measured and numerical results was obtained. © 2004 Wiley Periodicals, Inc. Microwave Opt Technol Lett 42: 311-315, 2004; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.20288 [source]


Predicting the Number of Defects Remaining In Operational Software

NAVAL ENGINEERS JOURNAL, Issue 1 2001
P. J. Hartman, Ph.D.
ABSTRACT Software is becoming increasingly critical to the Fleet as more and more commercial off-the-shelf (COTS) programs are being introduced in operating systems and applications. Program managers need to specify, contract, and manage the development and integration of software for warfare systems, condition-based monitoring, propulsion control, parts requisition, and shipboard administration. The intention here is to describe the state of the art in Software Reliability Engineering (SRE) and defect prediction for commercial and military programs. The information presented here is based on data from the commercial software industry and shipboard program development. The strengths and weaknesses of four failure models are compared using these cases. The Logarithmic Poisson Execution Time (LPET) model best fits the data and satisfies the fundamental principles of reliability theory. The paper presents the procedures for defining software failures, tracking defects, and making spreadsheet predictions of the defects still remaining in the software after it has been deployed. Rules of thumb for the number of defects in commercial software and the relative expense required to fix these errors are provided for perspective. [source]
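
The LPET model singled out here has a closed form for the expected cumulative failures, μ(τ) = ln(λ0·θ·τ + 1)/θ, with initial failure intensity λ0 and decay parameter θ. A spreadsheet-style sketch (ours, with invented failure counts) fits the two parameters and extrapolates the failures expected over a further operating window, one way of quantifying "defects still remaining".

```python
import numpy as np
from scipy.optimize import curve_fit

def lpet_mu(tau, lam0, theta):
    """Expected cumulative failures under the Logarithmic Poisson
    Execution Time model: mu(tau) = ln(lam0*theta*tau + 1)/theta."""
    return np.log(lam0 * theta * tau + 1.0) / theta

# Invented test data: cumulative failures vs execution time (hours)
tau = np.array([10, 25, 50, 100, 200, 400, 800.0])
failures = np.array([8, 15, 22, 30, 38, 46, 54.0])

popt, _ = curve_fit(lpet_mu, tau, failures, p0=(1.0, 0.05),
                    bounds=(1e-6, np.inf))
lam0, theta = popt

# Failures expected in the next 1000 hours of operation
remaining = lpet_mu(1800.0, *popt) - lpet_mu(800.0, *popt)
print(f"lam0 = {lam0:.3f}/h, theta = {theta:.4f}")
print(f"expected failures in next 1000 h: {remaining:.1f}")
```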


Dynamic drift-diffusion simulation of InP/InGaAs SAGCM APD

PHYSICA STATUS SOLIDI (C) - CURRENT TOPICS IN SOLID STATE PHYSICS, Issue 5 2007
Y. G. Xiao
Abstract In this work, InP/InGaAs separate absorption, grading, charge and multiplication (SAGCM) APDs for high bit-rate operation have been modeled using the advanced drift-diffusion model of the commercial software Crosslight APSYS. Basic physical quantities such as the band diagram, optical absorption and generation are calculated. Performance characteristics such as dark current and photocurrent, photoresponsivity, multiplication gain, breakdown voltage, excess noise, frequency response and bandwidth are simulated. The modeling results are selectively presented and analyzed, and some results are compared with experiments. Device design optimization issues are further discussed with respect to the applicable features of Crosslight APSYS within the framework of drift-diffusion theory. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Refined Rank Regression Method with Censors

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 7 2004
Wendai Wang
Abstract Reliability engineers often face failure data with suspensions. The rank regression method with an approach introduced by Johnson has been commonly used to handle data with suspensions in engineering practice and commercial software. However, the Johnson method makes only partial use of suspension information: the positions of suspensions, not the exact times to suspension. A new approach for rank regression with censored data is proposed in this paper, which makes full use of suspension information. Taking advantage of the parametric approach, the refined rank regression obtains the 'exact' mean order number for each failure point in the sample. With the 'exact' mean order number, the proposed method gives the 'best' fit to sample data for an assumed times-to-failure distribution. This refined rank regression is simple to implement and appears to have good statistical and convergence properties. An example is provided to illustrate the proposed method. Copyright © 2004 John Wiley & Sons, Ltd. [source]
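
The Johnson adjustment being refined here is easy to state in code. Each failure's mean order number (MON) advances by the increment I = (N + 1 − previous MON) / (1 + number of units beyond the present suspended set). The sketch below (ours) implements this classic position-only rule, which by construction ignores the exact suspension times the paper's refinement exploits.

```python
def johnson_mean_order_numbers(events):
    """events: list of (time, 'F' or 'S') tuples; returns [(time, MON)]
    for failures. Johnson's rank adjustment uses only the positions of
    suspensions, not their exact times."""
    events = sorted(events)
    N = len(events)
    prev, out = 0.0, []
    for i, (t, kind) in enumerate(events):
        if kind == 'F':
            # N - i = units beyond the present suspended set (this one and later)
            increment = (N + 1 - prev) / (1 + (N - i))
            prev += increment
            out.append((t, prev))
    return out

def benard_median_rank(mon, N):
    """Benard's approximation to the median rank for plotting positions."""
    return (mon - 0.3) / (N + 0.4)

data = [(10, 'F'), (20, 'S'), (30, 'F'), (40, 'F')]   # times in hours
N = len(data)
for t, mon in johnson_mean_order_numbers(data):
    print(f"t = {t:3.0f} h  MON = {mon:.3f}  "
          f"median rank = {benard_median_rank(mon, N):.3f}")
```

The refined method of the paper would replace these position-based MONs with values informed by the suspension times through the assumed distribution; the median ranks then feed the usual rank regression line fit.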


2161: Development of a next-generation sequencing platform for retinal dystrophies, with LCA and RP as proof of concept

ACTA OPHTHALMOLOGICA, Issue 2010
F COPPIETERS
Purpose Retinal dystrophies represent an emerging group of hereditary disorders that lead to degeneration of the photoreceptors and/or the retinal pigment epithelium, resulting in irreversible blindness. They are genetically complex, with over 200 disease loci identified so far. Current genetic screening consists of microarray analysis (Asper Ophthalmics) for the most recurrent mutations, followed by Sanger sequencing. However, the high cost and low throughput of the latter technology limit testing to only the most recurrent genes. This project aims to develop a high-throughput and cost-effective platform for screening of all known disease genes for Leber congenital amaurosis (LCA) and retinitis pigmentosa (RP), using next-generation sequencing (NGS) technology. Methods An NGS panel will be developed for all 16 and 47 known LCA and RP genes, respectively, including coding and untranslated regions, regulatory regions and microRNA binding sites. The protocol will consist of the following steps: 1) high-throughput primer design and qPCR, 2) ligation, 3) shearing and 4) sequencing on the Illumina Genome Analyser IIx (GAIIx). This innovative protocol overcomes the need for the short amplicons otherwise required by the short-read output of the GAIIx. This sequencing instrument was chosen because of its high capacity, low cost per base and the absence of interpretation problems at homopolymeric regions. Analysis of the variants will be performed using in-house developed and commercial software, which ranks all variants according to their pathogenic potential. Conclusion Using the proposed protocol, comprehensive screening for all known disease genes for LCA and RP will be available for the first time. [source]