Exact: Selected Abstracts

Exact and Robust (Self-)Intersections for Polygonal Meshes
COMPUTER GRAPHICS FORUM, Issue 2 2010. Marcel Campen
Abstract: We present a new technique to implement operators that modify the topology of polygonal meshes at intersections and self-intersections. Depending on the modification strategy, this effectively results in operators for Boolean combinations or for the construction of outer hulls that are suited for mesh repair tasks and accurate mesh-based front tracking of deformable materials that split and merge. By combining an adaptive octree with nested binary space partitions (BSP), we can guarantee exactness (i.e., correctness) and robustness (i.e., completeness) of the algorithm while still achieving higher performance and lower memory consumption than previous approaches. The efficiency and scalability in terms of runtime and memory are obtained by an operation localization scheme: we restrict the essential computations to those cells in the adaptive octree where intersections actually occur. Within those critical cells, we convert the input geometry into a plane-based BSP representation, which allows us to perform all computations exactly even with fixed-precision arithmetic. We carefully analyze the precision requirements of the involved geometric data and predicates in order to guarantee correctness, and we show how minimal quantization of the input mesh can be used to safely rely on computations with standard floating-point numbers. We thoroughly evaluate our method with respect to precision, robustness, and efficiency. [source]

Fast, Exact, Linear Booleans
COMPUTER GRAPHICS FORUM, Issue 5 2009. Gilbert Bernstein
Abstract: We present a new system for robustly performing Boolean operations on linear, 3D polyhedra. Our system is exact, meaning that all internal numeric predicates are exactly decided in the sense of exact geometric computation.
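Exact geometric computation, as invoked in the abstracts above, typically reduces to deciding the sign of a small determinant without rounding error. A minimal illustrative sketch (not either paper's implementation) using Python's exact rational arithmetic:

```python
from fractions import Fraction

def orient2d(a, b, c):
    """Sign of twice the signed area of triangle (a, b, c):
    +1 counter-clockwise, -1 clockwise, 0 collinear.
    Computed with exact rationals, so the predicate never misclassifies
    near-degenerate inputs the way floating point can."""
    ax, ay = map(Fraction, a)
    bx, by = map(Fraction, b)
    cx, cy = map(Fraction, c)
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (det > 0) - (det < 0)

print(orient2d((0, 0), (1, 0), (0, 1)))   # CCW triangle -> 1
print(orient2d((0, 0), (1, 1), (2, 2)))   # collinear -> 0
```

The same idea scales to the handful of 3D predicates such systems rely on; only the determinant changes.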
Our BSP-tree-based system is 16–28× faster at performing iterative computations than CGAL's Nef Polyhedra-based system, the current best practice in robust Boolean operations, while being only twice as slow as the non-robust modeler Maya. Meanwhile, we achieve a much smaller substrate of geometric subroutines than previous work, comprising only four predicates, a convex polygon constructor, and a convex polygon splitting routine. The use of a BSP-tree-based Boolean algorithm atop this substrate allows us to explicitly handle all geometric degeneracies without treating a large number of cases. [source]

Exact results in a non-supersymmetric gauge theory
FORTSCHRITTE DER PHYSIK/PROGRESS OF PHYSICS, Issue 6-7 2004. A. Armoni
Abstract: We consider non-supersymmetric large-N orientifold field theories. Specifically, we discuss a gauge theory with a Dirac fermion in the antisymmetric tensor representation. We argue that, at large N and in a large part of its bosonic sector, this theory is non-perturbatively equivalent to N = 1 SYM, so that exact results established in the latter (parent) theory also hold in the daughter orientifold theory. In particular, the non-supersymmetric theory has an exactly calculable bifermion condensate, exactly degenerate parity doublets, and a vanishing cosmological constant (all this to leading order in 1/N). [source]

Output-feedback co-ordinated decentralized adaptive tracking: The case of MIMO subsystems with delayed interconnections
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 8 2005. Boris M. Mirkin
Abstract: Exact decentralized output-feedback Lyapunov-based designs of direct model reference adaptive control (MRAC) for linear interconnected delay systems with MIMO subsystems are introduced. The design process uses a co-ordinated decentralized structure of adaptive control with reference model co-ordination, which requires an exchange of signals between the different reference models.
It is shown that in the framework of reference model co-ordination, zero residual tracking error is possible, exactly as in the case of SISO subsystems. We develop decentralized MRAC on the basis of a priori information about only the local subsystem gain frequency matrices, without additional a priori knowledge about the full system gain frequency matrix. To achieve better adaptation performance, we propose proportional, integral time-delayed adaptation laws. An appropriate Lyapunov–Krasovskii type functional is suggested to design the update mechanism for the controller parameters and to prove stability. Two different adaptive DMRAC schemes are proposed; these are the first asymptotically exact zero-tracking results for linear interconnected delay systems with MIMO subsystems. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Inverse filtering and deconvolution
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 2 2001. Ali Saberi
Abstract: This paper studies the so-called inverse filtering and deconvolution problem from different angles. To start with, both exact and almost deconvolution problems are formulated, and the necessary and sufficient conditions for their solvability are investigated. Exact and almost deconvolution problems seek filters that can estimate the unknown inputs of the given plant or system either exactly or almost exactly, whatever the unintended or disturbance inputs (such as measurement noise, external disturbances, and model uncertainties) acting on the system. As such, they require strong solvability conditions. To alleviate this, several optimal and suboptimal deconvolution problems are formulated and studied.
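As a toy illustration of the deconvolution task discussed above, recovering an unknown input from a measured output, here is a minimal least-squares sketch (not taken from the paper): build the convolution matrix of a known impulse response and invert it in the least-squares sense.

```python
import numpy as np

def deconvolve_lstsq(y, h, n):
    """Estimate an unknown length-n input u from the measured output
    y = h * u (discrete convolution) by solving min ||H u - y||_2,
    where H is the convolution (Toeplitz) matrix of the impulse response h."""
    m = len(h) + n - 1
    H = np.zeros((m, n))
    for j in range(n):
        H[j:j + len(h), j] = h          # column j is h shifted down by j
    u_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
    return u_hat

h = np.array([1.0, 0.5, 0.25])          # known plant impulse response
u = np.array([0.0, 2.0, -1.0, 3.0])     # unknown input (ground truth)
y = np.convolve(h, u)                   # noise-free measurement
print(np.allclose(deconvolve_lstsq(y, h, len(u)), u))  # True in the exact case
```

In the noise-free case the exact problem is solvable and the least-squares estimate recovers the input; with disturbances added to y, the same machinery yields only an optimal (H2-style) estimate, mirroring the distinction the abstract draws.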
These problems seek filters that can estimate the unknown inputs of the given system exactly, almost exactly, or optimally in the absence of unintended (disturbance) inputs; in the presence of unintended (disturbance) inputs, on the other hand, they seek to make the influence of such disturbances on the estimation error as small as possible in a certain norm (H2 or H∞) sense. Both continuous- and discrete-time systems are considered. For discrete-time systems, the counterparts of all the above problems when an ℓ-step delay in estimation is present are introduced and studied. Next, we focus on exact and almost deconvolution, but this time when the uncertainties in the plant dynamics can be structurally modeled by a Δ-block as a feedback element to the nominally known plant dynamics. This is done either in the presence or absence of external disturbances. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Expression of metalloproteinases and their tissue inhibitors in inflamed gingival biopsies
JOURNAL OF PERIODONTAL RESEARCH, Issue 5 2008. L. D. R. Gonçalves
Objectives: Matrix metalloproteinases (MMPs) and their tissue inhibitors (TIMPs) are known to be involved in the periodontal disease process. Results of in vivo MMP and TIMP gene expression in the gingiva, though, are still controversial. In the present study, we compared the gene expression of MMP-1, -2, -9, -13 and TIMP-1, -2 in healthy and inflamed gingiva. Methods: 38 gingival samples were collected from gingivitis (n = 10), advanced chronic periodontitis (n = 10), generalized aggressive periodontitis (n = 8) and periodontally healthy (n = 10) individuals. Total RNA isolated from those samples was subjected to reverse transcription followed by amplification by polymerase chain reaction (RT-PCR). Products were visualized in agarose gels and quantified by optical densitometry.
Samples were also processed for gelatin zymography and Western blotting for MMP-2 and MMP-9 in order to assess post-transcriptional MMP regulation at the protein level. Results: The frequencies and levels of transcripts encoding MMPs and TIMPs were not significantly different among groups (p > 0.05, Fisher's exact and Kruskal–Wallis tests). There is a trend towards higher MMP-2 and -9 gelatinase activities in the inflamed samples, although not statistically significant. In contrast, zymography and Western blotting studies show that MMP-2 is virtually absent in the chronic periodontitis group. Conclusion: These results could reflect a complex regulation of MMP and TIMP gene expression in the course of gingival inflammation. They also reveal great biological diversity, even among individuals with similar periodontal status. [source]

Exact and computationally efficient likelihood-based estimation for discretely observed diffusion processes (with discussion)
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 3 2006. Alexandros Beskos
Summary: The objective of the paper is to present a novel methodology for likelihood-based inference for discretely observed diffusions. We propose Monte Carlo methods, which build on recent advances in the exact simulation of diffusions, for performing maximum likelihood and Bayesian estimation. [source]

Maximum Likelihood Estimation for a First-Order Bifurcating Autoregressive Process with Exponential Errors
JOURNAL OF TIME SERIES ANALYSIS, Issue 6 2005. J. Zhou
Abstract: Exact and asymptotic distributions of the maximum likelihood estimator of the autoregressive parameter in a first-order bifurcating autoregressive process with exponential innovations are derived. The limit distributions for the stationary, critical and explosive cases are unified via a single pivot using a random normalization.
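Several abstracts on this page report Fisher's exact and Kruskal–Wallis group comparisons of the kind used in the gingival study above. A minimal sketch with scipy; every count and measurement below is made up purely for illustration:

```python
from scipy import stats

# Fisher's exact test on a 2x2 table of detection frequencies
# (detected / not detected in two hypothetical groups)
table = [[9, 1],   # e.g. healthy: 9 of 10 samples positive
         [7, 3]]   # e.g. periodontitis: 7 of 10 samples positive
odds_ratio, p_fisher = stats.fisher_exact(table)

# Kruskal-Wallis test on densitometry levels (arbitrary units)
# across three hypothetical groups
healthy = [1.2, 0.9, 1.1, 1.3]
gingivitis = [1.5, 1.8, 1.4, 1.6]
periodontitis = [1.7, 2.1, 1.9, 1.6]
h_stat, p_kw = stats.kruskal(healthy, gingivitis, periodontitis)

print(f"Fisher p = {p_fisher:.3f}, Kruskal-Wallis p = {p_kw:.4f}")
```

Fisher's exact test is preferred over chi-square for small per-cell counts like these, which is why it recurs throughout the clinical abstracts on this page.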
The pivot is shown to be asymptotically exponential for all values of the autoregressive parameter. [source]

Exact expected values of variance estimators for simulation
NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 4 2007. Tûba Aktaran-Kalayc
Abstract: We formulate exact expressions for the expected values of selected estimators of the variance parameter (that is, the sum of covariances at all lags) of a steady-state simulation output process. Given in terms of the autocovariance function of the process, these expressions are derived for variance estimators based on the simulation analysis methods of nonoverlapping batch means, overlapping batch means, and standardized time series. Comparing estimator performance in a first-order autoregressive process and the M/M/1 queue-waiting-time process, we find that certain standardized time series estimators outperform their competitors as the sample size becomes large. © 2007 Wiley Periodicals, Inc. Naval Research Logistics, 2007 [source]

Exact and approximative algorithms for coloring G(n,p)
RANDOM STRUCTURES AND ALGORITHMS, Issue 3 2004. Amin Coja-Oghlan
We investigate the problem of coloring random graphs G(n, p) in polynomial expected time. For the case p ≤ 1.01/n, we present an algorithm that finds an optimal coloring in linear expected time. For p ≥ ln^6(n)/n, we give algorithms which approximate the chromatic number within a factor of O( ). We also obtain an O( /ln(np))-approximation algorithm for the independence number. As an application, we propose an algorithm for deciding satisfiability of random 2k-SAT formulas over n propositional variables with at least ln^7(n)·n^k clauses in polynomial expected time. © 2004 Wiley Periodicals, Inc. Random Struct. Alg., 2004 [source]

Exact, Distribution Free Confidence Intervals for Late Effects in Censored Matched Pairs
BIOMETRICAL JOURNAL, Issue 1 2009. Shoshana R. Daniel
Abstract: When comparing censored survival times for matched treated and control subjects, a late effect on survival is one that does not begin to appear until some time has passed. In a study of provider specialty in the treatment of ovarian cancer, a late divergence in the Kaplan–Meier survival curves hinted at superior survival among patients of gynecological oncologists, who employ chemotherapy less intensively, when compared to patients of medical oncologists, who employ chemotherapy more intensively; we ask whether this late divergence should be taken seriously. Specifically, we develop exact permutation tests, and exact confidence intervals formed by inverting the tests, for late effects in matched pairs subject to random but heterogeneous censoring. Unlike other exact confidence intervals with censored data, the proposed intervals do not require knowledge of censoring times for patients who die. Exact distributions are consequences of two results about signs, signed ranks, and their conditional independence properties. One test, the late effects sign test, has the binomial distribution; the other, the late effects signed rank test, uses nonstandard ranks but nonetheless has the same exact distribution as Wilcoxon's signed rank test. A simulation shows that the late effects signed rank test has substantially more power to detect late effects than do conventional tests. The confidence statement provides information about both the timing and magnitude of late effects. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Out-of-core compression and decompression of large n-dimensional scalar fields
COMPUTER GRAPHICS FORUM, Issue 3 2003. Lawrence Ibarria
We present a simple method for compressing very large and regularly sampled scalar fields. Our method is particularly attractive when the entire data set does not fit in memory and when the sampling rate is high relative to the feature size of the scalar field in all dimensions. Although we report results for and data sets, the proposed approach may be applied to higher dimensions. The method is based on the new Lorenzo predictor, introduced here, which estimates the value of the scalar field at each sample from the values at processed neighbors. The predicted values are exact when the n-dimensional scalar field is an implicit polynomial of degree n − 1. Surprisingly, when the residuals (differences between the actual and predicted values) are encoded using arithmetic coding, the proposed method often outperforms wavelet compression in an L∞ sense. The proposed approach may be used both for lossy and lossless compression and is well suited for out-of-core compression and decompression, because a trivial implementation, which sweeps through the data set reading it once, requires maintaining only a small buffer in core memory, whose size barely exceeds a single (n − 1)-dimensional slice of the data.
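The Lorenzo predictor described above estimates each sample from its already-processed neighbors on the corners of a unit hypercube, with signs alternating by corner parity; in 2D it reduces to the parallelogram rule pred = left + top - top-left. A small sketch (my paraphrase of that rule, not the authors' code):

```python
import numpy as np
from itertools import product

def lorenzo_predict(data, idx):
    """Predict data[idx] from the processed corner neighbors of the unit
    hypercube 'below' idx: the neighbor at offset o (each o_k in {0,1},
    o != 0) contributes with sign (-1)**(sum(o) + 1).  In 2D this is
    left + top - top-left."""
    n = len(idx)
    pred = 0.0
    for off in product((0, 1), repeat=n):
        if not any(off):
            continue                      # skip the sample itself
        neighbor = tuple(i - o for i, o in zip(idx, off))
        if min(neighbor) < 0:
            return 0.0                    # border: incomplete neighborhood
        pred += (-1) ** (sum(off) + 1) * data[neighbor]
    return pred

# For a polynomial field of degree <= n-1 (here a linear field in 2D,
# degree 1 = n-1) the prediction is exact, so the residual is zero.
x, y = np.mgrid[0:4, 0:4]
field = 2.0 * x + 3.0 * y + 1.0
print(field[2, 2] - lorenzo_predict(field, (2, 2)))  # residual 0.0
```

A compressor would sweep the grid in scan order and entropy-code only these residuals, which is what keeps the in-core buffer down to roughly one (n − 1)-dimensional slice.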
Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Compression, scalar fields, out-of-core. [source]

A Prospective Comparison of Ultrasound-guided and Blindly Placed Radial Arterial Catheters
ACADEMIC EMERGENCY MEDICINE, Issue 12 2006. Stephen Shiver MD
Abstract: Background: Arterial cannulation for continuous blood-pressure measurement and frequent arterial-blood sampling is commonly required in critically ill patients. Objectives: To compare ultrasound (US)-guided versus traditional palpation placement of arterial lines for time to placement, number of attempts, sites used, and complications. Methods: This was a prospective, randomized interventional study at a Level 1 academic urban emergency department with an annual census of 78,000 patients. Patients were randomized to either the palpation or the US-guided group. Inclusion criteria were any adult patient who required an arterial line according to the treating attending. Patients who had previous attempts at an arterial line during the visit, or who could not be randomized because of time constraints, were excluded. Enrollment was on a convenience basis, during hours worked by researchers over a six-month period. Patients in either group who had three failed attempts were rescued with the other technique for patient comfort. Statistical analysis included Fisher's exact, Mann-Whitney, and Student's t-tests. Results: Sixty patients were enrolled, with 30 patients randomized to each group. Patients randomized to the US group had a shorter time required for arterial line placement (107 vs. 314 seconds; difference, 207 seconds; p = 0.0004), fewer placement attempts (1.2 vs. 2.2; difference, 1; p = 0.001), and fewer sites required for successful line placement (1.1 vs. 1.6; difference, 0.5; p = 0.001), as compared with the palpation group.
Conclusions: In this study, US guidance for arterial cannulation was successful more frequently and took less time to establish the arterial line than the palpation method. [source]

Verification of the 2D Tokamak Edge Modelling Codes for Conditions of Detached Divertor Plasma
CONTRIBUTIONS TO PLASMA PHYSICS, Issue 3-5 2010. V. Kotov
Abstract: The paper discusses verification of the ITER edge modelling code SOLPS 4.3 (B2-EIRENE). Results of the benchmark against SOLPS 5.0 are shown for standard JET test cases. Special two-point formulas are employed in SOLPS 4.3 to analyze the results of numerical simulations. The applied relations are exact in the frame of the equations solved by the B2 code. This enables a simultaneous check of the parallel momentum and energy balances and boundary conditions. Transition to divertor detachment is analyzed quantitatively as it appears in the simulations in terms of the coupled momentum and energy balance. (© 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Approximate analysis methods for asymmetric plan base-isolated buildings
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 1 2002. Keri L. Ryan
Abstract: An approximate method for linear analysis of asymmetric-plan, multistorey buildings is specialized for a single-storey, base-isolated structure. To find the mode shapes of the torsionally coupled system, the Rayleigh–Ritz procedure is applied using the torsionally uncoupled modes as Ritz vectors. This approach reduces to analysis of two single-storey systems, each with vibration properties and eccentricities (labelled 'effective eccentricities') similar to corresponding properties of the isolation system or the fixed-base structure. With certain assumptions, the vibration properties of the coupled system can be expressed explicitly in terms of these single-storey system properties.
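The Rayleigh–Ritz reduction used above, projecting a coupled eigenproblem onto a few Ritz vectors, can be sketched generically. The 2-DOF matrices below are synthetic illustrations, not the paper's building model; the off-diagonal stiffness term plays the role of an eccentricity coupling:

```python
import numpy as np
from scipy.linalg import eigh

# Toy torsionally coupled single-storey system: K couples a lateral and
# a torsional DOF through an eccentricity-like term (values illustrative).
M = np.diag([1.0, 1.0])
K = np.array([[4.0, 0.6],
              [0.6, 9.0]])

# Ritz vectors: the torsionally *uncoupled* modes (unit vectors here)
Phi = np.eye(2)

# Project onto the Ritz basis and solve the reduced eigenproblem
K_r = Phi.T @ K @ Phi
M_r = Phi.T @ M @ Phi
w2, modes = eigh(K_r, M_r)        # generalized problem K_r v = w^2 M_r v
print(np.sqrt(w2))                # approximate coupled natural frequencies
```

With the full uncoupled basis, as here, the projection is exact; the paper's approximations come from truncating the basis and simplifying the resulting effective eccentricities.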
Three different methods are developed: the first is a direct application of the Rayleigh–Ritz procedure; the second and third use simplifications of the effective eccentricities, assuming a relatively stiff superstructure. The accuracy of these proposed methods and of the rigid structure method in determining responses is assessed for a range of system parameters including eccentricity and structure flexibility. For a subset of systems with equal isolation and structural eccentricities, two of the methods are exact and the third is sufficiently accurate; all three are preferred to the rigid structure method. For systems with zero isolation eccentricity, however, all approximate methods considered are inconsistent and should be applied with caution, only to systems with small structural eccentricities or stiff structures. Copyright © 2001 John Wiley & Sons, Ltd. [source]

On-line identification of non-linear hysteretic structural systems using a variable trace approach
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 9 2001. Jeng-Wen Lin
Abstract: In this paper, an adaptive on-line parametric identification algorithm based on the variable trace approach is presented for the identification of non-linear hysteretic structures. At each time step, this recursive least-squares-based algorithm updates the diagonal elements of the adaptation gain matrix by comparing the values of the estimated parameters between two consecutive time steps. Such an approach enforces smooth convergence of the parameter values, fast tracking of parameter changes, and remains adaptive as time progresses. The effectiveness and efficiency of the proposed algorithm are shown by considering the effects of excitation amplitude, of the measurement units, of a larger sampling time interval and of measurement noise. The cases of exact-, under-, and over-parameterization of the structural model have been analysed.
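The baseline on which the variable-trace algorithm above builds is the standard recursive least-squares (RLS) update of a parameter estimate and its adaptation gain matrix. A minimal sketch of that baseline follows; the variable-trace rescaling of the diagonal of P between steps is deliberately omitted, and the system being identified is a made-up two-parameter regression:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One standard recursive least-squares update.
    theta: parameter estimate, P: adaptation gain (covariance) matrix,
    phi: regressor vector, y: new measurement, lam: forgetting factor.
    (The variable-trace method additionally adjusts diag(P) each step.)"""
    phi = phi.reshape(-1, 1)
    err = y - float(phi.T @ theta)                 # prediction error
    k = (P @ phi) / (lam + float(phi.T @ P @ phi)) # gain vector
    theta = theta + k.ravel() * err
    P = (P - k @ phi.T @ P) / lam
    return theta, P

# Identify y = a*x1 + b*x2 from noisy streaming data (true a=2, b=-1)
rng = np.random.default_rng(0)
theta = np.zeros(2)
P = 1e3 * np.eye(2)
for _ in range(200):
    phi = rng.normal(size=2)
    y = 2.0 * phi[0] - 1.0 * phi[1] + 0.01 * rng.normal()
    theta, P = rls_step(theta, P, phi, y)
print(theta)   # close to [2, -1]
```

A forgetting factor lam < 1 keeps P from collapsing and lets the estimator track time-varying parameters, the scenario the abstract's cumulative-damage example targets.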
The proposed algorithm is also quite effective in identifying time-varying structural parameters to simulate cumulative damage in structural systems. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Effects of predator-induced visual and olfactory cues on 0+ perch (Perca fluviatilis L.) foraging behaviour
ECOLOGY OF FRESHWATER FISH, Issue 2 2006. V. N. Mikheev
Abstract: Foraging juvenile fish, with relatively high food demands, are usually vulnerable to various aquatic and avian predators. To compromise between foraging and antipredator activity, they need exact and reliable information about current predation risk. Among direct predator-induced cues, visual and olfactory signals are considered most important. Food intake rates and prey-size selectivity of laboratory-reared, naive young-of-the-year (YOY) perch, Perca fluviatilis, were studied in experiments with Daphnia magna of two size classes (2.8 and 1.3 mm) as prey and northern pike, Esox lucius, as predator. Neither total intake rate nor prey-size selectivity was modified by predator kairomones alone (water from an aquarium with a pike was pumped into the test aquaria) under daylight conditions. Visual presentation of pike reduced total food intake by perch. This effect was significantly more pronounced (synergistic) when visual and olfactory cues were presented simultaneously to foraging perch. Moreover, the combination of cues caused a significant shift in prey-size selection, expressed as a reduced proportion of large prey in the diet. Our observations demonstrate that predator-induced olfactory cues alone are less important modifiers of the feeding behaviour of naive YOY perch than visual cues under daylight conditions. However, pike odour acts as a modulatory stimulus enhancing the effects of visual cues, which trigger an innate response in perch. [source]

Allowing for redundancy and environmental effects in estimates of home range utilization distributions
ENVIRONMETRICS, Issue 1 2005. W. G. S. Hines
Abstract: Real location data for radio-tagged animals can be challenging to analyze. They can be somewhat redundant, since successive observations of an animal slowly wandering through its environment may well show very similar locations. The data set can possess trends over time or be irregularly timed, and it can report locations in environments with features that should be incorporated to some degree. Also, the periods of observation may be too short to provide reliable estimates of characteristics such as inter-observation correlation levels that can be used in conventional time-series analyses. Moreover, stationarity (in the sense of the data being generated by a source that provides observations of constant mean, variance and correlation structure) may not be present. This article considers an adaptation of the kernel density estimator for estimating home ranges, an adaptation which allows for these various complications and which works well in the absence of exact (or precise) information about correlation structure and parameters. Modifications to allow for irregularly timed observations, non-stationarity and heterogeneous environments are discussed and illustrated. Copyright © 2004 John Wiley & Sons, Ltd. [source]

PERSPECTIVE: HERE'S TO FISHER, ADDITIVE GENETIC VARIANCE, AND THE FUNDAMENTAL THEOREM OF NATURAL SELECTION
EVOLUTION, Issue 7 2002. James F. Crow
Abstract: Fisher's fundamental theorem of natural selection, that the rate of change of fitness is given by the additive genetic variance of fitness, has generated much discussion since its appearance in 1930. Fisher tried to capture in the formula the change in population fitness attributable to changes of allele frequencies, when all else is not included. Lessard's formulation comes closest to Fisher's intention, as far as this can be judged. Additional terms can be added to account for other changes.
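A plain, unmodified kernel density estimate of a utilization distribution, the starting point that the home-range article above adapts, can be sketched with synthetic location data:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic radio-tracking fixes (x, y).  Real data would be serially
# correlated and irregularly timed, exactly the complications the
# adapted estimator above handles and this plain KDE does not.
rng = np.random.default_rng(1)
locations = rng.normal(loc=[0.0, 0.0], scale=[1.0, 0.5], size=(300, 2))

kde = gaussian_kde(locations.T)   # scipy expects rows = dimensions

# Utilization density at the range centre vs. far outside it
print(kde([[0.0], [0.0]]), kde([[5.0], [5.0]]))
```

Thresholding this density at the level enclosing, say, 95% of its mass gives the usual home-range contour; the article's modifications adjust the kernel weighting for redundancy, trend, and habitat features.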
The "theorem" as stated by Fisher is not exact, and therefore not a theorem, but it does encapsulate a great deal of evolutionary meaning in a simple statement. I also discuss the effectiveness of reproductive-value weighting and the theorem in integrated form. Finally, an optimum principle, analogous to least action and Hamilton's principle in physics, is discussed. [source]

Somatic loss of wild type NF1 allele in neurofibromas: Comparison of NF1 microdeletion and non-microdeletion patients
GENES, CHROMOSOMES AND CANCER, Issue 10 2006. Thomas De Raedt
Neurofibromatosis type 1 (NF1) is an autosomal dominant familial tumor syndrome characterized by the presence of multiple benign neurofibromas. In 95% of NF1 individuals, a mutation is found in the NF1 gene, and in 5% of patients the germline mutation consists of a microdeletion that includes the NF1 gene and several flanking genes. We studied the frequency of loss of heterozygosity (LOH) in the NF1 region as a mechanism of somatic NF1 inactivation in neurofibromas from NF1 patients with and without a microdeletion. There was a statistically significant difference between these two patient groups in the proportion of neurofibromas with LOH. None of the 40 neurofibromas from six different NF1 microdeletion patients showed LOH, whereas LOH was observed in 6 of 28 neurofibromas from five patients with an intragenic NF1 mutation (P = 0.0034, Fisher's exact test). LOH of the NF1 microdeletion region in NF1 microdeletion patients would de facto lead to a nullizygous state of the genes located in the deletion region and might be lethal. The mechanisms leading to LOH were further analyzed in six neurofibromas. In two of the six neurofibromas, a chromosomal microdeletion was found; in three, mitotic recombination was responsible for the observed LOH; and in one, chromosome loss with reduplication was present. These data show an important difference in the mechanisms of second-hit formation in the two NF1 patient groups.
We conclude that NF1 is a familial tumor syndrome in which the type of germline mutation influences the type of second hit in the tumors. © 2006 Wiley-Liss, Inc. [source]

Surface deformation due to loading of a layered elastic half-space: a rapid numerical kernel based on a circular loading element
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2007. E. Pan
SUMMARY: This study is motivated by a desire to develop a fast numerical algorithm for computing the surface deformation field induced by surface pressure loading on a layered, isotropic, elastic half-space. The approach that we pursue here is based on a circular loading element. That is, an arbitrary surface pressure field applied within a finite surface domain will be represented by a large number of circular loading elements, all with the same radius, in which the applied downwards pressure (normal stress) is piecewise uniform: that is, the load within each individual circle is laterally uniform. The key practical requirement of this approach is that we be able to solve for the displacement field due to a single circular load, at very large numbers of points (or 'stations'), at very low computational cost. This elemental problem is axisymmetric, so the displacement vector field consists of radial and vertical components, both of which are functions only of the radial coordinate r. We achieve high computational speeds using a novel two-stage approach that we call the sparse evaluation and massive interpolation (SEMI) method. First, we use a high-accuracy but computationally expensive method to compute the displacement vectors at a limited number of r values (called control points or knots), and then we use a variety of fast interpolation methods to determine the displacements at much larger numbers of intervening points. The accurate solutions achieved at the control points are framed in terms of cylindrical vector functions, Hankel transforms and propagator matrices.
Adaptive Gauss quadrature is used to handle the oscillatory nature of the integrands in an optimal manner. To extend these exact solutions via interpolation, we divide the r-axis into three zones and employ a different interpolation algorithm in each zone. The magnitude of the errors associated with the interpolation is controlled by the number, M, of control points. For M = 54, the maximum RMS relative error associated with the SEMI method is less than 0.2 per cent, and it is possible to evaluate the displacement field at 100,000 stations about 1200 times faster than if the direct (exact) solution were evaluated at each station; for M = 99, which corresponds to a maximum RMS relative error of less than 0.03 per cent, the SEMI method is about 700 times faster than the direct solution. [source]

An Analytical Solution for Ground Water Transit Time through Unconfined Aquifers
GROUND WATER, Issue 4 2005. R. Chesnaux
An exact, closed-form analytical solution is developed for calculating ground water transit times within Dupuit-type flow systems. The solution applies to steady-state, saturated flow through an unconfined, horizontal aquifer recharged by surface infiltration and discharging to a downgradient fixed-head boundary. The upgradient boundary can represent, using the same equation, a no-flow boundary or a fixed head. The approach is unique for calculating travel times because it makes no a priori assumptions regarding the limit of the water-table rise with respect to the minimum saturated aquifer thickness. The computed travel times are verified against a numerical model, and examples are provided which show that the predicted travel times can be on the order of nine times longer than those from existing analytical solutions. [source]

Bayesian estimation of financial models
ACCOUNTING & FINANCE, Issue 2 2002. Philip Gray
This paper outlines a general methodology for estimating the parameters of financial models commonly employed in the literature.
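The sparse-evaluation-and-massive-interpolation idea described above (an expensive solver at a few knots, cheap interpolation everywhere else) can be sketched generically. The "expensive" oscillatory integral below is a stand-in of my own, not the paper's layered half-space kernel:

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import CubicSpline

def expensive_kernel(r):
    """Stand-in for a costly displacement evaluation at radius r:
    an oscillatory Hankel-like integral, truncated for illustration."""
    val, _ = quad(lambda k: np.exp(-k) * np.cos(k * r), 0.0, 50.0, limit=200)
    return val

# Stage 1: sparse evaluation at a few control points (knots)
knots = np.linspace(0.0, 10.0, 40)
values = np.array([expensive_kernel(r) for r in knots])

# Stage 2: massive interpolation at many stations
spline = CubicSpline(knots, values)
stations = np.linspace(0.0, 10.0, 100_000)
u = spline(stations)

# Spot-check the interpolation error at one intermediate station
r0 = 3.14159
print(abs(spline(r0) - expensive_kernel(r0)))  # small interpolation error
```

The speedup comes entirely from the ratio of stations to knots; the paper additionally splits the r-axis into zones with different interpolants to keep the error uniform near the load edge.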
A numerical Bayesian technique is utilised to obtain the posterior density of model parameters and functions thereof. Unlike maximum likelihood estimation, where inference is only justified in large samples, the Bayesian densities are exact for any sample size. A series of simulation studies is conducted to compare the properties of point estimates, the distribution of option and bond prices, and the power of specification tests under maximum likelihood and Bayesian methods. Results suggest that maximum-likelihood-based asymptotic distributions have poor finite-sample properties. [source]

Productivity, quality, costs, safety: A sustained approach to competitive advantage. A systematic review of the National Safety Council's case studies in safety and productivity
HUMAN FACTORS AND ERGONOMICS IN MANUFACTURING & SERVICE INDUSTRIES, Issue 2 2008. Tushyati Maudgalya
The marked improvement in workplace safety levels in the past few decades has resulted in companies experiencing fewer safety accidents than before, thus making it less effective to argue that money spent on workplace safety and injury prevention will yield much bottom-line benefit. To make a compelling business case for workplace safety investment, one must link safety objectives to other business objectives. The objective of this study is to determine whether workplace safety as a business objective adds value to the business bottom line. This research reviews published case studies to determine if there is a relationship between safety initiatives and increased productivity, quality, and cost efficiencies. Eighteen case studies (17 published by the National Safety Council) were analyzed using the Workplace Safety Intervention Appraisal Instrument. The appraisal scores ranged from 0.55 to 1.27, with an average of 0.91.
The case studies were relatively strong in the Evidence Reporting and Data Analysis categories, as compared to the Subject Selection, Observation Quality, and Generalization to Other Populations categories. Following workplace safety initiatives, the studies revealed an average increase of 66% (range 2% to 104%) in productivity, 44% (4% to 73%) in quality, 82% (52% to 100%) in safety records, and 71% (38% to 100%) in cost benefits. In a few reported cases, it took only 8 months to recoup the monetary investment in the safety initiative. Although the studies did display a correlation between safety, productivity, and quality, there is insufficient evidence to categorically state that the improvements in productivity, quality, and cost efficiency were brought about by the introduction of an organization-wide safety culture. Notwithstanding, there is demonstrable evidence to indicate that safety as a business objective can assist an organization in achieving the long-term benefit of operational sustainability, that is, achieving a long-term competitive advantage by balancing business costs against social costs. Further research is required to conclusively establish the exact (possibly quantifiable) impact of safety investment on increased productivity, quality, and cost efficiency. © 2008 Wiley Periodicals, Inc. [source] Anti-Homosexual and Gay: Rereading Sartre HYPATIA, Issue 1 2007 CHRISTINE PIERCE Jean-Paul Sartre's questions about anti-Semitism in Anti-Semite and Jew are ones we should want asked about heteronormativity: what causes it, what sustains it, why so little is being done about it, and what should be done. Although the parallels between anti-Semitism and heteronormativity are not exact, relevant Sartrean ideas include nationalism, choosing to reason falsely, living in the future, and authenticity. Foremost is Sartre's claim that bigotry is not about ideas but about a certain type of personality.
[source] Comparison of defects in ProTaper hand-operated and engine-driven instruments after clinical use INTERNATIONAL ENDODONTIC JOURNAL, Issue 3 2007 G. S. P. Cheung Abstract Aim: To compare the type of defects and mode of material failure of engine-driven and hand-operated ProTaper instruments after clinical use. Methodology: A total of 401 hand-operated and 325 engine-driven ProTaper instruments were discarded from an endodontic clinic over 17 months. Those that had fractured were examined for plastic deformation in lateral view and remounted for fractographic examination in a scanning electron microscope. The mode of fracture was classified as 'fatigue' or 'shear' failure. The lengths of the fractured segments of both instrument types were recorded. Any distortion in hand instruments was noted. Data were analysed using the chi-square, Fisher's exact or Student's t-test, where appropriate. Results: Approximately 14% of all discarded hand-operated instruments and 14% of engine-driven instruments had fractured. About 62% of hand instruments failed by shear fracture, compared with approximately 66% of engine-driven instruments that failed by fatigue (P < 0.05). Approximately 16% of hand instruments were affected by shear and either remained intact or fractured, compared with 5% of engine-driven instruments (P < 0.05). The length of the broken fragment was significantly shorter in the hand-operated than in the engine-driven group (P < 0.05). Approximately 7% of hand instruments were discarded intact but distorted (rarely so for engine-driven instruments); all distortions took the form of unscrewing of the flutes. The location of defects in hand Finishing instruments was significantly closer to the tip than that for Shaping instruments (P < 0.05). Conclusions: Under the conditions of this study (possibly high usage), the failure modes of ProTaper engine-driven and hand-operated instruments appeared to differ, with shear failure being more prevalent in the latter.
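The abstract above compares categorical failure counts between instrument groups using chi-square and Fisher's exact tests. As a minimal illustration of how such a comparison can be run, the sketch below implements a two-sided Fisher's exact test from first principles and applies it to a hypothetical 2x2 table; the counts are reconstructed only loosely from the reported percentages (the paper's raw data are not given), so the numbers are illustrative assumptions, not the study's data.

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test for a 2x2 table via hypergeometric
    enumeration: sum the probabilities of all tables (with the same
    margins) that are at least as extreme as the observed one."""
    (a, b), (c, d) = table
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    total = comb(n, col1)

    def prob(x):  # P(top-left cell == x) under the null hypothesis
        return comb(row1, x) * comb(row2, col1 - x) / total

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # small multiplicative tolerance guards against floating-point ties
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

#                          fatigue  shear
hand_vs_engine = [[21, 35],   # hand-operated   (illustrative counts)
                  [30, 16]]   # engine-driven   (illustrative counts)

p = fisher_exact_two_sided(hand_vs_engine)
print(f"two-sided Fisher's exact p = {p:.4f}")
```

With these illustrative counts the difference in failure mode between the two groups is significant at the 5% level, consistent with the abstract's P < 0.05 statements.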
[source] The response of an elastic half-space under a momentary shear line impulse INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 3 2003 Moche Ziv Abstract The response of an ideal elastic half-space to a line-concentrated impulsive vector shear force applied momentarily is obtained by an analytical-numerical computational method based on the theory of characteristics in conjunction with kinematical relations derived across surfaces of strong discontinuities. The shear force is concentrated along an infinite line drawn on the surface of the half-space, while being normal to that line as well as to the axis of symmetry of the half-space. An exact loading model is introduced and built into the computational method for this shear force. With this model, a compatibility exists among the prescribed applied force, the geometric decay of the shear stress component at the precursor shear wave, and the boundary conditions of the half-space; in this sense, the source configuration is exact. For the transient boundary-value problem described above, a wave characteristics formulation is presented, in which the differential equations are extended to allow for the strong discontinuities that occur in the material motion of the half-space. A numerical integration of these extended differential equations is then carried out in a three-dimensional spatiotemporal wavegrid formed by the Cartesian bicharacteristic curves of the wave characteristics formulation. This work is devoted to the construction of the computational method and to the concepts involved therein, whereas the interpretation of the resultant transient deformation of the half-space is presented in a subsequent paper. Copyright © 2003 John Wiley & Sons, Ltd. [source] Simulation of special loading conditions by means of non-linear constraints imposed through Lagrange multipliers INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 10 2002 M. A.
Gutiérrez Abstract This paper discusses the necessity and handling of non-linear constraint equations to describe the behaviour of properties of the loading system, such as smooth, free-rotating loading platens. An exact, non-linear formulation for a smooth loading platen is derived and its incorporation into the equilibrium equations is presented. For this purpose, the Lagrange multiplier method is used. The solution of the equilibrium equations by means of a Newton-Raphson algorithm is also outlined. The proposed approach is validated on a patch of two finite elements and applied to a compression-bending test on a pre-notched specimen. It is observed that use of a linearized approximation of the boundary constraint can lead to errors in the description of the motion of the constrained nodes; thus, the non-linear formulation is preferable. Copyright © 2002 John Wiley & Sons, Ltd. [source] Linear random vibration by stochastic reduced-order models INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 12 2010 Mircea Grigoriu Abstract A practical method is developed for calculating statistics of the states of linear dynamic systems with deterministic properties subjected to non-Gaussian noise, and of systems with uncertain properties subjected to Gaussian and non-Gaussian noise. These classes of problems are relevant because most systems have uncertain properties, physical noise is rarely Gaussian, and the classical theory of linear random vibration applies to deterministic systems and can only deliver the first two moments of a system state if the noise is non-Gaussian.
The method (1) is based on approximate representations of all or some of the random elements in the definition of linear random vibration problems by stochastic reduced-order models (SROMs), that is, simple random elements having a finite number of outcomes of unequal probabilities; (2) can be used to calculate statistics of a system state beyond its first two moments; and (3) establishes bounds on the discrepancy between exact and SROM-based solutions of linear random vibration problems. Implementing the method required integrating existing and new numerical algorithms. Examples are presented to illustrate the application of the proposed method and to assess its accuracy. Copyright © 2009 John Wiley & Sons, Ltd. [source]
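To make the SROM idea above concrete, here is a minimal, self-contained sketch of the general concept (an illustration only, not the authors' algorithm): a standard normal random variable is replaced by three outcomes with unequal probabilities chosen so that its mean and variance are matched exactly, and the reduced model is then propagated through a linear map to obtain response statistics without any sampling. The three nodes and the linear-map coefficients are arbitrary choices made for this example.

```python
# Minimal SROM sketch: approximate a standard normal variable Z by
# m = 3 outcomes with unequal probabilities matching E[Z] = 0 and
# E[Z^2] = 1. Statistics of a linear response u = a*Z + b then follow
# as weighted sums over the three outcomes.

nodes = [-1.5, 0.0, 1.5]        # chosen outcomes (arbitrary for the example)
p_outer = 1.0 / 4.5             # solves 2 * p_outer * 1.5**2 = 1 (variance match)
probs = [p_outer, 1.0 - 2.0 * p_outer, p_outer]

# Moment check: normalization, mean, and second moment of the SROM.
m0 = sum(probs)
m1 = sum(p * z for p, z in zip(probs, nodes))
m2 = sum(p * z * z for p, z in zip(probs, nodes))

# Linear "system response" u = a*Z + b (coefficients are arbitrary).
a, b = 2.0, 5.0
responses = [a * z + b for z in nodes]
mean_u = sum(p * u for p, u in zip(probs, responses))
var_u = sum(p * (u - mean_u) ** 2 for p, u in zip(probs, responses))

# Because the first two moments of Z are matched, the SROM reproduces
# the exact mean (b = 5) and variance (a**2 = 4) of this linear response.
print(mean_u, var_u)
```

For non-linear responses or higher-order statistics the SROM gives an approximation rather than an exact result, which is where the error bounds mentioned in the abstract become relevant.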