Analytical
Terms modified by Analytical – Selected Abstracts

Supporting Bulk Synchronous Parallelism with a high-bandwidth optical interconnect
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2004, I. Gourlay
Abstract The list of applications requiring high-performance computing resources is constantly growing. The cost of inter-processor communication is critical in determining the performance of massively parallel computing systems for many of these applications. This paper considers the feasibility of a commodity processor-based system which uses a free-space optical interconnect. A novel architecture, based on this technology, is presented. Analytical and simulation results based on an implementation of BSP (Bulk Synchronous Parallelism) are presented, indicating that a significant performance enhancement, over architectures using conventional interconnect technology, is possible. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Optimal Design of the Online Auction Channel: Analytical, Empirical, and Computational Insights
DECISION SCIENCES, Issue 4 2002, Ravi Bapna
ABSTRACT The focus of this study is on business-to-consumer (B2C) online auctions made possible by the advent of electronic commerce over an open-source, ubiquitous Internet Protocol (IP) computer network. This work presents an analytical model that characterizes the revenue generation process for a popular B2C online auction, namely, Yankee auctions. Such auctions sell multiple identical units of a good to multiple buyers using an ascending and open auction mechanism. The methodologies used to validate the analytical model range from empirical analysis to simulation. A key contribution of this study is the design of a partitioning scheme of the discrete valuation space of the bidders such that equilibrium points with higher revenue structures become identifiable and feasible. Our analysis indicates that the auctioneers are, most of the time, far away from the optimal choice of key control factors such as the bid increment, resulting in substantial losses in a market with already tight margins. With this in mind, we put forward a portfolio of tools, varying in their level of abstraction and information intensity requirements, which help auctioneers maximize their revenues. [source]

Combined Analytical and Phonon-Tracking Approaches to Model Thermal Conductivity of Etched and Annealed Nanoporous Silicon
ADVANCED ENGINEERING MATERIALS, Issue 10 2009, Jaona Randrianalisoa
A combination of analytical and phonon-tracking approaches is proposed to predict the thermal conductivity of porous nanostructured thick materials. The analytical approach derives the thermal conductivity as a function of the intrinsic properties of the material and properties characterizing the phonon interaction with pore walls. [source]
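The phonon-tracking entry above does not reproduce its model, but the flavour of such estimates can be shown with textbook kinetic theory: k = C·v·Λ/3, with the bulk mean free path shortened by pore-boundary scattering and the result scaled by a porosity correction. Everything in the sketch below (the property values, the Matthiessen-type combination, the Maxwell-Eucken factor) is an illustrative assumption, not the authors' approach.

```python
# Minimal sketch (not the paper's model): gray kinetic-theory estimate of the
# thermal conductivity of nanoporous silicon. Nominal values for Si at 300 K.

C = 1.66e6        # J m^-3 K^-1, volumetric heat capacity (assumed)
v = 6400.0        # m s^-1, average phonon group velocity (assumed)
lam_bulk = 40e-9  # m, gray-body bulk phonon mean free path (assumed)

def k_nanoporous(porosity, pore_diameter):
    """Effective conductivity: Matthiessen-type pore-boundary scattering
    combined with a Maxwell-Eucken porosity correction."""
    lam_eff = 1.0 / (1.0 / lam_bulk + 1.0 / pore_diameter)      # boundary scattering
    k_solid = C * v * lam_eff / 3.0                             # kinetic theory
    return k_solid * (1.0 - porosity) / (1.0 + porosity / 2.0)  # porosity dilution

for phi in (0.2, 0.4, 0.6):
    print(f"porosity {phi:.1f}: k ~ {k_nanoporous(phi, 20e-9):.1f} W/(m K)")
```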
Sampling and analytical plus subsampling variance components for five soil indicators observed at regional scale
EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 5 2009, B. G. Rawlins
Summary When comparing soil baseline measurements with resampled values there are four main sources of error. These are: i) location error (errors in relocating the sample site), ii) sampling error (representing the site with a sample of material), iii) subsampling error (selecting material for analysis) and iv) analytical error (error in laboratory measurements). In general we cannot separate the subsampling and analytical sources of error (since we always analyse a different subsample of a specimen), so in this paper we combine these two sources into subsampling plus analytical error. More information is required on the relative magnitudes of location and sampling errors for the design of effective resampling strategies to monitor changes in soil indicators. Recently completed soil surveys of the UK with widely differing soils included a duplicate site and subsampling protocol to quantify ii), and the sum of iii) and iv) above. Sampling variances are estimated from measurements on duplicate samples – two samples collected on a support of side length 20 m separated by a short distance (21 m). Analytical and subsampling variances are estimated from analyses of two subsamples from each duplicate site. After accounting for variation caused by region, parent material class and land use, we undertook a nested analysis of data from 196 duplicate sites across three regions to estimate the relative magnitudes of the medium-scale (between-site), sampling and subsampling plus analytical variance components, for five topsoil indicators: total metal concentrations of copper (Cu), nickel (Ni) and zinc (Zn), soil pH and soil organic carbon (SOC) content. The variance components for each indicator diminish by about an order of magnitude from medium-scale, to sampling, to analytical plus subsampling. Each of the three fixed effects (parent material, land use and region) was statistically significant for each of the five indicators. The most effective way to minimise the overall uncertainty of our observations at sample sites is to reduce the sampling variance. [source]
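The nested duplicate-site design described above lends itself to a simple method-of-moments illustration. The sketch below assumes a layout of two samples per site and two subsamples per sample (the abstract does not fix the exact subsampling scheme) and recovers the three variance components from mean squared differences of paired measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites = 196
sd_site, sd_samp, sd_sub = 1.0, 0.3, 0.1   # assumed "true" component sizes

# Simulate: site effect + sampling effect + subsampling-plus-analytical noise.
site = rng.normal(0, sd_site, n_sites)[:, None, None]       # (site, 1, 1)
samp = rng.normal(0, sd_samp, (n_sites, 2))[:, :, None]     # duplicate samples
x = site + samp + rng.normal(0, sd_sub, (n_sites, 2, 2))    # two subsamples each

# Subsampling+analytical: half the mean squared difference of paired subsamples.
var_sub = 0.5 * np.mean((x[:, :, 0] - x[:, :, 1]) ** 2)

# Sampling: sample means (of 2 subsamples) differ by 2*var_samp + var_sub.
m = x.mean(axis=2)
var_samp = 0.5 * (np.mean((m[:, 0] - m[:, 1]) ** 2) - var_sub)

# Medium-scale (between sites): variance of site means minus within-site parts.
var_site = np.var(m.mean(axis=1), ddof=1) - var_samp / 2 - var_sub / 4

print(var_site, var_samp, var_sub)   # should land near 1.0, 0.09, 0.01
```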
INTEGRATING DYNAMIC SYSTEMS MATERIALS INTO A MECHANICAL ENGINEERING CURRICULUM THROUGH INNOVATIVE USE OF WEB-BASED ACQUISITION AND HANDS-ON APPLICATION AND USE OF VIRTUAL GRAPHICAL USER INTERFACES. Part 3: Dynamic Systems – Analytical and Experimental System Characterization
EXPERIMENTAL TECHNIQUES, Issue 1 2008, Pete Avitabile
First page of article [source]

Analytical and experimental studies on fatigue crack path under complex multi-axial loading
FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 4 2006, L. REIS
ABSTRACT In real engineering components and structures, many accidental failures are due to unexpected or additional loadings, such as additional bending or torsion. Fractographic analyses of the failure surface and the crack orientation are helpful for identifying the effects of non-proportional multi-axial loading. There are many factors that influence fatigue crack paths. This paper studies the effects of the multi-axial loading path on the crack path. Two materials were studied and compared: AISI 303 stainless steel and 42CrMo4 steel. Experiments were conducted on an INSTRON 8800 biaxial testing machine. Six different biaxial loading paths were selected and applied in the tests to observe the effects of multi-axial loading paths on the additional hardening, fatigue life and crack propagation orientation. Fractographic analyses of the plane orientations of crack initiation and propagation were carried out by optical microscopy and SEM. It was shown that the two materials had different crack orientations under the same loading path, owing to their different cyclic plasticity behaviour and different sensitivity to non-proportional loading. Theoretical predictions of the damage plane were made using critical plane approaches such as the Brown–Miller, Findley, Wang–Brown, Fatemi–Socie, Smith–Watson–Topper and Liu criteria. Comparisons of the predicted orientation of the damage plane with the experimental observations show that the critical plane models give satisfactory predictions for the orientations of early crack growth in the 42CrMo4 steel, but less accurate predictions were obtained for the AISI 303 stainless steel. This observation appears to show that the applicability of the fatigue models depends on the material type and its microstructural characteristics under multi-axial loading. [source]

Analytical and 3-D numerical modelling of Mt. Etna (Italy) volcano inflation
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2005, A. Bonaccorso
SUMMARY Since 1993, geodetic data obtained by different techniques (GPS, EDM, SAR, levelling) have detected a consistent inflation of the Mt. Etna volcano. The inflation, culminating in the 1998–2001 strong explosive activity from the summit craters and the recent 2001 and 2002 flank eruptions, is interpreted in terms of magma ascent and refilling of the volcanic plumbing system and reservoirs. We have modelled the 1993–1997 EDM and GPS data by 3-D pressurized sources to infer the position and dimension of the magma reservoir. We have performed analytical inversions of the observed deformation using both spheroidal and ellipsoidal sources embedded in a homogeneous elastic half-space and by applying different inversion methods. Solutions for these types of sources show evidence of a vertically elongated magma reservoir located 6 km beneath the summit craters. The maximum elevation of the topography is comparable to this depth, and strong heterogeneities are inferred from seismic tomography; in order to assess their importance, further 3-D numerical models, employing source parameters extracted from the analytical models, have been developed using the finite-element technique. The deformation predicted by all the models considered shows general agreement with the 1993–1997 data, suggesting the primary role of a pressure source, while the complexities of the medium play a minor role under elastic conditions. However, major discrepancies between data and models are located in the SE sector, suggesting that sliding along potential detachment surfaces may contribute to amplifying deformation during inflation. For the first time, realistic features of Mt. Etna are studied by a 3-D numerical model characterized by the topography and lateral variations of elastic structure, providing a framework for deeper insight into the relationships between internal sources and tectonic structures. [source]
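The Etna study inverts spheroidal and ellipsoidal pressurized sources; as a flavour of the forward problem it addresses, here is the simpler classical Mogi point source in an elastic half-space. The source depth echoes the abstract's 6 km figure, but the volume change and every other number are illustrative assumptions.

```python
import numpy as np

def mogi_surface_displacement(r, depth, dV, nu=0.25):
    """Surface displacement (radial, vertical) of a Mogi point source,
    written in terms of the source volume change dV = pi * a^3 * dP / mu."""
    R3 = (r**2 + depth**2) ** 1.5
    coeff = (1.0 - nu) * dV / np.pi
    return coeff * r / R3, coeff * depth / R3   # (u_r, u_z)

# Illustrative numbers only: source 6 km deep (as inferred in the abstract),
# volume change 1e7 m^3 (assumed).
r = np.linspace(0.0, 20e3, 5)                   # radial distance, m
ur, uz = mogi_surface_displacement(r, 6e3, 1e7)
for ri, u in zip(r, uz):
    print(f"r = {ri/1e3:4.0f} km: uplift ~ {u*100:.2f} cm")
```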
He's homotopy perturbation method for two-dimensional heat conduction equation: Comparison with finite element method
HEAT TRANSFER - ASIAN RESEARCH (FORMERLY HEAT TRANSFER - JAPANESE RESEARCH), Issue 4 2010, M. Jalaal
Abstract Heat conduction appears in almost all natural and industrial processes. In the current study, a two-dimensional heat conduction equation with different complex Dirichlet boundary conditions has been studied. An analytical solution for the temperature distribution and gradient is derived using the homotopy perturbation method (HPM). Unlike most previous studies of analytical solution by homotopy-based methods, which investigate ODEs, we focus on a partial differential equation (PDE). Employing the Taylor series, the resulting series has been converted to an exact expression describing the temperature distribution in the computational domain. The problems were also solved numerically employing the finite element method (FEM). Analytical and numerical results were compared with each other and excellent agreement was obtained. The present investigation shows the effectiveness of the HPM for the solution of PDEs and provides an exact solution for a practical problem. The mathematical procedure shows that the present method is much simpler than other analytical techniques because it combines homotopy analysis with the classic perturbation method. The current solution can be used in further analytical and numerical studies, as well as in related natural and industrial applications even with complex boundary conditions, as a simple and accurate technique. © 2010 Wiley Periodicals, Inc. Heat Trans Asian Res; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/htj.20292 [source]
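As a self-contained stand-in for the numerical side of the comparison above (finite differences rather than the paper's FEM, on an assumed unit square with simple Dirichlet data), a minimal steady-state conduction solver can be checked against the classical separation-of-variables series:

```python
import numpy as np

# Steady 2-D heat conduction (Laplace equation) on the unit square,
# Dirichlet boundary: top edge held at 1, other edges at 0 (assumed data).
n = 41
T = np.zeros((n, n))
T[-1, :] = 1.0                       # top boundary (y = 1)

for _ in range(5000):                # Jacobi iterations
    Tn = T.copy()
    Tn[1:-1, 1:-1] = 0.25 * (T[2:, 1:-1] + T[:-2, 1:-1] +
                             T[1:-1, 2:] + T[1:-1, :-2])
    if np.max(np.abs(Tn - T)) < 1e-7:
        T = Tn
        break
    T = Tn

# Compare the mid-plane profile with the separation-of-variables series.
x = np.linspace(0, 1, n)
y = 0.5
series = sum(2 / (k * np.pi) * (1 - (-1) ** k) * np.sin(k * np.pi * x)
             * np.sinh(k * np.pi * y) / np.sinh(k * np.pi)
             for k in range(1, 60))
print(np.max(np.abs(T[n // 2, :] - series)))   # small discretization error
```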
Modelling of contaminant transport through landfill liners using EFGM
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 7 2010, R. Praveen Kumar
Abstract Modelling of contaminant transport through landfill liners and natural soil deposits is an important area of research activity in geoenvironmental engineering. Conventional mesh-based numerical methods depend on mesh/grid size and element connectivity and present some difficulties when dealing with advection-dominated transport problems. In the present investigation, an attempt has been made to provide a simple but sufficiently accurate methodology for numerical simulation of two-dimensional contaminant transport through saturated homogeneous porous media and landfill liners using the element-free Galerkin method (EFGM). In the EFGM, an approximate solution is constructed entirely in terms of a set of nodes and no characterization of the interrelationship of the nodes is needed. The EFGM employs moving least-squares approximants to approximate the function and uses the Lagrange multiplier method for imposing essential boundary conditions. The results of the EFGM are validated using experimental results. Analytical and finite element solutions are also used to compare the results of the EFGM. In order to test the practical applicability and performance of the EFGM, three case studies of contaminant transport through landfill liners are presented. A good agreement is obtained between the results of the EFGM and the field investigation data. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Analytical and numerical solution of the elastodynamic strip load problem
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 1 2008, A. Verruijt
Abstract Analytical and numerical solutions of the elastodynamic problem of an instantaneous strip load on a half space are presented and compared. The analytical solution is obtained using the De Hoop–Cagniard method, and the numerical solution is obtained using the dynamic module of the finite element package Plaxis. The purpose of the paper is to validate the numerical solution by comparison with a completely analytical solution, and to verify that the main characteristics of the analytical solution are also obtained in the numerical solution. Particular attention is paid to the magnitude, the velocity, and the shape of the Rayleigh wave disturbances. Copyright © 2007 John Wiley & Sons, Ltd. [source]

A continued-fraction-based high-order transmitting boundary for wave propagation in unbounded domains of arbitrary geometry
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 2 2008, Mohammad Hossein Bazyar
Abstract A high-order local transmitting boundary is developed to model the propagation of elastic waves in unbounded domains. This transmitting boundary is applicable to scalar and vector waves, to unbounded domains of arbitrary geometry and to anisotropic materials. The formulation is based on a continued-fraction solution of the dynamic-stiffness matrix of an unbounded domain. The coefficient matrices of the continued fraction are determined recursively from the scaled boundary finite element equation in dynamic stiffness. The solution converges rapidly over the whole frequency range as the order of the continued fraction increases. Using the continued-fraction solution and introducing auxiliary variables, a high-order local transmitting boundary is formulated as an equation of motion with symmetric and frequency-independent coefficient matrices. It can be coupled seamlessly with finite elements. Standard procedures in structural dynamics are directly applicable for evaluating the response in the frequency and time domains. Analytical and numerical examples demonstrate the high rate of convergence and efficiency of this high-order local transmitting boundary. Copyright © 2007 John Wiley & Sons, Ltd. [source]

A 2-D time-domain boundary element method with damping
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 6 2001, Feng Jin
Abstract A new material damping model which is convenient for use in the time-domain boundary element method (TDBEM) is presented and implemented in a proposed procedure. Since only fundamental solutions for linear elastic material are employed, the procedure has high efficiency and is easy to integrate into current TDBEM codes. Analytical and numerical results for benchmark problems demonstrate that the accuracy of the proposed method is high. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Analytical and numerical investigation of the solar chimney power plant systems
INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 11 2006, Ming Tingzhen
Abstract The solar chimney power plant, which converts solar energy into kinetic energy, has seen a surge in use in recent years. As the existing models are insufficient to accurately describe the mechanism, a more comprehensive model is advanced in this paper to evaluate the performance of a solar chimney power plant system, in which the effects of various parameters on the relative static pressure, driving force, power output and efficiency have been further investigated. Using the solar chimney prototype in Manzanares, Spain, as a practical example, numerical studies are performed to explore the effect of geometric modifications on the system performance, and these show reasonable agreement with the analytical model. Copyright © 2005 John Wiley & Sons, Ltd. [source]
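For orientation on the physics in the chimney abstract: the ideal conversion efficiency of the chimney itself is commonly approximated as gH/(cp·Ta), so output scales with chimney height. The back-of-envelope sketch below uses Manzanares-like dimensions, but the irradiance and component efficiencies are assumed values, not the paper's model.

```python
import math

# Back-of-envelope solar chimney output (illustrative assumptions only).
g, cp, Ta = 9.81, 1005.0, 293.0      # gravity, air heat capacity, ambient K
H = 195.0                            # chimney height, m (Manzanares-like)
r_coll = 122.0                       # collector radius, m (Manzanares-like)
G = 800.0                            # solar irradiance, W/m^2 (assumed)
eta_coll = 0.32                      # collector thermal efficiency (assumed)
eta_turb = 0.67                      # turbine/generator efficiency (assumed)

eta_chimney = g * H / (cp * Ta)              # ideal buoyancy conversion
q_solar = G * math.pi * r_coll**2            # solar power on the collector
p_elec = eta_coll * eta_chimney * eta_turb * q_solar
print(f"chimney efficiency ~ {eta_chimney:.4%}, electric output ~ {p_elec/1e3:.0f} kW")
```

With these assumed values the estimate lands near the 50 kW usually quoted for the Manzanares prototype, which is only a consistency check, not a result of the paper.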
Teletraffic capacity of CDMA cellular mobile networks and adaptive antennas
INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 4 2002, Abdulaziz S. Al-Ruwais
The teletraffic capacity of a CDMA mobile network with adaptive antennas at the receiver base station is considered, and a simplified expression for the system outage probability associated with the teletraffic capacity is obtained. Analytical as well as numerical results show that the outage probability, and consequently the teletraffic capacity of the system, is improved by using adaptive antennas at the receiver base station. Copyright © 2002 John Wiley & Sons, Ltd. [source]

"Revenue Accounting" in the Age of E-Commerce: A Framework for Conceptual, Analytical, and Exchange Rate Considerations
JOURNAL OF INTERNATIONAL FINANCIAL MANAGEMENT & ACCOUNTING, Issue 1 2002, Jonathan C. Glover
This paper explores "revenue accounting" in contrast to traditional "cost accounting". Revenue accounting serves the information needs of managers and investors in planning and controlling a firm's sales activities and their financial consequences, especially in the age of e-commerce. Weaknesses of traditional accounting have become particularly evident recently, for example, the lack of 1) revenue mileposts, 2) revenue sustainability measurements, and 3) intangibles capitalization. The paper emphasizes the need to develop a conceptual framework of revenue accounting and, as a tentative measure, proposes five basic postulates and five operational postulates of revenue accounting. On the side of analytical frameworks, the paper explores some tentative remedies for the weaknesses. Several revenue mileposts are explored to gauge progress in earning revenues, and a Markov process is applied to an example involving mileposts. Revenue momentum, measured by the exponential smoothing method, is examined as a way of getting feedback on revenue sustainability; and the use of the sustainability concept in the analysis of "fixed and variable revenues" is illustrated. A project-oriented approach in a manner similar to capital budgeting and to Reserve Recognition Accounting is proposed by treating each customer as a project. Standardization of forecasts is also considered as an important way of bypassing the capitalization issue. Finally, while e-commerce is inherently global, issues specific to global operations are highlighted, namely, exchange rate issues when venture capitalists and the start-up company use different currencies, producing different rates of return on the same project. [source]
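The "revenue momentum" measure mentioned in the revenue-accounting abstract is based on exponential smoothing. A minimal sketch with synthetic quarterly figures (the smoothing constant is an assumption, not taken from the paper):

```python
def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing: s_t = alpha*x_t + (1 - alpha)*s_{t-1}."""
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

revenue = [100, 104, 103, 110, 115, 112, 120, 127]   # synthetic quarterly revenue
level = exponential_smoothing(revenue)
# A crude "momentum" reading: change in the smoothed level per period.
momentum = [b - a for a, b in zip(level, level[1:])]
print([round(v, 1) for v in level])
print([round(v, 1) for v in momentum])
```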
Critical Discourse Analysis in Organizational Studies: Towards an Integrationist Methodology
JOURNAL OF MANAGEMENT STUDIES, Issue 6 2010, Lilie Chouliaraki
abstract We engage with Leitch and Palmer's (2010) analysis of Critical Discourse Analytical (CDA) scholarship in organizational and management studies, in order to argue that, whereas they rightly point to the need for further reflexivity in the field, their recommendation for a strict methodological protocol in CDA studies may be reproducing some of the problems they identify in their analysis. We put forward an alternative, relational-dialectic conception of discourse that defends an integrationist orientation to research methodology, privileging trans-disciplinarity over rigour. [source]

Estimation of gene frequency and heterozygosity from pooled samples
MOLECULAR ECOLOGY RESOURCES, Issue 3 2002, K. Ritland
Abstract Pooling of DNA samples can significantly reduce the effort of population studies with DNA markers. I present a statistical model and numerical method for estimating gene frequency when pooled DNA is assayed for the presence/absence of alleles. Analytical and Monte Carlo methods examined estimation variance and bias, and hence optimal pool size, under a triangular allele frequency distribution. For the gene frequency of rarer alleles, the optimal number of pooled individuals is approximately the inverse of the gene frequency. For heterozygosity, the optimal pool is approximately half the allele number; this results in pools containing, on average, 60% of possible alleles. [source]
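A hedged sketch of the presence/absence idea in the pooling abstract (an illustrative estimator, not necessarily Ritland's model): a pool of n diploid individuals carries 2n gene copies and tests negative for an allele of frequency q with probability (1 − q)^(2n), so q can be back-calculated from the fraction of negative pools. The simulation also hints at why pools much larger than 1/q perform poorly.

```python
import numpy as np

rng = np.random.default_rng(7)

def estimate_allele_freq(q_true, n_individuals, n_pools=500):
    """Monte Carlo check of a presence/absence pooled estimator."""
    copies = 2 * n_individuals                      # gene copies per pool
    p_negative = (1.0 - q_true) ** copies           # pool lacks the allele
    negatives = rng.binomial(n_pools, p_negative)   # simulated negative pools
    frac_neg = max(negatives, 1) / n_pools          # guard against log-of-zero case
    return 1.0 - frac_neg ** (1.0 / copies)         # invert (1 - q)^(2n)

q = 0.02
for n in (10, 50, 250):   # the 1/q rule above suggests ~50 individuals is best here
    est = [estimate_allele_freq(q, n) for _ in range(200)]
    print(n, round(float(np.mean(est)), 4), round(float(np.std(est)), 4))
```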
Relativistically expanding cylindrical electromagnetic fields
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2009, K. N. Gourgouliatos
ABSTRACT We study relativistically expanding electromagnetic fields of cylindrical geometry. The fields emerge from the side surface of a cylinder and are invariant under translations parallel to the axis of the cylinder. The expansion velocity is in the radial direction and is parametrized by v = R/(ct). We consider force-free magnetic fields by setting the total force the electromagnetic field exerts on the charges and the currents equal to zero. Analytical and semi-analytical separable solutions are found for the relativistic problem. In the non-relativistic limit, the mathematical form of the equations is similar to equations that have already been studied in static systems of the same geometry. [source]

Two stellar mass functions combined into one by the random sampling model of the initial mass function
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 3 2000, Bruce G. Elmegreen
The turnover in the stellar initial mass function (IMF) at low mass suggests the presence of two independent mass functions that combine in different ways above and below a characteristic mass given by the thermal Jeans mass in the cloud. In the random sampling model introduced earlier, the Salpeter IMF at intermediate to high mass follows primarily from the hierarchical structure of interstellar clouds, which is sampled by various star formation processes and converted into stars at the local dynamical rate. This power-law part is independent of the details of star formation inside each clump and therefore has a universal character. The flat part of the IMF at low mass is proposed here to result from a second, unrelated, physical process that determines only the probability distribution function for final star mass inside a clump of a given mass, and is independent of both this clump mass and the overall cloud structure. Both processes operate for all potentially unstable clumps in a cloud, regardless of mass, but only the first shows up above the thermal Jeans mass, and only the second shows up below this mass. Analytical and stochastic models of the IMF that are based on the uniform application of these two functions for all masses reproduce the observations well. [source]
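A toy version of the two-regime picture in the IMF abstract (a generic broken distribution, not Elmegreen's random sampling model): masses drawn flat in log M below a characteristic mass, and with a Salpeter power law, dN/dM proportional to M^(-2.35), above it. All limits are assumed round numbers.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_imf(n, m_min=0.08, m_c=0.5, m_max=120.0, alpha=2.35):
    """Toy two-part IMF: flat in log M below m_c, Salpeter power law above."""
    w_low = np.log(m_c / m_min)                                # weight below m_c
    w_high = (1 - (m_c / m_max) ** (alpha - 1)) / (alpha - 1)  # weight above m_c
    u = rng.random(n)
    low = rng.random(n) < w_low / (w_low + w_high)             # pick the branch
    return np.where(
        low,
        m_min * (m_c / m_min) ** u,                            # log-uniform branch
        m_c * (1 - u * (1 - (m_c / m_max) ** (alpha - 1))) ** (-1 / (alpha - 1)),
    )

m = sample_imf(200_000)
print(f"median mass ~ {np.median(m):.2f} Msun, fraction above 1 Msun ~ {(m > 1).mean():.3f}")
```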
Analytical, Risk Assessment, and Remedial Implications Due to the Co-Presence of Polychlorinated Biphenyls and Terphenyls at Inactive Hazardous Waste Sites
REMEDIATION, Issue 1 2000, James J. Pagano
Investigations conducted at three inactive hazardous waste sites in New York State have confirmed the co-presence of polychlorinated biphenyls (PCBs) and polychlorinated terphenyls (PCTs) in soils, sediments, and biota. The PCTs at all three sites were positively identified as Aroclor 5432, with the most probable source being the hydraulic fluid Pydraul 312A utilized for high-temperature applications. The identification of the lower-chlorinated PCT formulations in environmental samples is problematical, since PCT Aroclors 5432 and 5442 are not chromatographically distinct from the higher-chlorinated PCB Aroclors 1254, 1260, 1262, and 1268 using conventional gas chromatography–electron capture detection. Results from this study indicate that U.S. Environmental Protection Agency (USEPA) approved PCB methods routinely utilized by most commercial laboratories, based on Florisil adsorption column chromatography cleanup, are inadequate to produce valid chromatographic separation and quantitative results with soil, sediment, and biota samples containing both PCBs and PCTs. The presence of co-eluting PCBs and PCTs precludes accurate quantitation due to significant differences in PCB/PCT electron capture detector response factors, and the potential for misidentification of PCT Aroclors as higher-chlorinated PCB Aroclors. A method based on alumina column adsorption chromatography was used, allowing for the accurate identification and quantitation of PCB and PCT Aroclors. The results of this study suggest that the utilization of alumina adsorption column separation may have applicability and regulatory significance at other industrially contaminated sites which historically used Pydraul 312A. [source]

Zum Kontaktverhalten zwischen suspensionsgestützten Ortbetonwänden und dem anstehenden Boden (On the contact behaviour between slurry-supported cast-in-place concrete walls and the surrounding soil)
BAUTECHNIK, Issue 11 2007, Anna Arwanitaki, Dipl.-Ing.
The wall friction angle is an input parameter for analytical and numerical calculations of excavation support structures. It describes how much shear stress from the soil can be transferred to the retaining wall at the soil–wall interface under a given normal stress. For ultimate limit state calculations (GZ 1), the applicable national standards and recommendations prescribe a wall friction angle of |δ| ≤ φ/2. For the serviceability verification (GZ 2), the finite element method has become established. As calculations become increasingly refined, the question arises whether the assumption |δ| ≤ φ/2 is still appropriate. During the construction of diaphragm walls and uncased bored pile walls, the soil is excavated under the protection of a support fluid of water and bentonite. Once the final trench depth is reached, the support suspension is displaced from the bottom upwards by tremie concreting. Residues of the suspension or of the filter cake that forms can remain at the soil–wall interface and influence the wall friction angle. In-situ samples of the filter cake from a diaphragm-wall excavation showed that the solid matter of the filter cake is a mixture of bentonite and the surrounding soil. During excavation the surrounding soil mixes with the suspension, and the fine grain fractions are kept in suspension by the yield point of the slurry. The filter cake formed on the trench wall by the filtration process can therefore no longer be regarded as a lubricating layer of bentonite; rather, it possesses considerable shear strength. This paper presents results of site and laboratory investigations into the nature of the filter cake and into the contact behaviour of the soil–diaphragm-wall system. Skin friction of cast-in-place walls. Analytical and numerical calculations of retaining structures require the wall friction angle as an input parameter. It is specified as the maximal shear strength of the concrete–soil interface due to normal effective load. For the design of diaphragm walls the national engineering standards recommend an angle of wall skin friction of |δ| ≤ φ/2. In the framework of present design, numerical calculations are performed to determine the deformation behaviour of structures, so that the contact formulation becomes fundamental. Bentonite suspensions are used to support the sides of excavations for diaphragm walls and uncased cast-in-place piles. When concrete is cast by tremie methods, the filter cake remains adhering to the side walls, becomes part of the concrete–soil interface and influences the characteristics of wall skin friction. In-situ specimens of the filter cake were taken from a diaphragm wall, and examinations reveal that the filter cake consists of bentonite and fine soil particles. During the excavation process, fine particles from the soil are suspended in the supporting fluid owing to the liquid limit of the bentonite slurry. Thus the suspension, in a process of filtration into the surrounding soil, forms a filter cake with a certain shear strength caused by the fine soil particles. This paper presents the results of field and laboratory tests for the investigation of the effective contact behaviour between cast-in-place walls and the surrounding soil. [source]

Dynamic strategy for teaching structural analysis
COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 4 2002, Jamal El-Rimawi
Abstract Improving students' understanding of structural analysis within a limited time represents a challenge for both students and lecturers. As a result, emphasis is usually placed on either the analytical or the conceptual aspect of the subject. This paper argues that, within the same time frame, the simultaneous development of both aspects could lead to a better understanding of the subject. The development and implementation of a computer program suitable for this purpose is described, and its application to the compatibility method is illustrated. © 2003 Wiley Periodicals, Inc. Comput Appl Eng Educ 10: 194–203, 2002; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.10028 [source]
Significance of Modeling Error in Structural Parameter Estimation
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 1 2001, Masoud Sanayei
Structural health monitoring systems rely on algorithms to detect potential changes in structural parameters that may be indicative of damage. Parameter-estimation algorithms seek to identify changes in structural parameters by adjusting the parameters of an a priori finite-element model of a structure to reconcile its response with a set of measured test data. Modeling error, represented as uncertainty in the parameters of a finite-element model of the structure, curtails the capability of parameter estimation to capture the physical behavior of the structure. The performance of four error functions, two stiffness-based and two flexibility-based, is compared in the presence of modeling error in terms of the propagation rate of the modeling error and the quality of the final parameter estimates. Three different types of parameters are used in the parameter estimation procedure: (1) unknown parameters that are to be estimated, (2) known parameters assumed to be accurate, and (3) uncertain parameters that manifest the modeling error and are assumed known and not to be estimated. The significance of modeling error is investigated with respect to excitation and measurement type and locations, the type of error function, the location of the uncertain parameter, and the selection of the unknown parameters to be estimated. It is illustrated in two examples that the stiffness-based error functions perform significantly better than the corresponding flexibility-based error functions in the presence of modeling error. Additionally, the topology of the structure, the excitation and measurement type and locations, and the location of the uncertain parameters with respect to the unknown parameters can have a significant impact on the quality of the parameter estimates. Insight into the significance of modeling error and its potential impact on the resulting parameter estimates is presented through analytical and numerical examples using static and modal data. [source]

On the connectivity of Bluetooth-based ad hoc networks
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 7 2009, P. Crescenzi
Abstract We study the connectivity properties of a family of random graphs that closely model the Bluetooth device discovery process, where each device tries to connect to other devices within its visibility range in order to establish reliable communication channels yielding a connected topology. Specifically, we provide both analytical and experimental evidence that when the visibility range of each node (i.e. device) is limited to a vanishing function of n, the total number of nodes in the system, full connectivity can still be achieved with high probability by letting each node connect only to a 'small' number of visible neighbors. Our results extend previous studies, where connectivity properties were analyzed only for the case of a constant visibility range, and provide evidence that Bluetooth can indeed be used for establishing large ad hoc networks. Copyright © 2008 John Wiley & Sons, Ltd. [source]
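The random-graph family in the Bluetooth abstract is easy to experiment with. The toy simulation below places n devices in the unit square, gives each a shrinking visibility radius, and lets each connect to a handful of visible neighbours; the radius law and neighbour count are assumptions for illustration, not the paper's constants.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(0)

def bluetooth_graph_connected(n=2000, r=None, c=3):
    """Toy Bluetooth-like graph: each node links to up to c neighbors within radius r."""
    r = r if r is not None else 2 * np.sqrt(np.log(n) / n)   # shrinking visibility (assumed)
    pts = rng.random((n, 2))
    rows, cols = [], []
    for i in range(n):
        d = np.hypot(*(pts - pts[i]).T)
        visible = np.flatnonzero((d > 0) & (d < r))
        for j in rng.permutation(visible)[:c]:               # pick c visible neighbors
            rows += [i, j]
            cols += [j, i]                                   # store edge undirected
    adj = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    return connected_components(adj, directed=False)[0] == 1

print(sum(bluetooth_graph_connected() for _ in range(10)), "of 10 runs connected")
```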
Stochastic and Relaxation Processes in Argon by Measurements of Dynamic Breakdown Voltages
CONTRIBUTIONS TO PLASMA PHYSICS, Issue 7 2005, V. Lj.
Abstract Statistically based measurements of breakdown voltages Ub and breakdown delay times td and their variations in transient regimes of establishment and relaxation of discharges are a convenient method to study the stochastic processes of electrical breakdown of gases, as well as relaxation kinetics in the afterglow. In this paper, measurements and statistical analysis of the dynamic breakdown voltages Ub for linearly rising (ramp) pulses in argon at 1.33 mbar and rates of voltage rise k up to 800 V s−1 are presented. It was found that electrical breakdown by linearly rising (ramp) pulses is an inhomogeneous Poisson process caused by the variation of the primary and secondary ionization coefficients (α, γ) and the electron yield Y with voltage (time). The experimental breakdown voltage distributions were fitted by theoretical distributions by applying approximate analytical and numerical models. The afterglow kinetics in argon was studied on the basis of the dependence of the initial electron yield on the relaxation time, Y0(τ), derived from the fitting of distributions. The space-charge decay was explained by the surface recombination of nitrogen atoms present as impurities. The afterglow kinetics and the surface recombination coefficients on the gas tube and cathode were determined from a gas-phase model. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Channel Coordination for a Supply Chain with a Risk-Neutral Manufacturer and a Loss-Averse Retailer
DECISION SCIENCES, Issue 3 2007, Charles X. Wang
ABSTRACT This article considers a decentralized supply chain in which a single manufacturer is selling a perishable product to a single retailer facing uncertain demand. It differs from traditional supply chain contract models in two ways. First, while traditional supply chain models are based on risk neutrality, this article takes the viewpoint of behavioral principal–agent theory and assumes the manufacturer is risk neutral and the retailer is loss averse. Second, while gain/loss (GL) sharing is common in practice, there is a lack of analysis of GL-sharing contracts in the supply chain contract literature. This article investigates the role of a GL-sharing provision for mitigating the loss-aversion effect, which drives down the retailer order quantity and total supply chain profit. We analyze contracts that include GL-sharing-and-buyback (GLB) credit provisions as well as the special cases of GL contracts and buyback contracts. Our analytical and numerical results lend insight into how a manufacturer can design a contract to improve total supply chain, manufacturer, and retailer performance. In particular, we show that there exists a special class of distribution-free GLB contracts that can coordinate the supply chain and arbitrarily allocate the expected supply chain profit between the manufacturer and retailer; in contrast with other contracts, the parameter values for contracts in this class do not depend on the probability distribution of market demand. This feature is meaningful in practice because (i) the probability distribution of demand faced by a retailer is typically unknown by the manufacturer and (ii) a manufacturer can offer the same contract to multiple noncompeting retailers that differ by demand distribution and still coordinate the supply chains. [source]
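The claim in the supply-chain abstract that loss aversion drives down the retailer's order is easy to see in a bare newsvendor toy (assumed numbers and a piecewise-linear loss-averse utility; this is a generic illustration, not the paper's GLB contract model):

```python
import numpy as np

rng = np.random.default_rng(42)
demand = rng.uniform(50, 150, 100_000)        # assumed demand distribution
price, cost = 10.0, 6.0                       # assumed retail price and wholesale cost

def expected_utility(q, loss_weight):
    """Mean piecewise-linear utility of newsvendor profit at order quantity q."""
    profit = price * np.minimum(demand, q) - cost * q
    return np.mean(np.where(profit >= 0, profit, loss_weight * profit))

qs = np.arange(50, 151)
for lam in (1.0, 2.0, 3.0):                   # lam = 1 is the risk-neutral benchmark
    best = qs[np.argmax([expected_utility(q, lam) for q in qs])]
    print(f"loss weight {lam:.0f}: optimal order ~ {best}")
```

With these numbers the risk-neutral order sits at the classical critical fractile (about 90 units), and the order falls as the loss weight grows, which is the effect the contract provisions are designed to offset.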
Effects of Orlistat on Visceral Fat After Liposuction
DERMATOLOGIC SURGERY, Issue 3 2009, TERESA MONTOYA MD
BACKGROUND Liposuction can aggravate metabolic complications associated with obesity. It has been shown that the recovery of weight lost through these interventions is associated with body fat redistribution toward the visceral cavity, increasing metabolic risk factors for coronary heart disease such as insulin resistance and high triglyceride levels. OBJECTIVES The aim of this study was to evaluate the consequences of liposuction on body mass redistribution and metabolic parameters 6 months after surgery and to evaluate the use of orlistat treatment (tetrahydrolipstatin) in controlling these parameters. METHODS A population of 31 women with a mean body mass index of 26.17±3.9 kg/m2 and undergoing liposuction of more than 1,000 cm3 was studied. Twelve of them were treated postsurgery with 120 mg of orlistat every 8 hours for the following 6 months. Anthropometric, analytical, and radiological (computed tomography) tests were performed to quantify visceral fat area before surgery and 6 months after surgery. RESULTS Despite weight loss after liposuction, visceral fat was not modified. Patients treated with orlistat showed a greater reduction in visceral fat, although the difference was not statistically significant. Orlistat use induced a reduction in low-density lipoprotein cholesterol values of 20.0±22.5 mg/dL, compared with an increase of 8.46±20.1 mg/dL in controls (p=.07). CONCLUSIONS Visceral fat does not decrease despite weight loss after liposuction. Orlistat use post-liposuction might be a useful tool because it shows a tendency to reduce visceral fat and improve the blood lipid profile. [source]

Making the case for objective performance metrics in newborn screening by tandem mass spectrometry
DEVELOPMENTAL DISABILITIES RESEARCH REVIEWS, Issue 4 2006, Piero Rinaldo
Abstract The expansion of newborn screening programs to include multiplex testing by tandem mass spectrometry requires understanding and close monitoring of performance metrics. This is not done consistently because of a lack of defined targets, and interlaboratory comparison is almost nonexistent. Between July 2004 and April 2006 (N = 176,185 cases), the overall performance metrics of the Minnesota program, limited to MS/MS testing, were as follows: detection rate 1:1,816, positive predictive value 37% (54% in 2006 to date), and false positive rate 0.09%. The repeat rate and the proportion of cases with abnormal findings actually reported are new metrics proposed here as an objective means to express the overall noise in a program, where noise is defined as the total number of abnormal results obtained using a given set of cut-off values. On the basis of our experience, we propose the following targets as evidence of adequate analytical and postanalytical performance: detection rate 1:3,000 or higher, positive predictive value >20%, and false positive rate <0.3%. © 2006 Wiley-Liss, Inc. MRDD Research Reviews 2006;12:255–261. [source]
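The screening metrics quoted above follow directly from confusion-matrix counts. In the sketch below the counts are back-calculated from the abstract's reported rates, so they are approximate:

```python
# Approximate counts reconstructed from the abstract's reported rates
# (illustrative; the paper's exact tallies may differ slightly).
n_screened = 176_185
true_pos = 97          # ~ n_screened / 1,816 confirmed diagnoses
false_pos = 160        # abnormal results not confirmed (assumed)

detection_rate = true_pos / n_screened
ppv = true_pos / (true_pos + false_pos)
false_pos_rate = false_pos / (n_screened - true_pos)

print(f"detection rate ~ 1:{1/detection_rate:.0f}")        # ~ 1:1816
print(f"positive predictive value ~ {ppv:.0%}")            # ~ 38%
print(f"false positive rate ~ {false_pos_rate:.2%}")       # ~ 0.09%
```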
Testing the intermediate disturbance hypothesis: when will there be two peaks of diversity?
DIVERSITY AND DISTRIBUTIONS, Issue 1 2005, Karin Johst
ABSTRACT Succession after disturbances generates a mosaic of patches in different successional stages. The intermediate disturbance hypothesis predicts that intermediate disturbances lead to the highest diversity of these stages on a regional scale, resulting in a hump-shaped diversity–disturbance curve. We tested this prediction using field data of forest succession and hypothetical succession scenarios in combination with analytical and simulation models. According to our study, the main factors shaping the diversity–disturbance curve and the position of the diversity maximum were the transition times between the successional stages, the transition type, neighbourhood effects and the choice of diversity measure. Although many scenarios confirmed the intermediate disturbance hypothesis, we found that deviations in the form of two diversity maxima were possible. Such bimodal diversity–disturbance curves occurred when early and late successional stages were separated by one or more long-lived (compared to the early stages) intermediate successional stages. Although the field data which met these conditions among all those tested were rare (one of six), the consequences of detecting two peaks are fundamental. The impact of disturbances on biodiversity can be complex and deviate from a hump-shaped curve. [source]

Long-term landscape evolution: linking tectonics and surface processes
EARTH SURFACE PROCESSES AND LANDFORMS, Issue 3 2007, Paul Bishop
Abstract Research in landscape evolution over millions to tens of millions of years slowed considerably in the mid-20th century, when Davisian and other approaches to geomorphology were replaced by functional, morphometric and ultimately process-based approaches. Hack's scheme of dynamic equilibrium in landscape evolution was perhaps the major theoretical contribution to long-term landscape evolution between the 1950s and about 1990, but it essentially 'looked back' to Davis for its springboard to a viewpoint contrary to that of Davis, as did less widely known schemes, such as Crickmay's hypothesis of unequal activity. Since about 1990, the field of long-term landscape evolution has blossomed again, stimulated by the plate tectonics revolution and its re-forging of the link between tectonics and topography, and by the development of numerical models that explore the links between tectonic processes and surface processes. This numerical modelling of landscape evolution has been built around formulations of bedrock river processes and slope processes, and has mostly focused on high-elevation passive continental margins and convergent zones; these models now routinely include flexural and denudational isostasy. Major breakthroughs in analytical and geochronological techniques have been of profound relevance to all of the above. Low-temperature thermochronology, and in particular apatite fission track analysis and (U–Th)/He analysis in apatite, have enabled rates of rock uplift and denudational exhumation from relatively shallow crustal depths (up to about 4 km) to be determined directly from, in effect, rock hand specimens. In a few situations, (U–Th)/He analysis has been used to determine the antiquity of major, long-wavelength topography. Cosmogenic isotope analysis has enabled the determination of the 'ages' of bedrock and sedimentary surfaces, and/or the rates of denudation of these surfaces. These latter advances represent in some ways a 'holy grail' in geomorphology in that they enable determination of 'dates and rates' of geomorphological processes directly from rock surfaces. The increasing availability of analytical techniques such as cosmogenic isotope analysis should mean that much larger data sets become possible and lead to more sophisticated analyses, such as probability density functions (PDFs) of cosmogenic ages and even of cosmogenic isotope concentrations (CICs). PDFs of isotope concentrations must be a function of catchment area geomorphology (including tectonics) and it is at least theoretically possible to infer aspects of source area geomorphology and geomorphological processes from PDFs of CICs in sediments ('detrital CICs'). Thus it may be possible to use PDFs of detrital CICs in basin sediments as a tool to infer aspects of the sediments' source area geomorphology and tectonics, complementing the standard sedimentological textural and compositional approaches to such issues. One of the most stimulating of recent conceptual advances has followed the consideration of the relationships between tectonics, climate and surface processes, and especially the recognition of the importance of denudational isostasy in driving rock uplift (i.e. in driving tectonics and crustal processes). Attention has been focused very directly on surface processes and on the ways in which they may 'drive' rock uplift and thus even influence sub-surface crustal conditions, such as pressure and temperature. Consequently, the broader geoscience communities are looking to geomorphologists to provide more detailed information on rates and processes of bedrock channel incision, as well as on catchment responses to such bedrock channel processes. More sophisticated numerical models of processes in bedrock channels and on their flanking hillslopes are required.
In current numerical models of the long-term evolution of hillslopes and interfluves, for example, the simple dependency on slope of both the fluvial and hillslope components of these models means that a Davisian type of landscape evolution characterized by slope lowering is inevitably 'confirmed' by the models. In numerical modelling, the next advances will require better parameterized algorithms for hillslope processes, and more sophisticated formulations of bedrock channel incision processes, incorporating, for example, the effects of sediment shielding of the bed. Such increasing sophistication must be matched by careful assessment and testing of model outputs using pre-established criteria and tests. Confirmation by these more sophisticated Davisian-type numerical models of slope lowering under conditions of tectonic stability (no active rock uplift), and of constant slope angle and steady-state landscape under conditions of ongoing rock uplift, will indicate that the Davis and Hack models are not mutually exclusive. A Hack-type model (or a variant of it, incorporating slope adjustment to rock strength rather than to regolith strength) will apply to active settings where there is sufficient stream power and/or sediment flux for channels to incise at the rate of rock uplift. Post-orogenic settings of decreased (or zero) active rock uplift would be characterized by a Davisian scheme of declining slope angles and non-steady-state (or transient) landscapes. Such post-orogenic landscapes deserve much more attention than they have received of late, not least because the intriguing questions they pose about the preservation of ancient landscapes were hinted at in passing in the 1960s and have recently re-surfaced. As we begin to ask again some of the grand questions that lay at the heart of geomorphology in its earliest days, large-scale geomorphology is on the threshold of another 'golden' era to match that of the first half of the 20th century, when cyclical approaches underpinned virtually all geomorphological work. Copyright © 2007 John Wiley & Sons, Ltd. [source]