First
Selected Abstracts

The K-z diagram of FIRST radio sources identified in the Boötes and Cetus fields
ASTRONOMISCHE NACHRICHTEN, Issue 8 2009. K. El Bouchefry.
Abstract: This paper presents the Hubble diagram (K-z relation) for FIRST (Faint Images of the Radio Sky at 20 cm) radio sources identified in the Boötes and Cetus fields. The correlation between the K magnitude of the FIRST-NDWFS sample and the photometric redshifts is found to be linear. The dispersion about the best-fit line is 1.53 for the whole sample and 0.75 at z > 1. The paper also presents a composite K-z diagram of FIRST radio sources and low-frequency selected radio samples with progressively fainter flux-density limits (3CRR, 6C, 7CRS and the EIS-NVSS sample). The majority of FIRST radio sources lie fainter than the no-evolution curve (3L* galaxies), probably reflecting the fact that galaxy luminosity is correlated with radio power. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
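As a concrete aside, the straight-line K-z fit described above is conventionally done in K versus log10(z). Below is a minimal sketch of such a fit and of the dispersion about the best-fit line; the magnitudes and redshifts are hypothetical placeholders, not the FIRST-NDWFS data.

```python
import numpy as np

# Hypothetical (z, K) pairs standing in for the FIRST-NDWFS identifications.
z = np.array([0.3, 0.5, 0.8, 1.2, 1.7, 2.4])
K = np.array([17.1, 17.9, 18.6, 19.2, 19.8, 20.5])

# K-z relations are conventionally fitted as K = a + b * log10(z).
b, a = np.polyfit(np.log10(z), K, deg=1)   # slope first, then intercept
residuals = K - (a + b * np.log10(z))
dispersion = residuals.std(ddof=2)         # scatter about the best-fit line

print(f"K = {a:.2f} + {b:.2f} log10(z); dispersion = {dispersion:.2f} mag")
```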
A survey of Low Luminosity Compact sources
ASTRONOMISCHE NACHRICHTEN, Issue 2-3 2009. M. Kunert-Bajraszewska.
Abstract: Based on the FIRST and SDSS catalogues, a flux-density-limited sample of weak Compact Steep Spectrum (CSS) sources with radio luminosity below 10^26 W Hz^-1 at 1.4 GHz has been constructed. Our previous multifrequency observations of CSS sources have shown that low luminosity small-scale objects can be strong candidates for compact faders. This finding supports the idea that some small-size radio sources are short-lived phenomena because of a lack of significant fuelling. They never 'grow up' to become FRI or FRII objects. This new sample marks the start of a systematic study of the radio properties and morphologies of the population of low luminosity compact (LLC) objects. An investigation of this new sample should also lead to a better understanding of compact faders. In this paper, the results of the first stage of the new project, the L-band MERLIN observations of 44 low luminosity CSS sources, are presented. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

The FIRST radio survey: Panchromatic properties of FIRST radio sources identified in the Boötes and Cetus fields
ASTRONOMISCHE NACHRICHTEN, Issue 1 2009. K. El Bouchefry.
Abstract: In this paper the availability of multi-wavelength optical/infrared information on counterparts of FIRST (Faint Images of the Radio Sky at 20 cm) radio sources over ~9.2 deg^2 in the Boötes field and ~2.4 deg^2 in the Cetus field is exploited to infer the physical properties of the faint radio population. The optically identified radio sources have been divided into resolved galaxies and stellar-like objects, and the faint radio population is found to be mainly composed of early-type galaxies with very red colours (Bw - R ≥ 4.6). A total of 57 counterparts of FIRST radio sources have extremely red colours (R - K ≥ 5). Photometric redshifts from Hyperz imply that the Extremely Red Object (ERO) counterparts to FIRST radio sources are mostly located in the range z = 0.7-2, with the bulk of the population at z ~ 1. Taking advantage of the near-infrared imaging from FLAMEX (FLAMINGOS Extragalactic Infrared Survey), the ERO counterparts to FIRST radio sources are separated into passively evolving and dusty star-forming galaxies using their RJK colours; the relatively blue J - K colours of these galaxies suggest that most are old elliptical galaxies (18/25) rather than dusty starburst galaxies (7/25). A total of 15 Distant Red Galaxies (DRGs) with J - K > 2.3 have been identified as counterparts to FIRST radio sources in the Cetus field, and 3 DRGs in the Boötes field. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
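For illustration, the two colour cuts quoted in the abstract above (R - K ≥ 5 for EROs, J - K > 2.3 for DRGs) amount to simple catalogue filters. A minimal sketch with hypothetical magnitudes; only the thresholds come from the abstract.

```python
import numpy as np

# Hypothetical R, J, K magnitudes for a few radio-source counterparts.
R = np.array([22.4, 21.0, 23.1, 20.2])
J = np.array([19.5, 18.9, 20.4, 18.0])
K = np.array([17.0, 16.8, 17.9, 16.5])

is_ero = (R - K) >= 5.0   # Extremely Red Objects
is_drg = (J - K) > 2.3    # Distant Red Galaxies

print("ERO indices:", np.flatnonzero(is_ero))
print("DRG indices:", np.flatnonzero(is_drg))
```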
Changes in left ventricular ejection time and pulse transit time derived from finger photoplethysmogram and electrocardiogram during moderate haemorrhage
CLINICAL PHYSIOLOGY AND FUNCTIONAL IMAGING, Issue 3 2009. Paul M. Middleton.
Summary. Objectives: Early identification of haemorrhage is difficult when a bleeding site is not apparent. This study explored the potential use of the finger photoplethysmographic (PPG) waveform derived left ventricular ejection time (LVETp) and pulse transit time (PTT) for detecting blood loss, by using blood donation as a model of controlled mild to moderate haemorrhage. Methods: This was a prospective, observational study carried out in a convenience sample of blood donors. LVETp, PTT and R-R interval (RRi) were computed from simultaneous measurement of the electrocardiogram (ECG) and the finger infrared photoplethysmogram obtained from 43 healthy volunteers during blood donation. The blood donation process was divided into four stages: (i) pre-donation (PRE), (ii) first half of donation (FIRST), (iii) second half of donation (SECOND), (iv) post-donation (POST). Results and conclusions: Shortening of LVETp from 303 ± 2 to 293 ± 3 ms (mean ± SEM; P < 0.01) and prolongation of PTT from 177 ± 3 to 186 ± 4 ms (P < 0.01) were observed in 81% and 91% of subjects, respectively, when comparing PRE and POST. During blood donation, progressive blood loss produced falling trends in LVETp (P < 0.01) and rising trends in PTT (P < 0.01) in FIRST and SECOND, but a falling trend in RRi (P < 0.01) was only observed in SECOND. Monitoring trends in timing variables derived from non-invasive ECG and finger PPG signals may facilitate detection of blood loss in the early phase.

From Model to Forecasting: A Multicenter Study in Emergency Departments
ACADEMIC EMERGENCY MEDICINE, Issue 9 2010. Mathias Wargon MD. (ACADEMIC EMERGENCY MEDICINE 2010; 17:970-978. © 2010 by the Society for Academic Emergency Medicine)
Abstract: Objectives: This study investigated whether mathematical models using calendar variables could identify the determinants of emergency department (ED) census over time in geographically close EDs and assessed the performance of long-term forecasts. Methods: Daily visits in four EDs at academic hospitals in the Paris area were collected from 2004 to 2007. First, a general linear model (GLM) based on calendar variables was used to assess two consecutive periods of 2 years each to create and test the mathematical models. Second, 2007 ED attendance was forecasted based on a training set of data from 2004 to 2006. These analyses were performed on data sets from each individual ED and on a virtual mega-ED grouping all of the visits. Models and forecast accuracy were evaluated by mean absolute percentage error (MAPE). Results: The authors recorded 299,743 and 322,510 ED visits for the two periods, 2004-2005 and 2006-2007, respectively. The models accounted for up to 50% of the variations with a MAPE of less than 10%. Visit patterns according to weekdays and holidays were different from one hospital to another, without seasonality. Influential factors changed over time within one ED, reducing the accuracy of forecasts. Forecasts led to a MAPE of 5.3% for the four EDs together and from 8.1% to 17.0% for each hospital. Conclusions: Unexpectedly, in geographically close EDs over short periods of time, calendar determinants of attendance were different. In our setting, models and forecasts are more valuable for predicting the combined ED attendance of several hospitals. In similar settings where resources are shared between facilities, these mathematical models could be a valuable tool to anticipate staff needs and site allocation.
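The forecasting procedure in the abstract above (a linear model on calendar variables, trained on 2004-2006 and scored on 2007 by MAPE) can be sketched as follows. The synthetic visit counts and the particular day-of-week/month dummies are assumptions; the abstract does not specify the exact covariates.

```python
import numpy as np
import pandas as pd

# Hypothetical daily ED visit counts over the study period.
idx = pd.date_range("2004-01-01", "2007-12-31", freq="D")
rng = np.random.default_rng(0)
visits = 120 + 15 * (idx.dayofweek == 0) + rng.normal(0, 8, len(idx))

# Calendar design matrix: intercept plus day-of-week and month dummies.
X = pd.get_dummies(pd.DataFrame({"dow": idx.dayofweek, "month": idx.month}),
                   columns=["dow", "month"], drop_first=True).astype(float)
X.insert(0, "const", 1.0)

train = np.asarray(idx.year < 2007)   # 2004-2006 as the training set
test = ~train                         # forecast 2007
beta, *_ = np.linalg.lstsq(X.values[train], visits[train], rcond=None)
forecast = X.values[test] @ beta

mape = np.mean(np.abs((visits[test] - forecast) / visits[test])) * 100
print(f"MAPE on the held-out year: {mape:.1f}%")
```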
FEATURE-BASED KOREAN GRAMMAR UTILIZING LEARNED CONSTRAINT RULES
COMPUTATIONAL INTELLIGENCE, Issue 1 2005. So-Young Park.
Abstract: In this paper, we propose a feature-based Korean grammar utilizing learned constraint rules in order to improve parsing efficiency. The proposed grammar consists of feature structures, feature operations, and constraint rules, and it has the following characteristics. First, a feature structure includes several features to express useful linguistic information for Korean parsing. Second, a feature operation generating a new feature structure is restricted to the binary-branching form, which can deal with Korean properties such as variable word order and constituent ellipsis. Third, constraint rules improve efficiency by preventing feature operations from generating spurious feature structures. Moreover, these rules are learned from a Korean treebank by a decision tree learning algorithm. The experimental results show that the feature-based Korean grammar can reduce the number of candidates to at most a third and that it runs 1.5-2 times faster than a CFG on a statistical parser.

Preference-Based Constrained Optimization with CP-Nets
COMPUTATIONAL INTELLIGENCE, Issue 2 2004. Craig Boutilier.
Abstract: Many artificial intelligence (AI) tasks, such as product configuration, decision support, and the construction of autonomous agents, involve a process of constrained optimization, that is, optimization of behavior or choices subject to given constraints. In this paper we present an approach for constrained optimization based on a set of hard constraints and a preference ordering represented using a CP-network, a graphical model for representing qualitative preference information. This approach offers both pragmatic and computational advantages. First, it provides a convenient and intuitive tool for specifying the problem, and in particular, the decision maker's preferences. Second, it admits an algorithm for finding the most preferred feasible (Pareto-optimal) outcomes that has the following anytime property: the set of preferred feasible outcomes is enumerated without backtracking. In particular, the first feasible solution generated by this algorithm is Pareto-optimal.

HIGH-DIMENSIONAL LEARNING FRAMEWORK FOR ADAPTIVE DOCUMENT FILTERING
COMPUTATIONAL INTELLIGENCE, Issue 1 2003. Wai Lam.
Abstract: We investigate the unique requirements of the adaptive textual document filtering problem and propose a new high-dimensional on-line learning framework, known as the REPGER (relevant feature pool with good training example retrieval rule) algorithm, to tackle this problem. Our algorithm possesses three characteristics. First, it maintains a pool of selective features with potentially high predictive power to predict document relevance. Second, besides retrieving documents according to their predicted relevance, it also retrieves incoming documents that are considered good training examples. Third, it can dynamically adjust the dissemination threshold throughout the filtering process so as to maintain a good filtering performance in a fully interactive environment. We have conducted experiments on three document corpora, namely, Associated Press, Foreign Broadcast Information Service, and Wall Street Journal, to compare the performance of our REPGER algorithm with two existing on-line learning algorithms. The results demonstrate that our REPGER algorithm gives better performance most of the time. Comparison with the TREC (Text Retrieval Conference) adaptive text filtering track participants was also made. The result shows that our REPGER algorithm is comparable to them.
Automated Negotiation from Declarative Contract Descriptions
COMPUTATIONAL INTELLIGENCE, Issue 4 2002. Daniel M. Reeves.
Abstract: Our approach for automating the negotiation of business contracts proceeds in three broad steps. First, determine the structure of the negotiation process by applying general knowledge about auctions and domain-specific knowledge about the contract subject, along with preferences from potential buyers and sellers. Second, translate the determined negotiation structure into an operational specification for an auction platform. Third, after the negotiation has completed, map the negotiation results to a final contract. We have implemented a prototype which supports these steps by employing a declarative specification (in courteous logic programs) of (1) high-level knowledge about alternative negotiation structures, (2) general-case rules about auction parameters, (3) rules to map the auction parameters to a specific auction platform, and (4) special-case rules for subject domains. We demonstrate the flexibility of this approach by automatically generating several alternative negotiation structures for the domain of travel shopping in a trading agent competition.

Interactive shadowing for 2D Anime
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2-3 2009. Eiji Sugisaki.
Abstract: In this paper, we propose an instant shadow generation technique for 2D animation, especially Japanese Anime. In traditional 2D Anime production, the entire animation including shadows is drawn by hand, so it takes a long time to complete. Shadows play an important role in the creation of symbolic visual effects. However, shadows are not always drawn, due to time constraints and a lack of animators, especially when the production schedule is tight. To solve this problem, we develop an easy shadowing approach that enables animators to easily create a layer of shadow and its animation based on the character's shapes. Our approach is both instant and intuitive. The only inputs required are the character or object shapes in the input animation sequence, with the alpha values generally used in the Anime production pipeline. First, shadows are automatically rendered on a virtual plane by using a shadow map based on these inputs. Then the rendered shadows can be edited by simple operations and simplified by a Gaussian filter. Several special effects such as blurring can be applied to the rendered shadow at the same time. Compared to existing approaches, ours is more efficient and effective for handling automatic shadowing in real time. Copyright © 2009 John Wiley & Sons, Ltd.

3D virtual simulator for breast plastic surgery
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2008. Youngjun Kim.
Abstract: We propose novel 3D virtual simulation software for breast plastic surgery. Our software comprises two processes: 3D torso modeling and virtual simulation of the surgery result. First, image-based modeling is performed in order to obtain a female subject's 3D torso data. Our image-based modeling method utilizes a template model, which is deformed according to the patient's photographs. For the deformation, we apply Procrustes analysis and radial basis functions (RBF). In order to enhance realism, the subject's photographs are mapped onto a mesh. Second, from the modeled subject data, we simulate the subject's virtual appearance after the plastic surgery by morphing the shape of the breasts. We solve the simulation problem with an example-based approach: the subject's virtual shape is obtained from the relations between pairs of feature points in previous patients' photographs taken before and after surgery. Copyright © 2008 John Wiley & Sons, Ltd.
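The template-deformation step in the abstract above relies on radial basis functions. The sketch below is a generic RBF warp that moves template landmarks onto photograph-derived targets and deforms nearby vertices smoothly; the Gaussian kernel and all coordinates are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_warp(src, dst, points, reg=1e-9):
    """Displace `points` so that landmarks `src` map onto `dst`,
    interpolating the displacement field with Gaussian RBFs."""
    d = np.linalg.norm(src[:, None] - src[None], axis=-1)
    sigma = d.mean() + reg                  # simple kernel-width heuristic
    phi = np.exp(-(d / sigma) ** 2)
    w = np.linalg.solve(phi + reg * np.eye(len(src)), dst - src)
    d_pts = np.linalg.norm(points[:, None] - src[None], axis=-1)
    return points + np.exp(-(d_pts / sigma) ** 2) @ w

# Hypothetical template landmarks nudged toward photo-derived targets.
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
dst = src + np.array([[0.0, 0, 0.1], [0, 0, 0], [0, 0.1, 0], [0, 0, 0.2]])
print(rbf_warp(src, dst, np.array([[0.2, 0.2, 0.2]])))
```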
As-consistent-As-possible compositing of virtual objects and video sequences
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2006. Guofeng Zhang.
Abstract: We present an efficient approach that merges virtual objects into video sequences taken by a freely moving camera in a realistic manner. The composition is visually and geometrically consistent through three main steps. First, a robust camera tracking algorithm based on key frames is proposed, which precisely recovers the focal length with a novel multi-frame strategy. Next, the concerned 3D models of the real scenes are reconstructed by means of an extended multi-baseline algorithm. Finally, the virtual objects in the form of 3D models are integrated into the real scenes, with special care for interaction consistency, including shadow casting, occlusions, and object animation. A variety of experiments have been implemented, which demonstrate the robustness and efficiency of our approach. Copyright © 2006 John Wiley & Sons, Ltd.

Natural head motion synthesis driven by acoustic prosodic features
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2005. Carlos Busso.
Abstract: Natural head motion is important to realistic facial animation and engaging human-computer interactions. In this paper, we present a novel data-driven approach to synthesize appropriate head motion by sampling from trained hidden Markov models (HMMs). First, while an actress recited a corpus specifically designed to elicit various emotions, her 3D head motion was captured and further processed to construct a head motion database that included synchronized speech information. Then, an HMM for each discrete head motion representation (derived directly from data using vector quantization) was created by using acoustic prosodic features derived from speech. Finally, first-order Markov models and interpolation techniques were used to smooth the synthesized sequence. Our comparison experiments and novel synthesis results show that synthesized head motions follow the temporal dynamic behavior of real human subjects. Copyright © 2005 John Wiley & Sons, Ltd.
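The pipeline in the abstract above trains one HMM per quantized head-motion representation on prosodic speech features and then samples from the trained models. A minimal sketch, with the hmmlearn library standing in for the authors' own HMM code and synthetic two-dimensional "prosodic" features:

```python
import numpy as np
from hmmlearn import hmm  # assumed stand-in; the paper used its own HMMs

# Synthetic feature stream (pitch and energy would be used in practice).
rng = np.random.default_rng(1)
features = np.concatenate([rng.normal(0, 1, (200, 2)),
                           rng.normal(3, 1, (200, 2))])

model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=50)
model.fit(features)

# Sampling yields a state sequence; in the paper, states map back to
# head-motion clusters and are then smoothed and interpolated.
observations, states = model.sample(30)
print(states)
```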
Rendering natural waters taking fluorescence into account
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5 2004. E. Cerezo.
Abstract: The aim of the work presented here is to generalize a system developed to treat general participating media, making it capable of considering volumetric inelastic processes such as fluorescence. Our system, based on the discrete ordinates method, is adequate for treating a complex participating medium such as natural waters, as it is prepared to deal with not only anisotropic but also highly peaked phase functions, as well as to consider the spectral behaviour of the medium's characteristic parameters. It is also able to generate detailed quantitative illumination information, such as the amount of light that reaches the medium boundaries or the amount of light absorbed in each of the medium voxels. First, we present an extended form of the radiative transfer equation that incorporates inelastic volumetric phenomena. Then, we discuss the necessary changes in the general calculation scheme to include inelastic scattering. We have applied all this to consider the most common inelastic effect in natural waters: fluorescence in chlorophyll-a. Copyright © 2004 John Wiley & Sons, Ltd.
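One standard way to write the extension mentioned above is to add an inelastic source term to the elastic radiative transfer equation; the notation below is generic and may differ from the paper's exact formulation:

```latex
(\vec{\omega} \cdot \nabla)\, L(x, \vec{\omega}, \lambda) =
  -\,\sigma_t(\lambda)\, L(x, \vec{\omega}, \lambda)
  + \sigma_s(\lambda) \int_{S^2} p(\vec{\omega}', \vec{\omega})\,
        L(x, \vec{\omega}', \lambda)\, d\omega'
  + \int_{\Lambda} \int_{S^2} \sigma_f(\lambda' \rightarrow \lambda)\,
        p_f(\vec{\omega}', \vec{\omega})\, L(x, \vec{\omega}', \lambda')\,
        d\omega'\, d\lambda'
```

The elastic in-scattering integral redistributes radiance over directions at a fixed wavelength, whereas the inelastic term also couples wavelengths: light absorbed at λ' (for example, by chlorophyll-a) is re-emitted at a different wavelength λ.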
Improving realism of a surgery simulator: linear anisotropic elasticity, complex interactions and force extrapolation
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3 2002. Guillaume Picinbono.
Abstract: In this article, we describe the latest developments of the minimally invasive hepatic surgery simulator prototype developed at INRIA. The goal of this simulator is to provide a realistic training test bed for performing laparoscopic procedures. Its main functionality is therefore to simulate the action of virtual laparoscopic surgical instruments for deforming and cutting tridimensional anatomical models. Throughout this paper, we present the general features of this simulator, including the implementation of several biomechanical models and the integration of two force-feedback devices in the simulation platform. More precisely, we describe three new developments that improve the overall realism of our simulator. First, we have developed biomechanical models, based on linear elasticity and finite element theory, that include the notion of anisotropic deformation. Indeed, we have generalized the linear elastic behaviour of anatomical models to 'transversally isotropic' materials, i.e. materials having a different behaviour in a given direction. We have also added to the volumetric model an external elastic membrane representing the 'liver capsule', a rather stiff skin surrounding the liver, which creates a kind of 'surface anisotropy'. Second, we have developed new contact models between surgical instruments and soft tissue models. For instance, after detecting contact with an instrument, we define specific boundary constraints on deformable models to represent various forms of interaction with a surgical tool, such as sliding, gripping, cutting or burning. In addition, we compute the reaction forces that should be felt by the user manipulating the force-feedback devices. The last improvement is related to the problem of haptic rendering. Currently, we are able to achieve a simulation frequency of 25 Hz (visual real time) with anatomical models of complex geometry and behaviour. But achieving good haptic feedback requires a force update frequency typically above 300 Hz (haptic real time). Thus, we propose a force extrapolation algorithm in order to reach haptic real time. Copyright © 2002 John Wiley & Sons, Ltd.

Realistic and efficient rendering of free-form knitwear
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2001. Hua Zhong.
Abstract: We present a method for rendering knitwear on free-form surfaces. This method has three main advantages. First, it renders yarn microstructure realistically and efficiently. Second, the rendering efficiency of the yarn microstructure does not come at the price of ignoring the interactions between neighboring yarn loops. Such interactions are modeled in our system to further enhance realism. Finally, our approach gives the user intuitive control over a few key aspects of knitwear appearance: the fluffiness of the yarn and the irregularity in the positioning of the yarn loops. The result is a system that efficiently produces highly realistic renderings of free-form knitwear with user control over key aspects of visual appearance. Copyright © 2001 John Wiley & Sons, Ltd.

Möbius Transformations For Global Intrinsic Symmetry Analysis
COMPUTER GRAPHICS FORUM, Issue 5 2010. Vladimir G. Kim.
The goal of our work is to develop an algorithm for automatic and robust detection of global intrinsic symmetries in 3D surface meshes. Our approach is based on two core observations. First, symmetry-invariant point sets can be detected robustly using critical points of the Average Geodesic Distance (AGD) function. Second, intrinsic symmetries are self-isometries of surfaces and as such are contained in the low-dimensional group of Möbius transformations. Based on these observations, we propose an algorithm that: 1) generates a set of symmetric points by detecting critical points of the AGD function, 2) enumerates small subsets of those feature points to generate candidate Möbius transformations, and 3) selects among those candidate Möbius transformations the one(s) that best map the surface onto itself. The main advantages of this algorithm stem from the stability of the AGD in predicting potential symmetric point features and the low dimensionality of the Möbius group for enumerating potential self-mappings. During experiments with a benchmark set of meshes augmented with human-specified symmetric correspondences, we find that the algorithm is able to find intrinsic symmetries for a wide variety of object types with moderate deviations from perfect symmetry.
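Step 2 of the algorithm above exploits the fact that a Möbius transformation of the extended complex plane is determined by three point correspondences. A self-contained sketch using the classical cross-ratio construction (the flattening of the surface to complex coordinates, which the method requires first, is omitted here):

```python
import numpy as np

def mobius_from_triplets(z, w):
    """2x2 complex matrix of the unique Möbius map taking z[0..2] to w[0..2]."""
    def to_zero_one_inf(p):
        # Matrix of the map sending p[0], p[1], p[2] to 0, 1, infinity.
        return np.array([[p[1] - p[2], -p[0] * (p[1] - p[2])],
                         [p[1] - p[0], -p[2] * (p[1] - p[0])]], dtype=complex)
    return np.linalg.inv(to_zero_one_inf(w)) @ to_zero_one_inf(z)

z = [0 + 0j, 1 + 0j, 0 + 1j]
w = [1 + 1j, 2 + 1j, 1 + 2j]          # the same triangle shifted by 1+1j
A = mobius_from_triplets(z, w)
f = lambda x: (A[0, 0] * x + A[0, 1]) / (A[1, 0] * x + A[1, 1])
print(f(0.5 + 0.5j))                  # -> (1.5+1.5j), i.e. the translation
```

Each candidate map produced this way would then be scored by how well it maps the surface onto itself, as the abstract describes.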
Interactive Cover Design Considering Physical Constraints
COMPUTER GRAPHICS FORUM, Issue 7 2009. Yuki Igarashi.
Abstract: We developed an interactive system to design a customized cover for a given three-dimensional (3D) object such as a camera, teapot, or car. The system first computes the convex hull of the input geometry. The user segments it into several cloth patches by drawing on the 3D surface. This paper provides two technical contributions. First, it introduces a specialized flattening algorithm for cover patches. It makes each two-dimensional edge in the flattened pattern equal to or longer than the original 3D edge; a smaller patch would fail to cover the object, and a larger patch would result in extra wrinkles. Second, it introduces a mechanism to verify that the user-specified opening would be large enough for the object to be removed. Starting with the initial configuration, the system virtually "pulls" the object out of the cover while avoiding excessive stretching of the cloth patches. We used the system to design real covers and confirmed that it functions as intended.

Semi-Automatic Time-Series Transfer Functions via Temporal Clustering and Sequencing
COMPUTER GRAPHICS FORUM, Issue 3 2009. Jonathan Woodring.
Abstract: When creating transfer functions for time-varying data, it is not clear what range of values to use for classification, as data value ranges and distributions change over time. In order to generate time-varying transfer functions, we search the data for classes that have similar behavior over time, assuming that data points that behave similarly belong to the same feature. We utilize a method we call temporal clustering and sequencing to find dynamic features in value space and create a corresponding transfer function. First, clustering finds groups of data points that have the same value-space activity over time. Then, sequencing derives a progression of clusters over time, creating chains that follow value distribution changes. Finally, the cluster sequences are used to create transfer functions, as sequences describe the value range distributions over time in a data set.
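The "temporal clustering" step above groups data points by their value trajectories over time. A minimal sketch using k-means from scikit-learn as the clustering engine; the paper does not prescribe this particular algorithm, and the field values are synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical time-varying field: values[t, i] is data point i at time step t.
rng = np.random.default_rng(2)
steady = rng.normal(0.2, 0.02, (10, 500))                        # stays low
rising = np.linspace(0.2, 0.9, 10)[:, None] + rng.normal(0, 0.02, (10, 500))
values = np.hstack([steady, rising])

# One value trajectory per data point; cluster points with similar behavior.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(values.T)
print(np.bincount(labels))   # two temporal behaviors -> two clusters
```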
Physically Guided Animation of Trees
COMPUTER GRAPHICS FORUM, Issue 2 2009. Ralf Habel.
Abstract: This paper presents a new method to animate the interaction of a tree with wind both realistically and in real time. The main idea is to combine statistical observations with physical properties in two major parts of tree animation. First, the interaction of a single branch with the forces applied to it is approximated by a novel, efficient two-step nonlinear deformation method, allowing arbitrary continuous deformations and circumventing the need to segment a branch to model its deformation behavior. Second, the interaction of wind with the dynamic system representing a tree is statistically modeled. By precomputing the response function of branches to turbulent wind in frequency space, the motion of a branch can be synthesized efficiently by sampling a 2D motion texture. Using a hierarchical form of vertex displacement, both methods can be combined in a single vertex shader, fully leveraging the power of modern GPUs to realistically animate thousands of branches and tens of thousands of leaves at practically no cost.

High-Quality Adaptive Soft Shadow Mapping
COMPUTER GRAPHICS FORUM, Issue 3 2007. Gaël Guennebaud.
Abstract: The recent soft shadow mapping technique [GBP06] allows the rendering in real time of convincing soft shadows on complex and dynamic scenes using a single shadow map. While attractive, this method suffers from shadow overestimation and becomes both expensive and approximate when dealing with large penumbrae. This paper proposes new solutions removing these limitations, hence providing an efficient and practical technique for soft shadow generation. First, we propose a new visibility computation procedure based on the detection of occluder contours that is more accurate and faster while reducing aliasing. Second, we present a shadow map multi-resolution strategy that keeps the computation complexity almost independent of the light size while maintaining high-quality rendering. Finally, we propose a view-dependent adaptive strategy that automatically reduces the screen resolution in regions of large penumbrae, thus allowing us to keep very high frame rates in any situation.

Applied Geometry: Discrete Differential Calculus for Graphics
COMPUTER GRAPHICS FORUM, Issue 3 2004. Mathieu Desbrun.
Geometry has been extensively studied for centuries, almost exclusively from a differential point of view. However, with the advent of the digital age, the interest directed to smooth surfaces has now partially shifted due to the growing importance of discrete geometry. From 3D surfaces in graphics to higher-dimensional manifolds in mechanics, computational sciences must deal with sampled geometric data on a daily basis; hence our interest in Applied Geometry. In this talk we cover different aspects of Applied Geometry. First, we discuss the problem of Shape Approximation, where an initial surface is accurately discretized (i.e., remeshed) using anisotropic elements through error minimization. Second, once we have a discrete geometry to work with, we briefly show how to develop a full-blown discrete calculus on such discrete manifolds, allowing us to manipulate functions, vector fields, or even tensors while preserving the fundamental structures and invariants of the differential case. We emphasize the applicability of our discrete variational approach to geometry by showing results on surface parameterization, smoothing, and remeshing, as well as virtual actors and thin-shell simulation. Joint work with: Pierre Alliez (INRIA), David Cohen-Steiner (Duke U.), Eitan Grinspun (NYU), Anil Hirani (Caltech), Jerrold E. Marsden (Caltech), Mark Meyer (Pixar), Fred Pighin (USC), Peter Schröder (Caltech), Yiying Tong (USC).

A Knowledge Formalization and Aggregation-Based Method for the Assessment of Dam Performance
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 3 2010. Corinne Curt.
The model's inputs are the whole set of available information and data: visual observations, monitoring measurements, calculated data, and documents related to design and construction processes. First, a formal grid is proposed to structure the inputs. It is composed of six fields: name, definition, scale, references as anchorage points on the scale, and spatial and temporal characteristics. Structured inputs are called indicators. Second, an indicator aggregation method is proposed that allows obtaining not only the dam performance but also an assessment of its design and construction practices. The methodology is illustrated mainly with the internal erosion mechanism through the embankment, but results concerning other failure modes are also provided. An application of the method for monitoring dams through time is given.
Reference-Free Damage Classification Based on Cluster Analysis
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2008. Hoon Sohn.
The ultimate goal of this study was to develop an in-situ non-destructive testing (NDT) technique that can continuously and autonomously inspect the bonding condition between a carbon FRP (CFRP) layer and a host reinforced concrete (RC) structure, when the CFRP layer is used for strengthening the RC structure. The uniqueness of this reference-free NDT is two-fold: First, features which are sensitive to CFRP debonding but insensitive to operational and environmental variations of the structure have been extracted only from current data, without direct comparison with previously obtained baseline data. Second, damage classification is performed instantaneously, without relying on predetermined decision boundaries. The extraction of the reference-free features is accomplished based on the concept of time reversal acoustics, and the instantaneous decision-making is achieved using cluster analysis. Monotonic and fatigue load tests of large-scale CFRP-strengthened RC beams were conducted to demonstrate the potential of the proposed reference-free debonding monitoring technique. Based on the experimental studies, it has been shown that the proposed reference-free NDT technique may minimize false alarms of debonding and unnecessary data interpretation by end users.

A Polymorphic Dynamic Network Loading Model
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 2 2008. Nie Yu (Marco).
The polymorphism, realized through a general node-link interface and proper discretization, offers several prominent advantages. First of all, PDNL allows road facilities in the same network to be represented by different traffic flow models, based on the trade-off between efficiency and realism and/or the characteristics of the targeted problem. Second, new macroscopic link/node models can be easily plugged into the framework and compared against existing ones. Third, PDNL decouples links and nodes in network loading, and thus opens the door to parallel computing. Finally, PDNL keeps track of individual vehicular quanta of arbitrary size, which makes it possible to replicate analytical loading results as closely as desired. PDNL thus offers an ideal platform for studying both analytical dynamic traffic assignment problems of different kinds and macroscopic traffic simulation.

Robust Transportation Network Design Under Demand Uncertainty
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 1 2007. Satish V. Ukkusuri.
The origin-destination trip matrices are taken as random variables with known probability distributions. Instead of finding optimal network design solutions for a given future scenario, we are concerned with solutions that are in some sense "good" for a variety of demand realizations. We introduce a definition of robustness accounting for the planner's required degree of robustness. We propose a formulation of the robust network design problem (RNDP) and develop a methodology based on a genetic algorithm (GA) to solve the RNDP. The proposed model generates globally near-optimal network design solutions based on the planner's input for robustness. The study makes two important contributions to the network design literature. First, robust network design solutions are significantly different from the deterministic NDPs, and not accounting for them could potentially underestimate the network-wide impacts. Second, systematic evaluation of the performance of the model and solution algorithm is conducted on different test networks and budget levels to explore the efficacy of this approach. The results highlight the importance of accounting for robustness in transportation planning, and the proposed approach is capable of producing high-quality solutions.

Assessment of Protection Systems for Buried Steel Pipelines Endangered by Rockfall
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2005. Bernhard Pichler.
First, a gravel-based protection system (GBPS) is investigated, that is, a pipeline buried in sandy gravel is considered. To assess the load-carrying behavior of this structure when subjected to rockfall, a finite element (FE) model has been developed. The development and the validation of this structural model are strictly separated, that is, they are based on two physically and statistically independent sets of experiments. Subsequently, scenarios of rockfall onto a gravel-buried steel pipe are analyzed considering different boundary conditions and structural dimensions. Following the conclusions drawn from these numerical analyses, an enhanced protection system (EPS) is proposed. It consists of gravel as an energy-absorbing and impact-damping system and a buried steel plate resting on walls made of concrete representing a load-carrying structural component. The potential and the limitations of both protection systems are discussed in detail.

Decentralized Parametric Damage Detection Based on Neural Networks
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 3 2002. Zhishen Wu.
In this paper, based on the concept of decentralized information structures and artificial neural networks, a decentralized parametric identification method for damage detection of structures with multiple degrees of freedom (MDOF) is conducted. First, a decentralized approach is presented for damage detection of substructures of an MDOF structural system by using neural networks. The displacement and velocity measurements from a substructure of a healthy structure system and the restoring force corresponding to this substructure are used to train the decentralized detection neural networks for the purpose of identifying the corresponding substructure. By using the trained decentralized detection neural networks, the difference in the interstory restoring force between the damaged and the undamaged substructures can be calculated. An evaluation index, the relative root mean square (RRMS) error, is presented to evaluate the condition of each substructure for the purpose of health monitoring. Although neural networks have been widely used for nonparametric identification, in this paper the decentralized parametric evaluation neural networks for substructures are trained for parametric identification. Based on the trained decentralized parametric evaluation neural networks and the RRMS error of substructures, the structural stiffness parameter of each subsystem can be forecast with high accuracy. The effectiveness of the decentralized parametric identification is evaluated through numerical simulations. It is shown that the decentralized parametric evaluation method has the potential of being a practical tool for a damage detection methodology applied to structure-unknown smart civil structures.
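The RRMS evaluation index in the abstract above is commonly defined as the root mean square of the reconstruction error normalized by the root mean square of the reference signal; the paper's exact normalization may differ:

```latex
\mathrm{RRMS} =
\sqrt{\frac{\sum_{t=1}^{T} \bigl(\hat{r}(t) - r(t)\bigr)^{2}}
           {\sum_{t=1}^{T} r(t)^{2}}} \times 100\%
```

Here r(t) is the measured interstory restoring force of a substructure and r̂(t) is the force reproduced by the trained subnetwork; a healthy substructure yields a small RRMS, while damage inflates it.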
Life-Cycle Performance of RC Bridges: Probabilistic Approach
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 1 2000. Dimitri V. Val.
This article addresses the problem of reliability assessment of reinforced concrete (RC) bridges during their service life. First, a probabilistic model for the assessment of time-dependent reliability of RC bridges is presented, with particular emphasis placed on deterioration of bridges due to corrosion of reinforcing steel. The model takes into account uncertainties associated with material properties, bridge dimensions, loads, and corrosion initiation and propagation. Time-dependent reliabilities are considered for ultimate and serviceability limit states. Examples illustrate the application of the model. Second, updating of predictive probabilistic models using site-specific data is considered. Bayesian statistical theory, which provides a mathematical basis for such updating, is outlined briefly, and its implementation for the updating of information about bridge properties using inspection data is described in more detail. An example illustrates the effect of this updating on bridge reliability.

Understanding chemical shielding tensors using group theory, MO analysis, and modern density-functional theory
CONCEPTS IN MAGNETIC RESONANCE, Issue 2 2009. Cory M. Widdifield.
Abstract: In this article, the relationships between molecular symmetry, molecular electronic structure, and chemical shielding (CS) tensors are discussed. First, a brief background on the CS interaction and CS tensors is given. Then, the visualization of the three-dimensional nature of CS is described. A simple method for examining the relationship between molecular orbitals (MOs) and CS tensors, using point groups and direct products of irreducible representations of MOs and rotational operators, is outlined. A number of specific examples are discussed, involving CS tensors of different nuclei in molecules of different symmetries, including ethene (D2h), hydrogen fluoride (C∞v), trifluorophosphine (C3v), and water (C2v). Finally, we review the application of this method to CS tensors in several interesting cases previously discussed in the literature, including acetylene (D∞h), the PtX4(2-) series of compounds (D4h), and the decamethylaluminocenium cation (D5d). © 2009 Wiley Periodicals, Inc. Concepts Magn Reson Part A 34A: 91-123, 2009.
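For reference, the principal components σ11 ≤ σ22 ≤ σ33 of the symmetric part of a chemical shielding tensor are commonly summarized by the isotropic shielding, span, and skew (one common set of conventions; the article itself may use another):

```latex
\sigma_{\mathrm{iso}} = \tfrac{1}{3}\,(\sigma_{11} + \sigma_{22} + \sigma_{33}),
\qquad \Omega = \sigma_{33} - \sigma_{11},
\qquad \kappa = \frac{3\,(\sigma_{\mathrm{iso}} - \sigma_{22})}{\Omega}
```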