Second

Distribution by Scientific Domains
Distribution within Medical Sciences

Kinds of Second

  • few second
  • first second
  • one second
  • several second

Terms modified by Second

  • second administration
  • second aim
  • second analysis
  • second application
  • second approach
  • second article
  • second assessment
  • second attempt
  • second audit
  • second author
  • second best
  • second biopsy
  • second birth
  • second birthday
  • second cancer
  • second cancers
  • second case
  • second category
  • second century ad
  • second century bc
  • second challenge
  • second child
  • second clade
  • second class
  • second cluster
  • second clutch
  • second cohort
  • second component
  • second condition
  • second contribution
  • second control group
  • second coordination sphere
  • second course
  • second crystal form
  • second cycle
  • second day
  • second decade
  • second derivative
  • second dimension
  • second dose
  • second drug
  • second dutch national survey
  • second edition
  • second effect
  • second egg
  • second electron transfer
  • second enzyme
  • second episode
  • second evaluation
  • second event
  • second examination
  • second example
  • second exon
  • second experiment
  • second exposure
  • second factor
  • second family
  • second finger
  • second follow-up
  • second form
  • second gene
  • second generation
  • second goal
  • second grade
  • second group
  • second growing season
  • second growth
  • second half
  • second harmonic
  • second harmonic generation
  • second hit
  • second home
  • second hour
  • second hydration shell
  • second hypothesis
  • second infusion
  • second injection
  • second instance
  • second interview
  • second issue
  • second kind
  • second language
  • second language acquisition
  • second language acquisition research
  • second language learner
  • second language learning
  • second law
  • second layer
  • second leading cause
  • second level
  • second life
  • second ligand
  • second line
  • second look
  • second male
  • second malignancy
  • second malignant neoplasm
  • second measurement
  • second mechanism
  • second messenger
  • second messenger pathway
  • second messenger system
  • second method
  • second millennium
  • second millennium bc
  • second mode
  • second model
  • second molar
  • second molar tooth
  • second molecule
  • second moment
  • second month
  • second mutation
  • second night
  • second objective
  • second occasion
  • second only
  • second operation
  • second opinion
  • second order
  • second paper
  • second part
  • second patient
  • second pattern
  • second peak
  • second period
  • second phase
  • second polar body
  • second polymorph
  • second population
  • second position
  • second postnatal week
  • second postoperative day
  • second prediction
  • second pregnancy
  • second premolar
  • second primary
  • second primary cancers
  • second primary malignancy
  • second primary tumor
  • second problem
  • second procedure
  • second question
  • second questionnaire
  • second relapse
  • second remission
  • second report
  • second republic
  • second round
  • second route
  • second run
  • second sample
  • second scenario
  • second season
  • second section
  • second semester
  • second series
  • second session
  • second set
  • second shell
  • second signal
  • second site
  • second species
  • second stage
  • second step
  • second stimulus
  • second strand
  • second strategy
  • second structure
  • second study
  • second substrate
  • second subsystem
  • second summer
  • second surgery
  • second survey
  • second target
  • second task
  • second term
  • second test
  • second time
  • second toe
  • second transition
  • second transmembrane domain
  • second transplant
  • second transplantation
  • second treatment
  • second trial
  • second trimester
  • second trimester pregnancy termination
  • second twin
  • second type
  • second use
  • second version
  • second virial coefficient
  • second visit
  • second wave
  • second week
  • second world war
  • second year
  • second year student

Selected Abstracts


    MRI and 1H MRS of the Breast: Presence of a Choline Peak as Malignancy Marker is Related to k21 Value of the Tumor in Patients with Invasive Ductal Carcinoma

    THE BREAST JOURNAL, Issue 6 2008
    Patricia R. Geraghty MD
    Abstract: To assess which specific morphologic features, enhancement patterns, or pharmacokinetic parameters on breast magnetic resonance imaging (MRI) could predict a false-negative outcome of a proton MR spectroscopy (1H MRS) exam in patients with invasive breast cancer. Sixteen patients with invasive ductal carcinoma of the breast were prospectively included and underwent both contrast-enhanced breast MRI and 1H MRS examination of the breast. The MR images were reviewed, and the lesions' morphologic features, enhancement patterns, and pharmacokinetic parameters (k21 value) were scored according to the ACR BI-RADS-MRI lexicon criteria. For the in vivo MRS studies, each spectrum was evaluated for the presence of choline based on consensus reading. Breast MRI and 1H MRS data were compared to histopathologic findings. In vivo 1H MRS detected a choline peak in 14/16 (88%) cancers. A false-negative 1H MRS study occurred in 2/16 (14%) cancer patients. k21 values differed between the two groups: the 14 choline-positive cancers had k21 values ranging from 0.01 to 0.20/second (mean 0.083/second), whereas the two choline-negative cancers showed k21 values of 0.03 and 0.05/second (mean 0.040/second). Enhancement kinetics also differed between the two groups: both choline-negative cancers showed a late-phase plateau (100%), whereas this was seen in only 5/14 (36%) of the choline-positive cases. There was no difference between the two groups with regard to morphologic features on MRI. This study showed that false-negative 1H MRS examinations do occur in breast cancer patients, and that the presence of a choline peak on 1H MRS as malignancy marker is related to the k21 value of the invasive tumor being imaged. [source]


    Dynamic T1 estimation of brain tumors using double-echo dynamic MR imaging

    JOURNAL OF MAGNETIC RESONANCE IMAGING, Issue 1 2003
    Yoshiyuki Ishimori RT
    Abstract: Purpose: To assess the clinical utility of a new method for real-time estimation of T1 during the first pass of contrast agent by using this method to examine brain tumors. Materials and Methods: The multi-phase spoiled gradient-echo pulse sequence using the double-echo magnetic resonance (MR) technique was modified. In the first half of the pulse sequence, the flip angle was varied systematically. Then, static T1 values were calculated using differences in MR signal intensities between different flip angles. In the latter half of this sequence, changes in absolute T1 were calculated using differences in signal intensities before and after injection of contrast agent. The double-echo MR data were used to minimize the T2* effect. Five cases of neurinoma and seven cases of meningioma were examined. Changes in T1 during the first pass of contrast agent were compared between neurinoma and meningioma. Results: Changes in absolute T1 were clearly demonstrated on the parametric map. Although the changes in absolute T1 during the first pass of contrast agent did not allow differentiation between the two types of tumors, the mean gradient after the first pass was statistically higher for neurinoma than for meningioma (P < 0.05; meningioma, 0.011 ± 0.012 second⁻¹/second; neurinoma, 0.034 ± 0.020 second⁻¹/second). Conclusion: The present method appears to be useful for estimation of dynamic T1 changes in brain tumors in clinical settings. J. Magn. Reson. Imaging 2003;18:113–120. © 2003 Wiley-Liss, Inc. [source]
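
    The static-T1 step described above (T1 from spoiled gradient-echo signals acquired at several flip angles) is commonly implemented via the variable-flip-angle linearization. Below is a minimal numpy sketch of that standard approach, not necessarily the authors' exact implementation; the TR, flip angles, and helper names are illustrative.

    ```python
    import numpy as np

    def spgr_signal(t1, tr, flip_deg, m0=1.0):
        """Steady-state spoiled gradient-echo (SPGR) signal for a given T1."""
        e1 = np.exp(-tr / t1)
        a = np.deg2rad(flip_deg)
        return m0 * np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))

    def vfa_t1(signals, flip_deg, tr):
        """Estimate T1 from SPGR signals at several flip angles.

        Uses the standard linearization S/sin(a) = E1 * S/tan(a) + M0*(1 - E1),
        so the slope of a straight-line fit gives E1 = exp(-TR/T1).
        """
        a = np.deg2rad(np.asarray(flip_deg, dtype=float))
        s = np.asarray(signals, dtype=float)
        slope, _ = np.polyfit(s / np.tan(a), s / np.sin(a), 1)
        return -tr / np.log(slope)

    # Synthetic check: recover T1 = 900 ms from three flip angles (TR = 5 ms).
    tr, t1_true = 5.0, 900.0
    flips = [2.0, 10.0, 20.0]
    sig = [spgr_signal(t1_true, tr, f) for f in flips]
    print(f"estimated T1 = {vfa_t1(sig, flips, tr):.1f} ms")  # ~900.0 ms
    ```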


    Vulnerability of the superficial zone of immature articular cartilage to compressive injury

    ARTHRITIS & RHEUMATISM, Issue 10 2010
    Bernd Rolauffs
    Objective: The zonal composition and functioning of adult articular cartilage causes depth-dependent responses to compressive injury. In immature cartilage, shear and compressive moduli as well as collagen and sulfated glycosaminoglycan (sGAG) content also vary with depth. However, there is little understanding of the depth-dependent damage caused by injury. Since injury to immature knee joints most often causes articular cartilage lesions, this study was undertaken to characterize the zonal dependence of biomechanical, biochemical, and matrix-associated changes caused by compressive injury. Methods: Disks from the superficial and deeper zones of bovine calves were biomechanically characterized. Injury to the disks was achieved by applying a final strain of 50% compression at a strain rate of 100%/second, followed by biomechanical recharacterization. Tissue compaction upon injury as well as sGAG density, sGAG loss, and biosynthesis were measured. Collagen fiber orientation and matrix damage were assessed using histology, diffraction-enhanced x-ray imaging, and texture analysis. Results: Injured superficial zone disks showed surface disruption, tissue compaction by 20.3 ± 4.3% (mean ± SEM), and immediate biomechanical impairment that was revealed by a mean ± SEM decrease in dynamic stiffness to 7.1 ± 3.3% of the value before injury and equilibrium moduli that were below the level of detection. Tissue areas that appeared intact on histology showed clear textural alterations. Injured deeper zone disks showed collagen crimping but remained undamaged and biomechanically intact. Superficial zone disks did not lose sGAG immediately after injury, but lost 17.8 ± 1.4% of sGAG after 48 hours; deeper zone disks lost only 2.8 ± 0.3% of sGAG content. Biomechanical impairment was associated primarily with structural damage. Conclusion: The soft superficial zone of immature cartilage is vulnerable to compressive injury, causing superficial matrix disruption, extensive compaction, and textural alteration, which results in immediate loss of biomechanical function. In conjunction with delayed superficial sGAG loss, these changes may predispose the articular surface to further softening and tissue damage, thus increasing the risk of development of secondary osteoarthritis. [source]


    SECOND LOOK COLONOSCOPY: INDICATION AND REQUIREMENTS

    DIGESTIVE ENDOSCOPY, Issue 2009
    Jean-Francois Rey
    Background: There are circumstances in which a colonoscopy should be repeated at a short interval after a first endoscopic procedure that has not completely fulfilled its objective. Review of the literature: A second look colonoscopy is proposed when there remains a doubt about missed neoplastic lesions, either because the intestinal preparation was poor or because the video-endoscope did not achieve a complete course through the colon. The second look colonoscopy is also proposed at a short interval when it is suspected that the endoscopic removal of a single or of multiple neoplastic lesions was incomplete and that additional treatment is required. When the initial endoscopic procedure has completely fulfilled its objective, a second look colonoscopy can be proposed at longer intervals in surveillance programs. The intervals in surveillance after polypectomy are now adapted to the initial findings according to established guidelines. This also applies to the surveillance of incident focal cancer in patients suffering from chronic inflammatory bowel disease. Conclusion: In most developed countries, priority is given to colorectal cancer screening, and the focus is on quality assurance of colonoscopy, which is considered the gold standard procedure in the secondary prevention of colorectal cancer. [source]


    Changes in left ventricular ejection time and pulse transit time derived from finger photoplethysmogram and electrocardiogram during moderate haemorrhage

    CLINICAL PHYSIOLOGY AND FUNCTIONAL IMAGING, Issue 3 2009
    Paul M. Middleton
    Summary Objectives: Early identification of haemorrhage is difficult when a bleeding site is not apparent. This study explored the potential use of the finger photoplethysmographic (PPG) waveform derived left ventricular ejection time (LVETp) and pulse transit time (PTT) for detecting blood loss, by using blood donation as a model of controlled mild to moderate haemorrhage. Methods: This was a prospective, observational study carried out in a convenience sample of blood donors. LVETp, PTT and R-R interval (RRi) were computed from simultaneous measurement of the electrocardiogram (ECG) and the finger infrared photoplethysmogram obtained from 43 healthy volunteers during blood donation. The blood donation process was divided into four stages: (i) pre-donation (PRE), (ii) first half of donation (FIRST), (iii) second half of donation (SECOND), (iv) post-donation (POST). Results and conclusions: Shortening of LVETp from 303 ± 2 to 293 ± 3 ms (mean ± SEM; P < 0.01) and prolongation of PTT from 177 ± 3 to 186 ± 4 ms (P < 0.01) were observed in 81% and 91% of subjects, respectively, when comparing PRE and POST. During blood donation, progressive blood loss produced falling trends in LVETp (P < 0.01) and rising trends in PTT (P < 0.01) in FIRST and SECOND, but a falling trend in RRi (P < 0.01) was only observed in SECOND. Monitoring trends in timing variables derived from non-invasive ECG and finger PPG signals may facilitate detection of blood loss in the early phase. [source]
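
    As a rough illustration of how PTT can be derived from simultaneous ECG and PPG recordings, the sketch below measures the interval from each ECG R-peak to the steepest upslope (a common surrogate for the pulse foot) of the following PPG pulse. It assumes scipy is available; the synthetic waveforms, sampling rate, and search window are invented for the demo and do not reproduce the study's processing pipeline.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    FS = 250  # sampling rate in Hz (assumed)

    def pulse_transit_times(ecg, ppg, fs=FS):
        """PTT per beat: interval from each ECG R-peak to the steepest PPG
        upslope (a common surrogate for the pulse foot) within the next 400 ms."""
        r_peaks, _ = find_peaks(ecg, height=0.5 * ecg.max(), distance=int(0.4 * fs))
        dppg = np.gradient(ppg)
        ptts = []
        for r in r_peaks:
            window = dppg[r:r + int(0.4 * fs)]
            if window.size:
                onset = r + int(np.argmax(window))   # steepest upslope after R
                ptts.append((onset - r) / fs)
        return np.array(ptts)

    # Tiny synthetic demo: R-peaks once per second, PPG pulses ~180 ms later.
    t = np.arange(0, 10, 1 / FS)
    ecg = np.zeros_like(t)
    ecg[np.arange(10) * FS] = 1.0                    # impulse-like R-peaks
    ppg = np.zeros_like(t)
    for k in range(10):
        ppg += np.exp(-((t - (k + 0.18)) ** 2) / (2 * 0.05 ** 2))  # Gaussian pulse
    print(pulse_transit_times(ecg, ppg).round(3))    # ~0.13 s per beat
    ```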


    [Commentary] SMOKING CESSATION IN 10 SECONDS: A GENERAL PRACTITIONER'S VIEW

    ADDICTION, Issue 2 2008
    MARTIN EDWARDS
    No abstract is available for this article. [source]


    From Model to Forecasting: A Multicenter Study in Emergency Departments

    ACADEMIC EMERGENCY MEDICINE, Issue 9 2010
    Mathias Wargon MD
    ACADEMIC EMERGENCY MEDICINE 2010; 17:970–978 © 2010 by the Society for Academic Emergency Medicine Abstract Objectives: This study investigated whether mathematical models using calendar variables could identify the determinants of emergency department (ED) census over time in geographically close EDs and assessed the performance of long-term forecasts. Methods: Daily visits in four EDs at academic hospitals in the Paris area were collected from 2004 to 2007. First, a general linear model (GLM) based on calendar variables was used to assess two consecutive periods of 2 years each to create and test the mathematical models. Second, 2007 ED attendance was forecasted, based on a training set of data from 2004 to 2006. These analyses were performed on data sets from each individual ED and in a virtual mega ED, grouping all of the visits. Models and forecast accuracy were evaluated by mean absolute percentage error (MAPE). Results: The authors recorded 299,743 and 322,510 ED visits for the two periods, 2004–2005 and 2006–2007, respectively. The models accounted for up to 50% of the variations with a MAPE less than 10%. Visit patterns according to weekdays and holidays were different from one hospital to another, without seasonality. Influential factors changed over time within one ED, reducing the accuracy of forecasts. Forecasts led to a MAPE of 5.3% for the four EDs together and from 8.1% to 17.0% for each hospital. Conclusions: Unexpectedly, in geographically close EDs over short periods of time, calendar determinants of attendance were different. In our setting, models and forecasts are more valuable to predict the combined ED attendance of several hospitals. In similar settings where resources are shared between facilities, these mathematical models could be a valuable tool to anticipate staff needs and site allocation. [source]
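
    A calendar-variable model of daily census like the one described can be sketched as an ordinary linear regression on day-of-week and month dummies, evaluated by MAPE on a held-out year. The data below are synthetic and the feature set (no holiday flags) is deliberately minimal; this illustrates the modeling recipe, not the authors' GLM.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    # Synthetic daily ED census for 2004-2007: baseline + weekday effect + noise.
    days = pd.date_range("2004-01-01", "2007-12-31", freq="D")
    weekday_effect = np.array([12, 5, 3, 2, 4, -8, -15])
    visits = 200 + weekday_effect[days.dayofweek] + rng.normal(0, 8, len(days))

    # Calendar design matrix: day-of-week and month dummies (holiday flags omitted).
    X = pd.get_dummies(pd.DataFrame({
        "dow": days.dayofweek.astype(str),
        "month": days.month.astype(str),
    }), dtype=float)

    train = days.year <= 2006                    # fit on 2004-2006 ...
    model = LinearRegression().fit(X[train], visits[train])
    pred = model.predict(X[~train])              # ... forecast 2007

    mape = np.mean(np.abs((visits[~train] - pred) / visits[~train])) * 100
    print(f"2007 MAPE = {mape:.1f}%")            # small, by construction
    ```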


    FEATURE-BASED KOREAN GRAMMAR UTILIZING LEARNED CONSTRAINT RULES

    COMPUTATIONAL INTELLIGENCE, Issue 1 2005
    So-Young Park
    In this paper, we propose a feature-based Korean grammar utilizing learned constraint rules in order to improve parsing efficiency. The proposed grammar consists of feature structures, feature operations, and constraint rules, and it has the following characteristics. First, a feature structure includes several features to express useful linguistic information for Korean parsing. Second, a feature operation generating a new feature structure is restricted to the binary-branching form, which can deal with Korean properties such as variable word order and constituent ellipsis. Third, constraint rules improve efficiency by preventing feature operations from generating spurious feature structures; moreover, these rules are learned from a Korean treebank by a decision tree learning algorithm. The experimental results show that the feature-based Korean grammar can reduce the number of candidates by up to a third and runs 1.5–2 times faster than a CFG on a statistical parser. [source]


    Preference-Based Constrained Optimization with CP-Nets

    COMPUTATIONAL INTELLIGENCE, Issue 2 2004
    Craig Boutilier
    Many artificial intelligence (AI) tasks, such as product configuration, decision support, and the construction of autonomous agents, involve a process of constrained optimization, that is, optimization of behavior or choices subject to given constraints. In this paper we present an approach for constrained optimization based on a set of hard constraints and a preference ordering represented using a CP-network, a graphical model for representing qualitative preference information. This approach offers both pragmatic and computational advantages. First, it provides a convenient and intuitive tool for specifying the problem, and in particular, the decision maker's preferences. Second, it admits an algorithm for finding the most preferred feasible (Pareto-optimal) outcomes that has the following anytime property: the set of preferred feasible outcomes is enumerated without backtracking. In particular, the first feasible solution generated by this algorithm is Pareto optimal. [source]
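
    The anytime property can be pictured with a small depth-first search over an acyclic CP-net: variables are expanded in topological order, each variable's values are tried in the order preferred given its parents, and hard constraints are checked at the leaves; per the abstract, the first feasible outcome generated this way is Pareto-optimal. The toy network, constraint, and data structures below are invented for illustration.

    ```python
    from itertools import product

    # Toy acyclic CP-net over binary variables. cpt[var] maps the tuple of
    # parent values to that variable's values in decreasing preference.
    parents = {"A": (), "B": ("A",), "C": ("A", "B")}
    cpt = {
        "A": {(): ["a1", "a0"]},
        "B": {("a1",): ["b0", "b1"], ("a0",): ["b1", "b0"]},
        "C": {pa: ["c1", "c0"] for pa in product(["a0", "a1"], ["b0", "b1"])},
    }
    order = ["A", "B", "C"]  # topological: parents before children

    def feasible(outcome):
        """Hard constraints; here a single toy constraint."""
        return not (outcome["A"] == "a1" and outcome["C"] == "c1")

    def preferred_feasible(assign=None, idx=0):
        """Depth-first search trying values in preferred-first order.
        Yields feasible outcomes; the first one yielded is Pareto-optimal."""
        assign = assign if assign is not None else {}
        if idx == len(order):
            if feasible(assign):
                yield dict(assign)
            return
        var = order[idx]
        pa = tuple(assign[p] for p in parents[var])
        for val in cpt[var][pa]:
            assign[var] = val
            yield from preferred_feasible(assign, idx + 1)
        del assign[var]

    print(next(preferred_feasible()))  # {'A': 'a1', 'B': 'b0', 'C': 'c0'}
    ```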


    HIGH-DIMENSIONAL LEARNING FRAMEWORK FOR ADAPTIVE DOCUMENT FILTERING

    COMPUTATIONAL INTELLIGENCE, Issue 1 2003
    Wai Lam
    We investigate the unique requirements of the adaptive textual document filtering problem and propose a new high-dimensional on-line learning framework, known as the REPGER (relevant feature pool with good training example retrieval rule) algorithm to tackle this problem. Our algorithm possesses three characteristics. First, it maintains a pool of selective features with potentially high predictive power to predict document relevance. Second, besides retrieving documents according to their predicted relevance, it also retrieves incoming documents that are considered good training examples. Third, it can dynamically adjust the dissemination threshold throughout the filtering process so as to maintain a good filtering performance in a fully interactive environment. We have conducted experiments on three document corpora, namely, Associated Press, Foreign Broadcast Information Service, and Wall Street Journal to compare the performance of our REPGER algorithm with two existing on-line learning algorithms. The results demonstrate that our REPGER algorithm gives better performance most of the time. Comparison with the TREC (Text Retrieval Conference) adaptive text filtering track participants was also made. The result shows that our REPGER algorithm is comparable to them. [source]
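
    The abstract does not give REPGER's threshold-update rule, so the sketch below illustrates the general idea of dynamic dissemination-threshold adjustment with a simple proportional controller toward a target delivery rate; the gain, target, and uniform relevance scores are all assumptions.

    ```python
    import random

    def adjust_threshold(threshold, delivered, seen, target_rate, gain=0.05):
        """Illustrative proportional update of a dissemination threshold: if the
        cumulative delivery rate exceeds the target, raise the bar; if it falls
        short, lower it. (REPGER's actual update rule is more sophisticated.)"""
        rate = delivered / max(seen, 1)
        return min(1.0, max(0.0, threshold + gain * (rate - target_rate)))

    # Filtering-loop sketch: scores in [0, 1], target delivery rate of 10%.
    random.seed(0)
    threshold, delivered = 0.5, 0
    for seen in range(1, 2001):
        score = random.random()           # stand-in for predicted relevance
        if score >= threshold:
            delivered += 1
        threshold = adjust_threshold(threshold, delivered, seen, target_rate=0.1)
    print(f"final threshold = {threshold:.2f}")  # drifts toward ~0.9 here
    ```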


    Automated Negotiation from Declarative Contract Descriptions

    COMPUTATIONAL INTELLIGENCE, Issue 4 2002
    Daniel M. Reeves
    Our approach for automating the negotiation of business contracts proceeds in three broad steps. First, determine the structure of the negotiation process by applying general knowledge about auctions and domain-specific knowledge about the contract subject along with preferences from potential buyers and sellers. Second, translate the determined negotiation structure into an operational specification for an auction platform. Third, after the negotiation has completed, map the negotiation results to a final contract. We have implemented a prototype which supports these steps by employing a declarative specification (in courteous logic programs) of (1) high-level knowledge about alternative negotiation structures, (2) general-case rules about auction parameters, (3) rules to map the auction parameters to a specific auction platform, and (4) special-case rules for subject domains. We demonstrate the flexibility of this approach by automatically generating several alternative negotiation structures for the domain of travel shopping in a trading agent competition. [source]


    3D virtual simulator for breast plastic surgery

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2008
    Youngjun Kim
    Abstract We propose novel 3D virtual simulation software for breast plastic surgery. Our software comprises two processes: 3D torso modeling and virtual simulation of the surgery result. First, image-based modeling is performed in order to obtain a female subject's 3D torso data. Our image-based modeling method utilizes a template model, which is deformed according to the patient's photographs. For the deformation, we applied Procrustes analysis and radial basis functions (RBF). In order to enhance reality, the subject's photographs are mapped onto a mesh. Second, from the modeled subject data, we simulate the subject's virtual appearance after the plastic surgery by morphing the shape of the breasts. We solve the simulation problem by an example-based approach: the subject's virtual shape is obtained from relations between pairs of feature-point sets taken from previous patients' photographs obtained before and after surgery. Copyright © 2008 John Wiley & Sons, Ltd. [source]
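
    The RBF-based template deformation can be sketched as follows: solve for per-landmark weights that carry the template landmarks exactly onto the subject's landmarks, then apply the same interpolant to every template vertex. A Gaussian kernel is used here for simplicity; the paper's choice of basis and the landmark data are not specified, so all names and numbers are illustrative.

    ```python
    import numpy as np

    def rbf_warp(src_marks, dst_marks, points, sigma=1.0):
        """Deform `points` with a Gaussian RBF interpolant that maps
        src_marks -> dst_marks exactly (one 3D weight vector per landmark)."""
        d2 = ((src_marks[:, None, :] - src_marks[None, :, :]) ** 2).sum(-1)
        phi = np.exp(-d2 / sigma**2)                      # n x n kernel matrix
        w = np.linalg.solve(phi, dst_marks - src_marks)   # n x 3 weights
        d2p = ((points[:, None, :] - src_marks[None, :, :]) ** 2).sum(-1)
        return points + np.exp(-d2p / sigma**2) @ w

    # Demo: landmarks pushed outward drag nearby template vertices along.
    src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    dst = src * 1.2
    verts = np.array([[0.5, 0.5, 0.0], [0.2, 0.1, 0.3]])
    print(rbf_warp(src, dst, verts, sigma=1.5).round(3))
    ```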


    Real-time navigating crowds: scalable simulation and rendering

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2006
    Julien Pettré
    Abstract This paper introduces a framework for real-time simulation and rendering of crowds navigating in a virtual environment. The solution consists, first, of a specific environment preprocessing technique giving rise to navigation graphs, which are then used by the navigation and simulation tasks. Second, navigation planning interactively provides various solutions to the user's queries, allowing a crowd to be spread out by individualizing trajectories. A scalable simulation model enables the management of large crowds, while saving computation time for rendering tasks. Pedestrian graphical models are divided into three rendering fidelities ranging from billboards to dynamic meshes, allowing close-up views of detailed digital actors with a large variety of locomotion animations. Examples illustrate our method in several environments with crowds of up to 35,000 pedestrians with real-time performance. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Improving realism of a surgery simulator: linear anisotropic elasticity, complex interactions and force extrapolation

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3 2002
    Guillaume Picinbono
    Abstract In this article, we describe the latest developments of the minimally invasive hepatic surgery simulator prototype developed at INRIA. The goal of this simulator is to provide a realistic training test bed for performing laparoscopic procedures. Therefore, its main functionality is to simulate the action of virtual laparoscopic surgical instruments for deforming and cutting three-dimensional anatomical models. Throughout this paper, we present the general features of this simulator, including the implementation of several biomechanical models and the integration of two force-feedback devices in the simulation platform. More precisely, we describe three new important developments that improve the overall realism of our simulator. First, we have developed biomechanical models, based on linear elasticity and finite element theory, that include the notion of anisotropic deformation. Indeed, we have generalized the linear elastic behaviour of anatomical models to 'transversally isotropic' materials, i.e. materials having a different behaviour in a given direction. We have also added to the volumetric model an external elastic membrane representing the 'liver capsule', a rather stiff skin surrounding the liver, which creates a kind of 'surface anisotropy'. Second, we have developed new contact models between surgical instruments and soft tissue models. For instance, after detecting a contact with an instrument, we define specific boundary constraints on deformable models to represent various forms of interactions with a surgical tool, such as sliding, gripping, cutting or burning. In addition, we compute the reaction forces that should be felt by the user manipulating the force-feedback devices. The last improvement is related to the problem of haptic rendering. Currently, we are able to achieve a simulation frequency of 25 Hz (visual real time) with anatomical models of complex geometry and behaviour. But good haptic feedback requires forces to be updated typically above 300 Hz (haptic real time). Thus, we propose a force extrapolation algorithm in order to reach haptic real time. Copyright © 2002 John Wiley & Sons, Ltd. [source]
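
    A force extrapolation scheme of the kind mentioned can be as simple as linearly extrapolating the last two simulated force samples so the haptic loop has a fresh value at every query. The sketch below shows that minimal version; the actual algorithm in the paper may be more elaborate.

    ```python
    import numpy as np

    class ForceExtrapolator:
        """Linear extrapolation of the last two simulation force samples so the
        haptic loop can query forces between (slow) simulation updates."""

        def __init__(self):
            self.t_prev = self.t_last = 0.0
            self.f_prev = self.f_last = np.zeros(3)

        def push(self, t, force):            # called at ~25 Hz by the simulation
            self.t_prev, self.f_prev = self.t_last, self.f_last
            self.t_last, self.f_last = t, np.asarray(force, float)

        def query(self, t):                  # called at ~1 kHz by the haptic loop
            dt = self.t_last - self.t_prev
            if dt <= 0.0:
                return self.f_last
            slope = (self.f_last - self.f_prev) / dt
            return self.f_last + slope * (t - self.t_last)

    ex = ForceExtrapolator()
    ex.push(0.00, [0.0, 0.0, 0.0])
    ex.push(0.04, [0.1, 0.0, 0.0])           # 25 Hz updates (40 ms apart)
    for t in (0.041, 0.05, 0.07):            # haptic queries between updates
        print(t, ex.query(t).round(4))
    ```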


    Realistic and efficient rendering of free-form knitwear

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2001
    Hua Zhong
    Abstract We present a method for rendering knitwear on free-form surfaces. This method has three main advantages. First, it renders yarn microstructure realistically and efficiently. Second, the rendering efficiency of yarn microstructure does not come at the price of ignoring the interactions between the neighboring yarn loops. Such interactions are modeled in our system to further enhance realism. Finally, our approach gives the user intuitive control on a few key aspects of knitwear appearance: the fluffiness of the yarn and the irregularity in the positioning of the yarn loops. The result is a system that efficiently produces highly realistic rendering of free-form knitwear with user control on key aspects of visual appearance. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Möbius Transformations For Global Intrinsic Symmetry Analysis

    COMPUTER GRAPHICS FORUM, Issue 5 2010
    Vladimir G. Kim
    The goal of our work is to develop an algorithm for automatic and robust detection of global intrinsic symmetries in 3D surface meshes. Our approach is based on two core observations. First, symmetry invariant point sets can be detected robustly using critical points of the Average Geodesic Distance (AGD) function. Second, intrinsic symmetries are self-isometries of surfaces and as such are contained in the low dimensional group of Möbius transformations. Based on these observations, we propose an algorithm that: 1) generates a set of symmetric points by detecting critical points of the AGD function, 2) enumerates small subsets of those feature points to generate candidate Möbius transformations, and 3) selects among those candidate Möbius transformations the one(s) that best map the surface onto itself. The main advantages of this algorithm stem from the stability of the AGD in predicting potential symmetric point features and the low dimensionality of the Möbius group for enumerating potential self-mappings. During experiments with a benchmark set of meshes augmented with human-specified symmetric correspondences, we find that the algorithm is able to find intrinsic symmetries for a wide variety of object types with moderate deviations from perfect symmetry. [source]
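
    The first step (symmetry-invariant points from AGD critical points) can be approximated on a mesh treated as an edge-weighted graph: compute each vertex's average shortest-path distance to all others and keep the local maxima. The sketch below uses networkx Dijkstra distances as a stand-in for true surface geodesics, and the tiny "bar" mesh is illustrative.

    ```python
    import networkx as nx
    import numpy as np

    def agd_and_candidates(verts, edges):
        """Average Geodesic Distance per vertex (graph approximation) and the
        vertices that are local maxima of AGD over their 1-ring neighbours."""
        g = nx.Graph()
        for i, j in edges:
            g.add_edge(i, j, weight=float(np.linalg.norm(verts[i] - verts[j])))
        n = len(verts)
        agd = np.zeros(n)
        for i in range(n):                       # O(n^2); fine for a demo
            dist = nx.single_source_dijkstra_path_length(g, i)
            agd[i] = sum(dist.values()) / n
        candidates = [v for v in g
                      if all(agd[v] >= agd[u] for u in g.neighbors(v))]
        return agd, candidates

    # Demo on a tiny "bar" graph: the two endpoints maximize AGD.
    verts = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]], float)
    edges = [(0, 1), (1, 2), (2, 3)]
    agd, cands = agd_and_candidates(verts, edges)
    print(agd.round(2), cands)  # [1.5 1. 1. 1.5] [0, 3]
    ```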


    Interactive Cover Design Considering Physical Constraints

    COMPUTER GRAPHICS FORUM, Issue 7 2009
    Yuki Igarashi
    Abstract We developed an interactive system to design a customized cover for a given three-dimensional (3D) object such as a camera, teapot, or car. The system first computes the convex hull of the input geometry. The user segments it into several cloth patches by drawing on the 3D surface. This paper provides two technical contributions. First, it introduces a specialized flattening algorithm for cover patches. It makes each two-dimensional edge in the flattened pattern equal to or longer than the original 3D edge; a smaller patch would fail to cover the object, and a larger patch would result in extra wrinkles. Second, it introduces a mechanism to verify that the user-specified opening would be large enough for the object to be removed. Starting with the initial configuration, the system virtually "pulls" the object out of the cover while avoiding excessive stretching of cloth patches. We used the system to design real covers and confirmed that it functions as intended. [source]
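
    The flattening invariant stated above (every 2D pattern edge equal to or longer than its 3D counterpart, or the cover cannot stretch over the object) is easy to verify mechanically. A small checker, with invented vertex and edge data, might look like this:

    ```python
    import numpy as np

    def pattern_covers(verts3d, verts2d, edges, tol=1e-9):
        """Check the flattening invariant: each 2D pattern edge must be equal
        to or longer than the corresponding 3D edge. Returns the violating
        edges (an empty list means the pattern is valid)."""
        bad = []
        for i, j in edges:
            l3 = np.linalg.norm(verts3d[i] - verts3d[j])
            l2 = np.linalg.norm(verts2d[i] - verts2d[j])
            if l2 + tol < l3:
                bad.append((i, j, round(l2, 3), round(l3, 3)))
        return bad

    tri3d = np.array([[0, 0, 0], [1, 0, 0.3], [0, 1, 0.3]], float)
    tri2d = np.array([[0, 0], [1.05, 0], [0, 1.05]], float)  # slightly enlarged
    print(pattern_covers(tri3d, tri2d, [(0, 1), (1, 2), (0, 2)]))  # [] -> valid
    ```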


    Physically Guided Animation of Trees

    COMPUTER GRAPHICS FORUM, Issue 2 2009
    Ralf Habel
    Abstract This paper presents a new method to animate the interaction of a tree with wind both realistically and in real time. The main idea is to combine statistical observations with physical properties in two major parts of tree animation. First, the interaction of a single branch with the forces applied to it is approximated by a novel efficient two step nonlinear deformation method, allowing arbitrary continuous deformations and circumventing the need to segment a branch to model its deformation behavior. Second, the interaction of wind with the dynamic system representing a tree is statistically modeled. By precomputing the response function of branches to turbulent wind in frequency space, the motion of a branch can be synthesized efficiently by sampling a 2D motion texture. Using a hierarchical form of vertex displacement, both methods can be combined in a single vertex shader, fully leveraging the power of modern GPUs to realistically animate thousands of branches and tens of thousands of leaves at practically no cost. [source]
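
    Synthesizing branch motion from a precomputed frequency-space response boils down to shaping a noise spectrum and inverse-FFT-ing it into a time series that can be stored in a motion texture. The sketch below uses a generic turbulence-like power-law amplitude envelope with random phases; it does not reproduce the paper's measured response functions or 2D texture layout.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def branch_motion(n_frames=1024, fps=60.0, gain=1.0, slope=-5.0 / 6.0):
        """Synthesize a branch deflection time series by shaping white noise
        with a power-law amplitude spectrum (Kolmogorov-like turbulence has
        amplitude ~ f^(-5/6)) and inverse-FFT-ing it. Each call draws new
        random phases, giving a new but statistically similar motion."""
        freqs = np.fft.rfftfreq(n_frames, d=1.0 / fps)
        amp = np.zeros_like(freqs)
        amp[1:] = gain * freqs[1:] ** slope          # power-law envelope, no DC
        phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
        spectrum = amp * np.exp(1j * phases)
        return np.fft.irfft(spectrum, n=n_frames)

    motion = branch_motion()
    print(motion[:5].round(4), "std =", motion.std().round(4))
    ```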


    Applied Geometry: Discrete Differential Calculus for Graphics

    COMPUTER GRAPHICS FORUM, Issue 3 2004
    Mathieu Desbrun
    Geometry has been extensively studied for centuries, almost exclusively from a differential point of view. However, with the advent of the digital age, the interest directed to smooth surfaces has now partially shifted due to the growing importance of discrete geometry. From 3D surfaces in graphics to higher dimensional manifolds in mechanics, computational sciences must deal with sampled geometric data on a daily basis; hence our interest in Applied Geometry. In this talk we cover different aspects of Applied Geometry. First, we discuss the problem of Shape Approximation, where an initial surface is accurately discretized (i.e., remeshed) using anisotropic elements through error minimization. Second, once we have a discrete geometry to work with, we briefly show how to develop a full-blown discrete calculus on such discrete manifolds, allowing us to manipulate functions, vector fields, or even tensors while preserving the fundamental structures and invariants of the differential case. We will emphasize the applicability of our discrete variational approach to geometry by showing results on surface parameterization, smoothing, and remeshing, as well as virtual actors and thin-shell simulation. Joint work with: Pierre Alliez (INRIA), David Cohen-Steiner (Duke U.), Eitan Grinspun (NYU), Anil Hirani (Caltech), Jerrold E. Marsden (Caltech), Mark Meyer (Pixar), Fred Pighin (USC), Peter Schröder (Caltech), Yiying Tong (USC). [source]


    A Knowledge Formalization and Aggregation-Based Method for the Assessment of Dam Performance

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 3 2010
    Corinne Curt
    The model's inputs are the whole set of available information and data: visual observations, monitoring measurements, calculated data, and documents related to design and construction processes. First, a formal grid is proposed to structure the inputs. It is composed of six fields: name, definition, scale, references serving as anchorage points on the scale, and spatial and temporal characteristics. Structured inputs are called indicators. Second, an indicator aggregation method is proposed that yields not only the dam's performance but also an assessment of its design and construction practices. The methodology is illustrated mainly with the internal erosion mechanism through the embankment, but results concerning other failure modes are also provided. An application of the method for monitoring dams through time is given. [source]
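
    One way to picture the six-field indicator grid and the aggregation step is the sketch below; the field names follow the abstract, while the example indicators and the conservative weakest-link (minimum) aggregation are assumptions, since the paper's actual operators are not given here.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Indicator:
        """One structured input of the dam-assessment grid (the six fields)."""
        name: str
        definition: str
        scale: tuple        # e.g. (0, 10), with 10 = best condition
        references: dict    # anchorage points on the scale
        spatial: str        # where on the dam the indicator applies
        temporal: str       # when / how often it is evaluated
        value: float = 0.0  # current assessed score on `scale`

    def aggregate(indicators):
        """Weakest-link aggregation: overall performance is capped by the worst
        indicator. A deliberately conservative, illustrative choice."""
        return min(i.value for i in indicators)

    indicators = [
        Indicator("seepage", "visible seepage at the toe", (0, 10),
                  {0: "active erosion", 10: "dry"}, "downstream toe", "monthly", 6.0),
        Indicator("settlement", "crest settlement rate", (0, 10),
                  {0: ">50 mm/yr", 10: "none"}, "crest", "yearly", 8.0),
    ]
    print("dam performance score:", aggregate(indicators))  # 6.0
    ```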


    Reference-Free Damage Classification Based on Cluster Analysis

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2008
    Hoon Sohn
    The ultimate goal of this study was to develop an in-situ non-destructive testing (NDT) technique that can continuously and autonomously inspect the bonding condition between a carbon FRP (CFRP) layer and a host reinforced concrete (RC) structure, when the CFRP layer is used for strengthening the RC structure. The uniqueness of this reference-free NDT is two-fold: First, features, which are sensitive to CFRP debonding but insensitive to operational and environmental variations of the structure, have been extracted only from current data without direct comparison with previously obtained baseline data. Second, damage classification is performed instantaneously without relying on predetermined decision boundaries. The extraction of the reference-free features is accomplished based on the concept of time reversal acoustics, and the instantaneous decision-making is achieved using cluster analysis. Monotonic and fatigue load tests of large-scale CFRP-strengthened RC beams were conducted to demonstrate the potential of the proposed reference-free debonding monitoring technique. Based on the experimental studies, it has been shown that the proposed reference-free NDT technique may minimize false alarms of debonding and unnecessary data interpretation by end users. [source]
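
    The instantaneous, boundary-free decision step can be pictured as two-cluster analysis of the current damage-sensitive features: split them into two groups and declare debonding only when the groups are strongly separated. The sketch below uses k-means plus a silhouette criterion as a stand-in for the paper's cluster analysis; the feature values and the threshold are invented.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    def reference_free_classify(features, threshold=0.8):
        """Cluster current damage-sensitive features into two groups and flag
        damage only when the clusters are strongly separated (high silhouette),
        so no baseline data or preset decision boundary is needed."""
        x = np.asarray(features, float).reshape(-1, 1)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(x)
        return silhouette_score(x, labels) > threshold

    intact   = [0.09, 0.10, 0.10, 0.11, 0.11, 0.12]   # one homogeneous group
    debonded = [0.09, 0.10, 0.11, 0.48, 0.50, 0.52]   # two well-separated groups
    print(reference_free_classify(intact))    # False (silhouette ~0.6)
    print(reference_free_classify(debonded))  # True  (silhouette ~0.97)
    ```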


    A Polymorphic Dynamic Network Loading Model

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 2 2008
    Nie Yu (Marco)
    The polymorphism, realized through a general node-link interface and proper discretization, offers several prominent advantages. First of all, PDNL allows road facilities in the same network to be represented by different traffic flow models based on the tradeoff of efficiency and realism and/or the characteristics of the targeted problem. Second, new macroscopic link/node models can be easily plugged into the framework and compared against existing ones. Third, PDNL decouples links and nodes in network loading, and thus opens the door to parallel computing. Finally, PDNL keeps track of individual vehicular quanta of arbitrary size, which makes it possible to replicate analytical loading results as closely as desired. PDNL, thus, offers an ideal platform for studying both analytical dynamic traffic assignment problems of different kinds and macroscopic traffic simulation. [source]
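
    The "general node-link interface" that enables the polymorphism can be sketched as an abstract link class with interchangeable traffic-flow models behind one receive/step contract, driven by a single loading loop. The two toy models below are gross simplifications chosen for illustration, not PDNL's actual link models.

    ```python
    from abc import ABC, abstractmethod

    class Link(ABC):
        """Common interface a PDNL-style loader relies on: every link, whatever
        its traffic-flow model, can receive vehicles and advance one time step."""

        def __init__(self):
            self.queue = 0.0

        def receive(self, vehicles):
            self.queue += vehicles

        @abstractmethod
        def step(self, dt):
            """Advance the link's internal state; return outflow this step."""

    class PointQueueLink(Link):
        """Point-queue model: vehicles exit at fixed capacity, no spatial extent."""
        def __init__(self, capacity):
            super().__init__()
            self.capacity = capacity
        def step(self, dt):
            out = min(self.queue, self.capacity * dt)
            self.queue -= out
            return out

    class LinearDischargeLink(Link):
        """A different (toy) model: a fixed fraction of the queue exits per step."""
        def __init__(self, rate):
            super().__init__()
            self.rate = rate
        def step(self, dt):
            out = self.queue * min(1.0, self.rate * dt)
            self.queue -= out
            return out

    # One loading loop drives heterogeneous links through the same interface.
    network = [PointQueueLink(capacity=0.5), LinearDischargeLink(rate=0.3)]
    for link in network:
        link.receive(10.0)
    for t in range(3):
        print(f"t={t}: outflows", [round(link.step(dt=1.0), 2) for link in network])
    ```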


    Robust Transportation Network Design Under Demand Uncertainty

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 1 2007
    Satish V. Ukkusuri
    The origin–destination trip matrices are taken as random variables with known probability distributions. Instead of finding optimal network design solutions for a given future scenario, we are concerned with solutions that are in some sense "good" for a variety of demand realizations. We introduce a definition of robustness accounting for the planner's required degree of robustness. We propose a formulation of the robust network design problem (RNDP) and develop a methodology based on a genetic algorithm (GA) to solve the RNDP. The proposed model generates globally near-optimal network design solutions, f, based on the planner's input for robustness. The study makes two important contributions to the network design literature. First, robust network design solutions differ significantly from deterministic NDP solutions, and not accounting for robustness could underestimate the network-wide impacts. Second, systematic evaluation of the performance of the model and solution algorithm is conducted on different test networks and budget levels to explore the efficacy of this approach. The results highlight the importance of accounting for robustness in transportation planning, and the proposed approach is capable of producing high-quality solutions. [source]


    Life-Cycle Performance of RC Bridges: Probabilistic Approach

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 1 2000
    Dimitri V. Val
    This article addresses the problem of reliability assessment of reinforced concrete (RC) bridges during their service life. First, a probabilistic model for assessment of time-dependent reliability of RC bridges is presented, with particular emphasis placed on deterioration of bridges due to corrosion of reinforcing steel. The model takes into account uncertainties associated with materials properties, bridge dimensions, loads, and corrosion initiation and propagation. Time-dependent reliabilities are considered for ultimate and serviceability limit states. Examples illustrate the application of the model. Second, updating of predictive probabilistic models using site-specific data is considered. Bayesian statistical theory that provides a mathematical basis for such updating is outlined briefly, and its implementation for the updating of information about bridge properties using inspection data is described in more detail. An example illustrates the effect of this updating on bridge reliability. [source]
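
    A time-dependent reliability computation of the kind described can be sketched with a crude Monte Carlo: sample resistance, corrosion initiation time, and degradation rate per realization, then accumulate first-passage failures against annual extreme loads. All distributions and numbers below are invented for illustration and are not the paper's calibrated models.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N, years = 50_000, 60  # Monte Carlo samples, service life in years

    # Illustrative random variables (invented, not the paper's calibrated values).
    r0 = rng.normal(1500.0, 150.0, N)              # initial resistance (kN·m)
    t_init = rng.lognormal(np.log(15.0), 0.4, N)   # corrosion initiation time (yr)
    loss_rate = rng.uniform(0.002, 0.008, N)       # fraction of r0 lost per year

    failed = np.zeros(N, dtype=bool)
    pf = np.empty(years)
    for t in range(1, years + 1):
        corroding = np.clip(t - t_init, 0.0, None)        # years since initiation
        resistance = r0 * (1.0 - loss_rate * corroding)   # degraded capacity
        load = rng.gumbel(700.0, 80.0, N)                 # annual max load effect
        failed |= load > resistance                       # first-passage failure
        pf[t - 1] = failed.mean()                         # cumulative P_f(t)

    for t in (10, 30, 60):
        print(f"P_f({t} yr) ~= {pf[t - 1]:.2e}")          # grows with service life
    ```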


    Toolkits for automatic web service and GUI generation: KWATT

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 12 2010
    Yenan Qu
    In a previous paper, we explained how to translate an input script into a functional web service, independent of the script language. We extend this work by considering the automatic creation of graphical user interfaces to allow interaction between a user and the web service generated by KWATT. The key aspects of this work are three-fold. First, comment lines inserted into the script provide hints to the interface generator regarding the interface widgets. Second, the structure of the GUI is encoded into an XML file. Third, a plugin architecture permits the interface to be output in one of several languages. We present an example interface to illustrate the concepts. Copyright © 2009 John Wiley & Sons, Ltd. [source]
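
    The comment-hint idea can be pictured as a tiny scanner that turns annotated script lines into an XML description of the GUI. The `#@gui` syntax and the XML schema below are hypothetical, invented for this sketch; KWATT's real hint format is not specified in the abstract.

    ```python
    import re
    import xml.etree.ElementTree as ET

    # Hypothetical hint syntax (not KWATT's actual one):
    #   #@gui <widget> name=<id> [key=value ...]
    SCRIPT = """\
    #@gui slider name=threshold min=0 max=100
    #@gui textbox name=title
    print("run analysis")
    """

    HINT = re.compile(r"^#@gui\s+(\w+)\s+(.*)$")

    def script_to_gui_xml(script):
        """Scan comment hints and encode the GUI structure as an XML tree."""
        gui = ET.Element("gui")
        for line in script.splitlines():
            m = HINT.match(line.strip())
            if not m:
                continue
            widget = ET.SubElement(gui, "widget", type=m.group(1))
            for pair in m.group(2).split():
                key, _, value = pair.partition("=")
                widget.set(key, value)
        return ET.tostring(gui, encoding="unicode")

    print(script_to_gui_xml(SCRIPT))
    # <gui><widget type="slider" name="threshold" min="0" max="100" /> ... </gui>
    ```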


    The Neutralizer: a self-configurable failure detector for minimizing distributed storage maintenance cost

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2 2009
    Zhi Yang
    Abstract To achieve high data availability or reliability in an efficient manner, distributed storage systems must detect whether an observed node failure is permanent or transient, and if necessary, generate replicas to restore the desired level of replication. Given the unpredictability of network dynamics, however, distinguishing permanent and transient failures is extremely difficult. Though timeout-based detectors can be used to avoid mistaking transient failures as permanent failures, it is unknown how the timeout values should be selected to achieve a better tradeoff between detection latency and accuracy. In this paper, we address this fundamental tradeoff from several perspectives. First, we explore the impact of different timeout values on maintenance cost by examining the probability of their false positives and false negatives. Second, we propose a self-configurable failure detector called the Neutralizer based on the idea of counteracting false positives with false negatives. The Neutralizer could enable the system to maintain a desired replication level on average with the least amount of bandwidth. We conduct extensive simulations using real trace data from a widely deployed peer-to-peer system and synthetic traces based on PlanetLab and Microsoft PCs, showing a significant reduction in aggregate bandwidth usage after applying the Neutralizer (especially in an environment with a low average node availability). Overall, we demonstrate that the Neutralizer closely approximates the performance of a perfect 'oracle' detector in many cases. Copyright © 2008 John Wiley & Sons, Ltd. [source]
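
    The latency/accuracy tradeoff the Neutralizer navigates can be made concrete with a toy sweep: against a sample of transient downtimes, longer timeouts cut false positives (transient outages mistaken as permanent, triggering wasted replicas) but delay the repair of genuinely permanent failures. The downtime distribution and numbers below are invented; the Neutralizer itself tunes this balance adaptively rather than by a static sweep.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Assumed empirical sample of transient downtimes (hours); heavy-tailed,
    # loosely in the spirit of P2P availability traces.
    downtimes = rng.lognormal(mean=0.0, sigma=1.2, size=10_000)

    def tradeoff(timeout):
        """False-positive rate vs. detection delay for a timeout detector."""
        fp = (downtimes > timeout).mean()  # transient outages declared permanent
        delay = timeout                    # every true permanent failure waits this long
        return fp, delay

    for timeout in (0.5, 1.0, 2.0, 4.0, 8.0, 16.0):
        fp, delay = tradeoff(timeout)
        print(f"timeout={timeout:5.1f} h  false-positive rate={fp:5.1%}  "
              f"detection delay={delay:4.1f} h")
    # Longer timeouts waste fewer replicas (fewer FPs) but repair more slowly.
    ```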


    Toward replication in grids for digital libraries with freshness and correctness guarantees

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 17 2008
    Fuat Akal
    Abstract Building digital libraries (DLs) on top of data grids while facilitating data access and minimizing access overheads is challenging. To achieve this, replication in a Grid has to provide dedicated features that are only partly supported by existing Grid environments. First, it must provide transparent and consistent access to distributed data. Second, it must dynamically control the creation and maintenance of replicas. Third, it should allow higher replication granularities, i.e. beyond individual files. Fourth, users should be able to specify their freshness demands, i.e. whether they need most recent data or are satisfied with slightly outdated data. Finally, all these tasks must be performed efficiently. This paper presents an approach that will finally allow one to build a fully integrated and self-managing replication subsystem for data grids that will provide all the above features. Our approach is to start with an accepted replication protocol for database clusters, namely PDBREP, and to adapt it to the grid. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Experimental study on the extraction and distribution of textual domain keywords

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 16 2008
    Xiangfeng Luo
    Abstract Domain keywords of text play a primary role in text classifying, clustering, and personalized services. This paper proposes a term frequency inverse document frequency (TFIDF) based method called TDDF (TFIDF direct document frequency of domain) to extract domain keywords from multi-texts. First, we discuss the optimal parameters of TFIDF, which are used to extract textual keywords and domain keywords. Second, TDDF is proposed to extract domain keywords from multi-texts, taking the document frequency of the domain into account. Finally, the distribution of domain keywords in scientific texts is studied. Experiments and applications show that TDDF is more effective than the optimal TFIDF in the extraction of domain keywords. Domain keywords follow a normal distribution within a single text once the ubiquitous domain keywords are removed. Copyright © 2008 John Wiley & Sons, Ltd. [source]
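
    One plausible reading of TDDF ("TFIDF × direct document frequency of domain") is TFIDF scaled by how many of the domain's texts contain the term, so terms shared across the domain rank higher than terms peculiar to one document. The sketch below implements that reading with a smoothed idf; the paper's exact formula and parameters may differ.

    ```python
    import math

    def tfidf(term, doc, corpus):
        """Plain TFIDF with a smoothed idf (so df == N does not zero terms out)."""
        tf = doc.count(term) / len(doc)
        df = sum(1 for d in corpus if term in d)
        return tf * (math.log((1 + len(corpus)) / (1 + df)) + 1)

    def tddf(term, doc, corpus, domain_corpus):
        """TFIDF scaled by the term's document frequency within the domain
        corpus, boosting terms common across the domain's texts."""
        domain_df = sum(1 for d in domain_corpus if term in d) / len(domain_corpus)
        return tfidf(term, doc, corpus) * domain_df

    corpus = [
        "the gene encodes a protein kinase".split(),
        "protein folding requires chaperones".split(),
        "the market fell sharply today".split(),
    ]
    domain = corpus[:2]                      # the two biology texts form the domain
    doc = corpus[0]
    for term in ("protein", "kinase", "market"):
        print(term, round(tddf(term, doc, corpus, domain), 4))
    # "protein" outranks "kinase": same tf in `doc`, but it spans the domain.
    ```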


    Towards a framework and a benchmark for testing tools for multi-threaded programs

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 3 2007
    Yaniv Eytani
    Abstract Multi-threaded code is becoming very common, both on the server side, and very recently for personal computers as well. Consequently, looking for intermittent bugs is a problem that is receiving more and more attention. As there is no silver bullet, research focuses on a variety of partial solutions. We outline a road map for combining the research within the different disciplines of testing multi-threaded programs and for evaluating the quality of this research. We have three main goals. First, to create a benchmark that can be used to evaluate different solutions. Second, to create a framework with open application programming interfaces that enables the combination of techniques in the multi-threading domain. Third, to create a focus for the research in this area around which a community of people who try to solve similar problems with different techniques can congregate. We have started creating such a benchmark and describe the lessons learned in the process. The framework will enable technology developers, for example, developers of race detection algorithms, to concentrate on their components and use other ready made components (e.g. an instrumentor) to create a testing solution. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Reputation-based semantic service discovery

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 8 2006
    Ali Shaikh Ali
    Abstract An important component of Semantic Grid services is the support for dynamic service discovery. Dynamic service discovery requires the provision of rich and flexible metadata that is not supported by current registry services such as UDDI. We present a framework to facilitate reputation-based service selection in Semantic Grids. Our framework has two key features that distinguish it from other work in this area. First, we propose a dynamic, adaptive, and highly fault-tolerant reputation-aware service discovery algorithm. Second, we present a service-oriented distributed reputation assessment algorithm. In this paper, we describe the main components of our framework and report on our experience of developing the prototype. Copyright © 2005 John Wiley & Sons, Ltd. [source]