Standard Approach



Selected Abstracts


Sub-Optimality of Income Statement-Based Methods for Measuring Operational Risk under Basel II: Empirical Evidence from Spanish Banks

FINANCIAL MARKETS, INSTITUTIONS & INSTRUMENTS, Issue 4 2007
Enrique Bonsón
The New Basel Capital Accord (Basel II) was created with the intention of establishing a framework in which financial entities can manage their risks in a more detailed and efficient way. Within this general reform movement, Operational Risk (OR) emerges as a fundamental variable. OR can be managed by three alternative methods: the Basic Indicator Approach, the Standard Approach and the Advanced Measurement Approach. The choice of which method to adopt has become of supreme interest for senior banking managers. This study analyzes the validity of the implicit hypotheses that underlie each method, distinguishing between income statement-based methods and the management accounting-based method. The sub-optimality of the two income statement-based methods is then empirically confirmed in the light of the data provided by Spanish financial entities. [source]
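As background to the two income statement-based methods compared above, the following sketch computes the capital charges they imply. It is an illustration only, not the paper's analysis; the alpha and beta factors are the values published in the Basel II framework, and the gross-income figures are hypothetical.

```python
def basic_indicator_capital(gross_income_3y, alpha=0.15):
    """Basic Indicator Approach: alpha times the average of positive annual
    gross income over the last three years."""
    positive = [gi for gi in gross_income_3y if gi > 0]
    return alpha * sum(positive) / len(positive) if positive else 0.0

BETA = {  # Basel II business-line beta factors
    "corporate_finance": 0.18, "trading_sales": 0.18, "retail_banking": 0.12,
    "commercial_banking": 0.15, "payment_settlement": 0.18,
    "agency_services": 0.15, "asset_management": 0.12, "retail_brokerage": 0.12,
}

def standardised_capital(gi_by_line_3y):
    """Standardised Approach: gi_by_line_3y is a list (one entry per year) of
    {business_line: gross_income}; yearly charges are floored at zero."""
    yearly = [sum(BETA[line] * gi for line, gi in year.items())
              for year in gi_by_line_3y]
    return sum(max(y, 0.0) for y in yearly) / 3.0

print(basic_indicator_capital([120.0, 95.0, 110.0]))   # hypothetical figures -> 16.25
```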


A fractional adaptation law for sliding mode control

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 10 2008
Mehmet Önder Efe
Abstract This paper presents a novel parameter tuning law that forces the emergence of a sliding motion in the behavior of a multi-input multi-output nonlinear dynamic system. Adaptive linear elements are used as controllers. The standard approach to parameter adjustment employs integer-order derivative or integration operators. In this paper, the use of fractional differentiation or integration operators for the performance improvement of adaptive sliding mode control systems is presented. Hitting in finite time is proved, and the associated conditions are given with numerical justification. The proposed technique has been assessed through a set of simulations considering the dynamic model of a two-degrees-of-freedom direct-drive robot. It is seen that the control system with the proposed adaptation scheme provides (i) better tracking performance, (ii) suppression of undesired drifts in parameter evolution, (iii) a very high degree of robustness and improved insensitivity to disturbances and (iv) removal of the controller initialization problem. Copyright © 2008 John Wiley & Sons, Ltd. [source]
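To make the contrast with integer-order tuning concrete, here is a minimal Grünwald-Letnikov approximation of a fractional-order operator applied to a toy adaptation signal. It is a generic sketch, not the paper's adaptation law; the error signal, regressor, gain gamma and order mu are assumptions for illustration only.

```python
import numpy as np

def gl_fractional_deriv(f, alpha, h):
    """Grünwald-Letnikov approximation of the order-alpha derivative of the
    uniformly sampled signal f (a negative alpha gives a fractional integral)."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):                      # recursive binomial weights
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    d = np.array([np.dot(w[:k + 1], f[k::-1]) for k in range(n)])
    return d / h**alpha

# Toy update: instead of an integer-order integral of the error signal,
# accumulate it with a fractional integral of order 0 < mu < 1.
h, mu, gamma = 1e-3, 0.6, 5.0
t = np.arange(0.0, 2.0, h)
e = np.exp(-t) * np.sin(10 * t)                # hypothetical sliding error signal
x = np.ones_like(t)                            # hypothetical regressor
theta = -gamma * gl_fractional_deriv(e * x, -mu, h)   # fractional-order parameter history
```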


Diffraction imaging in depth

GEOPHYSICAL PROSPECTING, Issue 5 2008
T.J. Moser
ABSTRACT High-resolution imaging is of great value to an interpreter, for instance to enable identification of small-scale faults and to locate formation pinch-out positions. Standard approaches to obtain high-resolution information, such as coherency analysis and structure-oriented filters, derive attributes from stacked, migrated images. Since they are image-driven, these techniques are sensitive to artifacts due to an inadequate migration velocity; in fact, the attribute derivation is not based on the physics of wave propagation. Diffracted waves, on the other hand, have been recognized as physically reliable carriers of high- or even super-resolution structural information. However, high-resolution information encoded in diffractions is generally lost during the conventional processing sequence; indeed, the kernels in current migration algorithms are biased against diffractions. We propose here methods for a diffraction-based, data-oriented approach to image resolution. We also demonstrate the different behaviour of diffractions compared to specular reflections and how this can be leveraged to assess characteristics of subsurface features. In this way a rough surface such as a fault plane or unconformity may be distinguishable on a diffraction image but not on a traditional reflection image. We outline some characteristic properties of diffractions and diffraction imaging, and present two novel approaches to diffraction imaging in the depth domain. The first technique is based on reflection focusing in the depth domain and subsequent filtering of reflections from prestack data. The second technique modifies the migration kernel and consists of a reverse application of stationary-phase migration to suppress contributions from specular reflections to the diffraction image. Both techniques are proposed as a complement to conventional full-wave prestack depth migration, and both assume the existence of an accurate migration velocity. [source]


ProcDef: Local-to-global Deformation for Skeleton-free Character Animation

COMPUTER GRAPHICS FORUM, Issue 7 2009
Takashi Ijiri
Abstract Animations of characters with flexible bodies such as jellyfish, snails, and hearts are difficult to design using traditional skeleton-based approaches. A standard approach is keyframing, but adjusting the shape of the flexible body for each keyframe is tedious. In addition, the character cannot dynamically adjust its motion to respond to the environment or user input. This paper introduces a new procedural deformation framework (ProcDef) for designing and driving animations of such flexible objects. Our approach is to synthesize global motions procedurally by integrating local deformations. ProcDef provides an efficient design scheme for local deformation patterns; the user can control the orientation and magnitude of local deformations as well as the propagation of deformation signals by specifying line charts and volumetric fields. We also present a fast and robust deformation algorithm based on shape-matching dynamics and show some example animations to illustrate the feasibility of our framework. [source]
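The deformation algorithm above builds on shape-matching dynamics; the following is a minimal, generic shape-matching step (optimal rotation via SVD-based polar decomposition, then a pull toward the rigidly matched goal positions), not ProcDef itself. The array shapes, stiffness alpha and time step are assumptions.

```python
import numpy as np

def shape_matching_step(x, x0, v, alpha=0.5, dt=1.0 / 60.0):
    """One step of shape-matching dynamics: rigidly match the rest shape x0 to
    the current points x and pull the points toward the matched goal positions.
    x, x0, v are (n, 3) arrays of positions, rest positions and velocities."""
    cx, c0 = x.mean(axis=0), x0.mean(axis=0)
    p, q = x - cx, x0 - c0
    A = p.T @ q                                  # covariance of current vs rest shape
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt                                   # rotation from polar decomposition
    if np.linalg.det(R) < 0.0:                   # avoid reflections
        U[:, -1] *= -1.0
        R = U @ Vt
    goal = (R @ q.T).T + cx                      # rigidly transformed rest shape
    v = v + alpha * (goal - x) / dt              # pull velocities toward the goal
    x = x + v * dt
    return x, v
```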


Thalidomide for the treatment of multiple myeloma

CONGENITAL ANOMALIES, Issue 3 2004
Yutaka Hattori
ABSTRACT Although thalidomide was withdrawn in the 1960s after its teratogenic property was recognized, it was subsequently found that this drug possesses immunomodulatory and anti-inflammatory effects. Recent studies have also demonstrated that thalidomide has antineoplastic activity via an antiangiogenic mechanism. Observations in the late 1990s that the bone marrow microenvironment plays a role in tumor progression in multiple myeloma provided an impetus to use thalidomide for the treatment of this disease. It is known that thalidomide monotherapy is effective in one-third of refractory cases, and in combination with glucocorticoids and/or antineoplastic drugs, thalidomide provides a response rate of more than 50%. Thus, thalidomide therapy is considered a standard approach for the treatment of relapsed and refractory myeloma. The exact mechanism of the antimyeloma effect of thalidomide is not yet clearly understood. Anti-angiogenic effects, direct activity in tumor cells such as the induction of apoptosis or G1 arrest of the cell cycle, the inhibition of growth factor production, the regulation of interactions between tumor and stromal cells, and the modulation of tumor immunity have been considered as possible mechanisms. In addition to its teratogenicity, the adverse effects of thalidomide include general symptoms such as somnolence and headache, as well as peripheral neuropathy, constipation, and skin rash. Although these adverse effects are generally reversible and mild, grade 3 and 4 toxicities such as peripheral neuropathy, deep venous thrombosis, neutropenia, and toxic dermal necrosis have occasionally been reported. The application of thalidomide therapy in patients with multiple myeloma is being broadened to include not only refractory myeloma but also previously untreated cases, maintenance therapy after hematopoietic stem cell transplantation, and the treatment of other hematological diseases. The safe use of this drug will depend on the establishment of diagnostic and treatment guidelines. In addition, the establishment of a nation-wide regulation system is urgently needed in Japan. [source]


Correlation at First Sight

ECONOMIC NOTES, Issue 2 2005
Andrew Friend
The synthetic collateralized debt obligation (CDO) market has, over the last year, seen a significant increase in liquidity and transparency. The availability of published prices such as TracX and iBoxx tranches permits the calibration of model parameters, which was not achievable a year ago. This paper details what we believe has become the market-standard approach to CDO valuation. The valuation model is introduced and analysed in depth to develop a better practical understanding of its use and the implications of parameter selection and calibration. In particular, we examine the idea that correlation within a copula model can be seen as a measure equivalent to volatility in a standard Black–Scholes option framework and, correspondingly, we seek to calibrate smile and skew. [source]
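For readers unfamiliar with the model referred to above as the market standard, the sketch below estimates the expected loss of a synthetic CDO tranche under a one-factor Gaussian copula by Monte Carlo. All numbers (portfolio size, default probability, recovery, attachment points) are hypothetical; comparing a low and a high rho illustrates the volatility-like role of correlation discussed above.

```python
import numpy as np
from scipy.stats import norm

def expected_tranche_loss(n_names=125, p=0.02, rho=0.3, recovery=0.4,
                          attach=0.03, detach=0.07, n_paths=50_000, seed=0):
    """Expected tranche loss (as a fraction of tranche notional) under a
    one-factor Gaussian copula at a single horizon."""
    rng = np.random.default_rng(seed)
    c = norm.ppf(p)                                    # default threshold
    m = rng.standard_normal((n_paths, 1))              # systemic factor
    z = rng.standard_normal((n_paths, n_names))        # idiosyncratic factors
    x = np.sqrt(rho) * m + np.sqrt(1.0 - rho) * z
    portfolio_loss = (x < c).mean(axis=1) * (1.0 - recovery)   # fraction of notional
    tranche_loss = np.clip(portfolio_loss - attach, 0.0, detach - attach)
    return tranche_loss.mean() / (detach - attach)

print(expected_tranche_loss(rho=0.1), expected_tranche_loss(rho=0.5))
```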


Logistic Population Growth in the World's Largest Cities

GEOGRAPHICAL ANALYSIS, Issue 4 2006
Gordon F. Mulligan
This article demonstrates that recent population growth in the world's largest cities has conformed to the general parameters of the logistic process. Using data recently provided by the United Nations, logistic population growth for 485 million-person cities is analyzed at 5-year intervals during 1950–2010, with the UN projections for 2015 adopted as upper limits. A series of ordinary least-squares regression models of increasing complexity is estimated on the pooled data. In one class of models, the logarithms of population proportions are specified to be linear in time, which is the standard approach, but in a second class of models those proportions are specified as being quadratic. The most complex models control logistic growth estimates for (i) city-specific effects (e.g., initial population), (ii) nation-specific effects (e.g., economic development, age distribution of population), and (iii) global coordinates (for unobserved effects). Moreover, the results are segregated according to each city's membership in four different growth clubs, which was an important finding of previous research. [source]
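The standard approach described above (log population proportions linear in time) amounts to ordinary least squares on the log-odds of population relative to an assumed ceiling; the second class of models simply adds a squared time term. A minimal sketch with synthetic data and an assumed upper limit K follows.

```python
import numpy as np

def fit_logistic_growth(years, pop, K, quadratic=False):
    """Fit logistic growth by OLS on ln(P/(K-P)) = a + b*t (+ c*t^2), where K
    is an assumed upper population limit. Returns coefficients and fitted P."""
    t = np.asarray(years, float) - years[0]
    y = np.log(pop / (K - pop))                 # log-odds of P relative to K
    coef = np.polyfit(t, y, 2 if quadratic else 1)   # highest power first
    fitted = K / (1.0 + np.exp(-np.polyval(coef, t)))
    return coef, fitted

years = np.arange(1950, 2015, 5)                     # 5-year intervals
pop = 2.0 / (1.0 + np.exp(-0.08 * (years - 1985)))   # synthetic city, millions
coef_lin, _ = fit_logistic_growth(years, pop, K=2.1)
coef_quad, _ = fit_logistic_growth(years, pop, K=2.1, quadratic=True)
print(coef_lin, coef_quad)
```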


A correlation-based misfit criterion for wave-equation traveltime tomography

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2010
T. Van Leeuwen
SUMMARY Wave-equation traveltime tomography tries to obtain a subsurface velocity model from seismic data, either passive or active, that explains their traveltimes. A key step is the extraction of traveltime differences, or relative phase shifts, between observed and modelled finite-frequency waveforms. A standard approach involves a correlation of the observed and modelled waveforms. When the amplitude spectra of the waveforms are identical, the maximum of the correlation is indicative of the relative phase shift. When the amplitude spectra are not identical, however, this argument is no longer valid. We propose an alternative criterion to measure the relative phase shift. This misfit criterion is a weighted norm of the correlation and is less sensitive to differences in the amplitude spectra. For practical application it is important to use a sensitivity kernel that is consistent with the way the misfit is measured. We derive this sensitivity kernel and show how it differs from the standard banana–doughnut sensitivity kernel. We illustrate the approach on a cross-well data set. [source]
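A minimal sketch of the two ingredients discussed above: the standard pick of the correlation maximum as the traveltime shift, and a weighted norm of the correlation as an alternative misfit. The lag-squared weight used here is an assumption for illustration, not necessarily the weighting chosen in the paper.

```python
import numpy as np

def traveltime_shift_and_misfit(obs, syn, dt):
    """Correlate observed and modelled traces. The lag of the correlation
    maximum is the standard traveltime-shift estimate; the weighted norm of
    the correlation is a sketch of an alternative misfit (assumed weight)."""
    c = np.correlate(obs, syn, mode="full")
    lags = (np.arange(c.size) - (len(syn) - 1)) * dt
    shift = lags[np.argmax(c)]                       # standard correlation pick
    w = lags**2                                      # assumed lag-squared weight
    misfit = np.sum(w * c**2) / np.sum(c**2)         # weighted correlation norm
    return shift, misfit

dt = 0.004
t = np.arange(0.0, 2.0, dt)
wavelet = lambda t0: np.exp(-((t - t0) / 0.05) ** 2)
print(traveltime_shift_and_misfit(wavelet(1.02), wavelet(1.00), dt))  # shift ~ +0.02 s
```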


Joint inversion of multiple data types with the use of multiobjective optimization: problem formulation and application to the seismic anisotropy investigations

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2007
E. Kozlovskaya
SUMMARY In geophysical studies the problem of joint inversion of multiple experimental data sets obtained by different methods is conventionally considered as a scalar one. Namely, a solution is found by minimization of a linear combination of functions describing the fit of the values predicted from the model to each set of data. In the present paper we demonstrate that this standard approach is not always justified and propose to consider a joint inversion problem as a multiobjective optimization problem (MOP), for which the misfit function is a vector. The method is based on analysis of two types of solutions to MOP considered in the space of misfit functions (objective space). The first one is a set of complete optimal solutions that minimize all the components of a vector misfit function simultaneously. The second one is a set of Pareto optimal solutions, or trade-off solutions, for which it is not possible to decrease any component of the vector misfit function without increasing at least one other. We investigate the connection between the standard formulation of a joint inversion problem and the multiobjective formulation and demonstrate that the standard formulation is a particular case of scalarization of a multiobjective problem using a weighted sum of component misfit functions (objectives). We illustrate the multiobjective approach with a non-linear problem of the joint inversion of shear wave splitting parameters and longitudinal wave residuals. Using synthetic data and real data from three passive seismic experiments, we demonstrate that random noise in the data and inexact model parametrization destroy the complete optimal solution, which degenerates into a fairly large Pareto set. As a result, non-uniqueness of the problem of joint inversion increases. If the random noise in the data is the only source of uncertainty, the Pareto set expands around the true solution in the objective space. In this case the 'ideal point' method of scalarization of multiobjective problems can be used. If the uncertainty is due to inexact model parametrization, the Pareto set in the objective space deviates strongly from the true solution. In this case all scalarization methods fail to find a solution close to the true one and a change of model parametrization is necessary. [source]
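The relation between weighted-sum scalarization and the Pareto (trade-off) set can be seen on a toy bi-objective problem. The sketch below is illustrative only and is unrelated to the seismic data themselves; the two quadratic misfits and the weight w are assumptions.

```python
import numpy as np

# Toy bi-objective problem: two misfit functions of a single model parameter m.
m = np.linspace(-2.0, 2.0, 401)
f1 = (m - 1.0) ** 2            # first data-set misfit (illustrative)
f2 = (m + 1.0) ** 2            # second data-set misfit (illustrative)

# Weighted-sum scalarization: the conventional joint-inversion objective.
w = 0.5
m_scalar = m[np.argmin(w * f1 + (1.0 - w) * f2)]

# Pareto (trade-off) set: models for which no other model improves one
# objective without worsening the other.
F = np.column_stack([f1, f2])
pareto = [i for i in range(len(m))
          if not np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))]
print(m_scalar, m[pareto].min(), m[pareto].max())   # scalar solution lies inside [-1, 1]
```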


Velocity/interface model building in a thrust belt by tomographic inversion of global offset seismic data

GEOPHYSICAL PROSPECTING, Issue 1 2003
P. Dell'Aversana
Between September and November 1999, two test seismic lines were recorded in the southern Apennine region of southern Italy using the global offset technique, which involves the acquisition of a wide offset range using two simultaneously active seismic spreads. One consisted of a symmetrical spread moving along the line, with geophone arrays every 30 m and a maximum offset of 3.6 km. The other consisted of fixed geophone arrays every 90 m with a maximum offset of 18 km. This experimental acquisition project was carried out as part of the enhanced seismic in thrust belt (ESIT) research project, funded by the European Union, Enterprise Oil and Eni-Agip. An iterative and interactive tomographic inversion of refraction/reflection arrivals, using all the available offsets, was carried out on the data from line ESIT700 to produce a velocity/interface model in depth. The tomographic models allowed the reconstruction of layer interface geometries and interval velocities for the target carbonate platform (Apula) and the overburden sequence. The value of this technique is highlighted by the fact that the standard approach, based on near-vertical reflection seismic and a conventional processing flow, produced poor seismic images in both stack and migrated sections. [source]


Proposal of a standard approach to dental extraction in haemophilia patients: a case-control study with good results

HAEMOPHILIA, Issue 5 2000
We found no case–control studies on dental extraction in haemophilia patients in the literature, even though the use of antifibrinolytic agents following a single infusion of factor VIII or IX has been accompanied by a lower number of bleeding complications in dental extractions. In this study we verified the incidence of bleeding complications after dental extraction in a group of 77 haemophilia patients. One hundred and eighty-four male patients requiring dental extraction represented the control group. All haemophilia patients received 20 mg kg−1 of tranexamic acid and a single infusion of factor VIII or IX to achieve a peak level of about 30% of factor VIII or IX in vivo prior to dental extraction. Forty-five of 98 (45.9%) dental extractions in haemophilia patients and 110 of 239 (46%) dental extractions in the control group were surgical ones. We registered two bleeding complications in the group of haemophilia patients (one late bleeding and one haematoma at the site of the anaesthetic injection) and one (a late bleeding) in the control group. The difference in bleeding complications between the two groups of patients was not statistically significant (P=0.2; OR 0.2; CI 0.01–2.22). The protocol proposed in this study, given its feasibility and a number of haemorrhagic complications no different from that of the normal population, makes dental extraction in haemophilia patients possible on an out-patient basis, with a cost reduction for the community and minor discomfort for the patients. [source]


SCREENING ETHICS WHEN HONEST AGENTS CARE ABOUT FAIRNESS*

INTERNATIONAL ECONOMIC REVIEW, Issue 1 2006
Ingela Alger
A principal faces an agent with private information who is either honest or dishonest. Honesty involves revealing private information truthfully if the probability that the equilibrium allocation chosen by an agent who lies is small enough. Even the slightest intolerance for lying prevents full ethics screening whereby the agent is given proper incentives if dishonest and zero rent if honest. Still, some partial ethics screening may allow for taking advantage of the potential honesty of the agent, even if honesty is unlikely. If intolerance for lying is strong, the standard approach that assumes a fully opportunistic agent is robust. [source]


Improved inter-modality image registration using normalized mutual information with coarse-binned histograms

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 6 2009
Haewon Nam
Abstract In this paper we extend the method of inter-modality image registration using the maximization of normalized mutual information (NMI) for the registration of [18F]-2-fluoro-deoxy-D-glucose (FDG)-positron emission tomography (PET) with T1-weighted magnetic resonance (MR) volumes. We investigate the impact on the NMI maximization with respect to using coarse-to-fine grained B-spline bases and to the number of bins required for the voxel intensity histograms of each volume. Our results demonstrate that the efficiency and accuracy of elastic, as well as rigid body, registration is improved both through the use of a reduced number of bins in the PET and MR histograms, and of a limited coarse-to-fine grain interpolation of the volume data. To determine the appropriate number of bins prior to registration, we consider the NMI between the two volumes, the mutual information content of the two volumes, as a function of the binning of each volume. Simulated data sets are used for validation and the registration improves that obtained with a standard approach based on the Statistical Parametric Mapping software. Copyright © 2008 John Wiley & Sons, Ltd. [source]
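For reference, normalized mutual information can be computed from a joint intensity histogram as sketched below; reducing the number of bins for either volume gives the coarse-binned variant investigated above. The synthetic volumes, the bin counts and the NMI definition (H(A)+H(B))/H(A,B) are stated assumptions, not the authors' exact implementation.

```python
import numpy as np

def normalized_mutual_information(a, b, bins_a=32, bins_b=32):
    """NMI = (H(A) + H(B)) / H(A, B) from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=(bins_a, bins_b))
    p_ab = joint / joint.sum()
    p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))   # Shannon entropy
    return (h(p_a) + h(p_b)) / h(p_ab)

rng = np.random.default_rng(0)
mr = rng.normal(size=(64, 64, 64))                  # stand-in "MR" volume
pet = 0.7 * mr + 0.3 * rng.normal(size=mr.shape)    # crudely related "PET" volume
print(normalized_mutual_information(mr, pet, bins_a=16, bins_b=64))
```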


Eradication of established renal cell carcinoma by a combination of 5-fluorouracil and anti-4-1BB monoclonal antibody in mice

INTERNATIONAL JOURNAL OF CANCER, Issue 12 2008
Seong-A Ju
Abstract Renal cell carcinoma (RCC), one of the most incurable malignancies, is highly resistant to chemotherapy and radiotherapy. Cytokine immunotherapy has been the standard approach, but the overall response rate is still very low. Administration of agonistic anti-4-1BB monoclonal antibody (mAb) has been shown to induce regression of several animal tumors but its effect on RCC is unknown. We show here that monotherapy with either anti-4-1BB mAb or the cytotoxic drug, 5-fluorouracil (5-FU), has little effect on established RCC, Renca tumors, but combination therapy with anti-4-1BB mAb and 5-FU eradicates the tumors in more than 70% of mice. The regressing tumor tissues from mice receiving the combination therapy contained more apoptotic tumor cells and tumor-infiltrating lymphocytes than tumor tissues from mice receiving 5-FU or anti-4-1BB mAb monotherapy. The number of lymphocytes in the spleens and tumor-draining lymph nodes (TDLNs) of the combination therapy mice was greatly increased compared to that of control or 5-FU monotherapy mice. Mice that had recovered due to the combination therapy rapidly rejected rechallenge with the tumor, pointing to the establishment of long-lasting tumor-specific memory. Our results indicate that targeting tumors with 5-FU, and immune cells with 4-1BB stimulation, could be a useful strategy for treating incurable RCC. © 2008 Wiley-Liss, Inc. [source]


Traffic locality characteristics in a parallel forwarding system

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 9 2003
W. Shi
Abstract Due to the widening gap between the performance of microprocessors and that of memory, using caches in a system to take advantage of locality in its workload has become a standard approach to improve overall system performance. At the same time, many performance problems finally reduce to cache performance issues. Locality in system workload is the fact that makes caching possible. In this paper, we first use the reuse distance model to characterize temporal locality in Internet traffic. We develop a model that closely matches the empirical data. We then extend the work to investigate temporal locality in the workload of multi-processor forwarding systems by comparing locality under different packet scheduling schemes. Our simulations show that for systems with hash-based schedulers, caching can be an effective way to improve forwarding performance. Based on flow-level traffic characteristics, we further discuss the relationship between load-balancing and hash-scheduling, which yields insights into system design. Copyright © 2003 John Wiley & Sons, Ltd. [source]
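The reuse-distance model mentioned above measures, for each reference, how many distinct addresses were touched since the previous reference to the same address; small distances indicate strong temporal locality and hence good cache behaviour. A minimal (unoptimized) sketch, with a hypothetical address trace:

```python
def reuse_distances(refs):
    """LRU stack (reuse) distance of each reference: the number of distinct
    addresses accessed since the previous reference to the same address
    (inf on first access)."""
    stack = []                       # most-recently-used address first
    out = []
    for r in refs:
        if r in stack:
            d = stack.index(r)       # distinct addresses above r in the stack
            stack.pop(d)
            out.append(d)
        else:
            out.append(float("inf"))
        stack.insert(0, r)
    return out

# e.g. destination addresses of a packet trace (hypothetical values)
print(reuse_distances(["A", "B", "A", "C", "B", "A"]))   # [inf, inf, 1, inf, 2, 2]
```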


Model density approach to the Kohn–Sham problem: Efficient extension of the density fitting technique

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 5 2005
Uwe Birkenheuer
Abstract We present a novel procedure for treating the exchange-correlation contributions in the Kohn–Sham procedure. The approach proposed is fully variational and closely related to the so-called "fitting functions" method for the Coulomb Hartree problem; in fact, the method consistently uses this auxiliary representation of the electron density to determine the exchange-correlation contributions. The exchange-correlation potential and its matrix elements in a basis set of localized (atomic) orbitals can be evaluated by reusing the three-center Coulomb integrals involving fitting functions, while the computational cost of the remaining numerical integration is significantly reduced and scales only linearly with the size of the auxiliary basis. We tested the approach extensively for a large set of atoms and small molecules as well as for transition-metal carbonyls and clusters, by comparing total energies, atomization energies, structure parameters, and vibrational frequencies at the local density approximation and generalized gradient approximation levels of theory. The method requires a sufficiently flexible auxiliary basis set. We propose a minimal extension of the conventional auxiliary basis set, which yields essentially the same accuracy for the quantities just mentioned as the standard approach. The new method allows one to achieve substantial savings compared with a fully numerical integration of the exchange-correlation contributions. © 2005 Wiley Periodicals, Inc. Int J Quantum Chem, 2005 [source]
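For context, the standard Coulomb ("fitting functions") density-fitting step that the proposed method reuses can be summarized as follows; this is the textbook scheme, not the paper's full exchange-correlation treatment.

```latex
% Expand the density in an auxiliary basis {f_k} and fix the coefficients by
% minimizing the Coulomb norm of the fitting error.
\[
  \rho(\mathbf r) \approx \tilde\rho(\mathbf r) = \sum_k c_k f_k(\mathbf r),
  \qquad
  \min_{\{c_k\}} \, (\rho-\tilde\rho \,|\, \rho-\tilde\rho)
  \;\Longrightarrow\;
  \sum_l (f_k|f_l)\, c_l = (f_k|\rho) \quad \forall k,
\]
% where (g|h) = \iint g(\mathbf r)\, h(\mathbf r') / |\mathbf r - \mathbf r'|
% \, d\mathbf r \, d\mathbf r' is the Coulomb inner product, so the c_k follow
% from one linear system in the auxiliary (three-center) integrals.
```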


Procedural pain of an ultrasound-guided brachial plexus block: a comparison of axillary and infraclavicular approaches

ACTA ANAESTHESIOLOGICA SCANDINAVICA, Issue 4 2010
B. S. FREDERIKSEN
Background: Ultrasound (US)-guided infraclavicular (IC) and axillary (AX) blocks have similar effectiveness. Therefore, limiting procedural pain may help to choose a standard approach. The primary aims of this randomized study were to assess the patient's pain during the block and to recognize its cause. Methods: Eighty patients were randomly allocated to the IC or the AX group. A blinded investigator asked the patients to quantify block pain on a Visual Analogue Scale (VAS 0–100) and to indicate the most unpleasant component (needle passes, paraesthesia or local anaesthetic injection). Sensory block was assessed every 10 min. After 30 min, the unblocked nerves were supplemented. Patients were ready for surgery when they had analgesia or anaesthesia of the five nerves distal to the elbow. Preliminary scan time, block performance and latency times, readiness for surgery, adverse events and patients' acceptance were recorded. Results: The axillary approach resulted in lower maximum VAS scores (median 12) than the infraclavicular approach (median 21). This difference was not statistically significant (P=0.07). Numbers of patients indicating the most painful component were similar in both groups. Patients in either group were ready for surgery after 25 min. Two patients in the IC group and seven in the AX group needed block supplementation (n.s.). Block performance times and number of needle passes were significantly lower in the IC group. Patients' acceptance was 98% in both groups. Conclusions: We did not find significant differences between the two approaches in procedural pain and patients' acceptance. The choice of approach may depend on the anaesthesiologist's experience and the patient's preferences. [source]


Use of a Long Preshaped Sheath to Facilitate Cannulation of the Coronary Sinus at Electrophysiologic Study

JOURNAL OF CARDIOVASCULAR ELECTROPHYSIOLOGY, Issue 12 2001
CHRIS B. PEPPER B.Sc.
CS Cannulation Using a Long Sheath. Introduction: Catheterization of the coronary sinus (CS) from the femoral vein can be challenging. We tested whether use of a long preshaped sheath facilitates CS cannulation. Methods and Results: One hundred four patients were randomized into two phases. In phase 1, consecutive patients were allocated to CS catheterization using the long sheath (n = 26) or standard 7-French 15-cm sheath (n = 25). If unsuccessful within 10 minutes, the alternative technique was used. Phase 2 assessed the utility of the long sheath in difficult cases. All patients initially were approached using the standard sheath. If cannulation failed after 10 minutes, patients were randomly allocated to the standard or long sheath approach. In phase 1, the standard approach failed in 4 (16%) of 25 cases. In each case, a long sheath proved successful (mean 3.2 min). The long sheath approach was successful within 10 minutes in all 26 cases. Catheter deployment was significantly quicker with the long sheath, but this was offset by the time required for sheath insertion. In phase 2, the standard approach was successful in 46 (87%) of 53 cases. Of 7 "failures," 3 were randomized to continue the standard approach, which was successful in 1; 4 were randomized to the long sheath approach, and success was achieved in all (mean 4.4 ± 1.5 min). Overall, the CS could not be promptly catheterized in 15% of cases within 10 minutes using the standard sheath, and no failures were seen using the long sheath. No complications arose from the use of either technique. Conclusion: The long sheath was uniformly successful in permitting catheterization of the CS from the femoral approach in both unselected and difficult cases. [source]


TDDFT investigation on nucleic acid bases: Comparison with experiments and standard approach

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 5 2004
M.K. Shukla
Abstract A comprehensive theoretical study of electronic transitions of canonical nucleic acid bases, namely guanine, adenine, cytosine, uracil, and thymine, was performed. Ground state geometries were optimized at the MP2/6-311G(d,p) level. The nature of the respective potential energy surfaces was determined using harmonic vibrational frequency analysis. The MP2 optimized geometries were used to compute vertical singlet electronic transition energies at the time-dependent density functional theory (TDDFT) level using the B3LYP functional. The 6-311++G(d,p), 6-311(2+,2+)G(d,p), 6-311(3+,3+)G(df,pd), and 6-311(5+,5+)G(df,pd) basis sets were used for the transition energy calculations. Computed transition energies were found to be in good agreement with the corresponding experimental data, although the higher transitions showed some Rydberg contamination. A πσ*-type Rydberg transition was found near the lowest singlet ππ* state of all bases, which may be responsible for the ultrafast deactivation process in nucleic acid bases. © 2004 Wiley Periodicals, Inc. J Comput Chem 25: 768–778, 2004 [source]


Daily volatility forecasts: reassessing the performance of GARCH models

JOURNAL OF FORECASTING, Issue 6 2004
David G. McMillan
Abstract Volatility plays a key role in asset and portfolio management and derivatives pricing. As such, accurate measures and good forecasts of volatility are crucial for the implementation and evaluation of asset and derivative pricing models in addition to trading and hedging strategies. However, whilst GARCH models are able to capture the observed clustering effect in asset price volatility in-sample, they appear to provide relatively poor out-of-sample forecasts. Recent research has suggested that this relative failure of GARCH models arises not from a failure of the model but a failure to specify correctly the 'true volatility' measure against which forecasting performance is measured. It is argued that the standard approach of using ex post daily squared returns as the measure of 'true volatility' includes a large noisy component. An alternative measure for 'true volatility' has therefore been suggested, based upon the cumulative squared returns from intra-day data. This paper implements that technique and reports that, in a dataset of 17 daily exchange rate series, the GARCH model outperforms smoothing and moving average techniques which have been previously identified as providing superior volatility forecasts. Copyright © 2004 John Wiley & Sons, Ltd. [source]
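A minimal sketch of the quantities discussed above: the noisy daily squared return versus the realized variance built from intra-day returns, and a one-step GARCH(1,1) variance forecast. The simulated returns and the GARCH parameters are assumptions for illustration.

```python
import numpy as np

def garch11_forecast(returns, omega, alpha, beta):
    """One-step-ahead GARCH(1,1) variance via the recursion
    sigma2[t+1] = omega + alpha*r[t]^2 + beta*sigma2[t] (parameters assumed known)."""
    sigma2 = np.var(returns)                       # initialize at the sample variance
    for r in returns:
        sigma2 = omega + alpha * r**2 + beta * sigma2
    return sigma2

def volatility_proxies(intraday_returns):
    """Two 'true volatility' measures for one day: the squared daily return
    and the realized variance (sum of squared intra-day returns)."""
    daily_return = np.sum(intraday_returns)
    return daily_return**2, np.sum(intraday_returns**2)

rng = np.random.default_rng(1)
intraday = 0.001 * rng.standard_normal(288)        # e.g. 5-minute FX returns (simulated)
print(volatility_proxies(intraday))
print(garch11_forecast(0.01 * rng.standard_normal(500), 1e-6, 0.05, 0.90))
```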


Review of Interventional Repair for Abdominal Aortic Aneurysm

JOURNAL OF INTERVENTIONAL CARDIOLOGY, Issue 6 2006
MAJED CHANE M.D., F.A.C.C., F.A.C.P., F.A.S.A.
Abdominal aortic aneurysm is associated with a high mortality rate. For over 50 years, open surgical repair was the standard approach for large aneurysms. However, over the past decade, endovascular aneurysm repair (EVAR) has emerged as a viable alternative. EVAR is associated with lower operative and short-term morbidity and mortality and similar long-term survival (up to 4 years) compared with surgical repair. Endoleak remains a significant limitation, associated with aneurysm expansion and reintervention. With newer, more versatile endograft designs, improvements in durability, and better surveillance techniques, the utilization of EVAR is likely to continue to expand. [source]


Basic Rules of Dosimetry in Endovascular Brachytherapy

JOURNAL OF INTERVENTIONAL CARDIOLOGY, Issue 6 2000
PHILIPPE A. COUCKE M.D.
Endovascular brachytherapy after percutaneous coronary intervention (PCI) is becoming a standard approach for the treatment and prevention of restenosis. A variety of technical approaches are currently available to deliver ionizing irradiation to the vascular target. Basically, two kinds of radioactive isotopes are available: those that emit gamma radiation (photons) and those that emit beta radiation (electrons). The pitfalls and solutions for the optimization of dosimetry are discussed. As might be expected, an inhomogeneous dose distribution across the target volume results in recurrence through underdosage or in complications because of overdosage. Moreover, uniformization of the target definition and reporting of the dose distribution in endovascular brachytherapy is a prerequisite for comparison between the results of the various clinical trials and is absolutely necessary to improve the therapeutic efficacy of this new approach in the prevention of restenosis after coronary angioplasty with or without stenting. [source]


Multiplexed concentration quantification using isotopic surface-enhanced resonance Raman scattering

JOURNAL OF RAMAN SPECTROSCOPY, Issue 7 2010
Pradeep N. Perera
Abstract The recently developed isotopically edited internal standard approach for surface-enhanced resonance Raman scattering (SERRS)-based chemical quantification is extended to demonstrate multiplexed detection of four different isotopic variants of a single chromophore. More specifically, it is shown that rhodamine-6G (R6G) with 0, 2, 4, or 6 deuterium substitutions may be reliably quantified in either two- or three-component mixtures. Thus, one isotopic species of known concentration may be used as an internal standard to determine the concentrations of two other isotopic components in a mixture. The concentrations of isotopic R6G SERRS chromophores are determined using partial least squares calibration and shown to yield a predictive accuracy of about ±10% of the total R6G concentration (over a 1–50 nM concentration range). These results set the stage for the use of such isotopic variants as tags for the SERRS/SERS quantitation of mixtures containing proteins, peptides, and other compounds. Copyright © 2009 John Wiley & Sons, Ltd. [source]
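A sketch of the partial least squares calibration step with synthetic spectra; the "pure" component spectra, noise level and concentration ranges are hypothetical stand-ins for measured SERRS reference data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_channels, n_train = 300, 40

# Hypothetical spectra of three isotopic R6G variants (e.g. d0, d2, d4);
# real reference spectra would replace these random profiles.
pure = np.abs(rng.normal(size=(3, n_channels)))

C_train = rng.uniform(1.0, 50.0, size=(n_train, 3))               # nM concentrations
X_train = C_train @ pure + 0.5 * rng.normal(size=(n_train, n_channels))

pls = PLSRegression(n_components=3)
pls.fit(X_train, C_train)

C_test = np.array([[10.0, 25.0, 5.0]])
x_test = C_test @ pure + 0.5 * rng.normal(size=(1, n_channels))
print(pls.predict(x_test))            # should recover roughly [10, 25, 5] nM
```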


Conditional Gaussian mixture modelling for dietary pattern analysis

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2007
Michael T. Fahey
Summary. Free-living individuals have multifaceted diets and consume foods in numerous combinations. In epidemiological studies it is desirable to characterize individual diets not only in terms of the quantity of individual dietary components but also in terms of dietary patterns. We describe the conditional Gaussian mixture model for dietary pattern analysis and show how it can be adapted to take account of important characteristics of self-reported dietary data. We illustrate this approach with an analysis of the 2000–2001 National Diet and Nutrition Survey of adults. The results strongly favoured a mixture model solution allowing clusters to vary in shape and size, over the standard approach that has been used previously to find dietary patterns. [source]
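The shape/size comparison reported above can be illustrated with an ordinary (unconditional) Gaussian mixture: a spherical fit versus a fully free covariance fit, compared by BIC. The synthetic intake data are hypothetical, and the conditional extension described in the paper is not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical food-group intakes (g/day) for 600 respondents, drawn from three
# patterns of different shape and size; a real analysis would use survey data.
X = np.vstack([
    rng.multivariate_normal([200, 50, 30], [[900, 200, 0], [200, 400, 0], [0, 0, 100]], 300),
    rng.multivariate_normal([80, 150, 60], [[400, 0, 50], [0, 900, 0], [50, 0, 400]], 200),
    rng.multivariate_normal([120, 90, 150], np.diag([100, 100, 2500]), 100),
])

fits = {cov: GaussianMixture(n_components=3, covariance_type=cov, n_init=5,
                             random_state=0).fit(X)
        for cov in ("spherical", "full")}      # spherical vs fully free covariances
for cov, gm in fits.items():
    print(cov, "BIC:", round(gm.bic(X)))       # lower BIC favours that model
labels = fits["full"].predict(X)               # dietary-pattern assignment
```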


Constructive and Classical Models for Results in Economics and Game Theory

METROECONOMICA, Issue 2-3 2004
Kislaya Prasad
ABSTRACT A standard approach in economic theory is to use a formal language to prove results about an economy or a game. In this paper, model theory is used to examine interpretations of such results. The particular focus is on constructive theorems, since results established by constructive methods are valid for many different interpretations, whereas classical theorems are valid more narrowly. I discuss why non-classical models may be of interest and also describe applications of model theory to economics in classical contexts, e.g. non-standard analysis. The paper advocates a viewpoint suggesting that constructive models are tools for studying worlds in which agents' knowledge of the world is incomplete. [source]


Options-Based Multi-Objective Evaluation of Product Platforms

NAVAL ENGINEERS JOURNAL, Issue 3 2007
JAVIER P. GONZALEZ-ZUGASTI
A platform is the set of elements and interfaces that are common to a family of products. Design teams must choose among feasible platform concepts upon which a product family could be based, often involving new technologies. Multiple performance objectives need to be considered. A standard approach is to convert the performance outcomes into financial figures, which can then be weighed against the required investments. However, it is not always possible to transform performance outcomes (benefits) into monetary terms, such as in defense or highly technical projects. A multi-objective form of real-options-based platform selection is developed. Systems are compared based on multiple technical and economic goals, incorporating uncertainty by representing the unknown factors during the subsequent development process with probability distributions. The range of uncertain outcomes is integrated into single expected measures of effectiveness, which can then be applied to select the most appropriate platform and set of support product variants. An application to the design of platform-based families of naval high-speed ships is shown. [source]
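A minimal sketch of an expected multi-objective measure of effectiveness under uncertainty: each platform's uncertain performance and cost outcomes are sampled, normalized against aspiration levels and combined with stakeholder weights. All platform names, distributions, weights and aspiration levels below are assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20_000    # Monte Carlo samples of the uncertain development outcomes

# Hypothetical platform concepts: (mean, sd) of a speed-gain factor and of cost.
platforms = {
    "platform_A": {"perf": (1.00, 0.05), "cost": (100.0, 10.0)},
    "platform_B": {"perf": (1.15, 0.15), "cost": (120.0, 25.0)},
}
w_perf, w_cost = 0.6, 0.4     # stakeholder weights on the normalised objectives

for name, p in platforms.items():
    perf = rng.normal(*p["perf"], N)
    cost = rng.normal(*p["cost"], N)
    # normalise each objective to [0, 1] against fixed aspiration levels (assumed)
    u = w_perf * np.clip((perf - 0.9) / 0.4, 0, 1) + \
        w_cost * np.clip((150.0 - cost) / 80.0, 0, 1)
    print(name, "expected effectiveness:", round(u.mean(), 3))
```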


Hepatitis B vaccination in haemodialysis patients: A randomized clinical trial

NEPHROLOGY, Issue 3 2009
MARILENE BOCK
SUMMARY Aim: A short vaccination protocol against hepatitis B was compared to the standard approach in patients under haemodialysis who were primarily non-responsive to the vaccine. Methods: This randomized, controlled open trial included 51 chronic haemodialysis subjects previously vaccinated against hepatitis B and with anti-HBs levels of less than 10 IU/L. Twenty-six patients received 20 µg i.m. once a week for 8 weeks (short protocol) and 25 subjects received three doses of 40 µg i.m. at months 0, 1 and 6 (standard protocol). Clinical and laboratory data were compared between responders and non-responders. A logistic regression model included selected parameters to assess risk factors for non-seroconversion. Results: Seroconversion rates to the vaccine at 2 months were 80% and 78% in the short and standard protocol groups, respectively (P = 0.99). Median anti-HBs levels were similar up to 6 months of follow up, but patients in the standard protocol showed a trend to higher anti-HBs in month 3 and a more steady decline in antibody titres. Non-responders were older, had a longer duration of dialysis and a higher prevalence of a prior renal transplant and hepatitis C. In multivariate analysis, only advanced age and hepatitis C remained independently associated with non-responsiveness to vaccination. Conclusion: In haemodialysis patients, a short vaccination protocol against hepatitis B did not provide any benefit compared to the standard approach with respect to peak anti-HBs titres or a higher rate of seroprotection at the end of follow up. Other strategies to increase seroconversion rates should be explored, especially in the elderly and in patients with hepatitis C. [source]


As mental health nursing roles expand, is education expanding mental health nurses? an emotionally intelligent view towards preparation for psychological therapies and relatedness

NURSING INQUIRY, Issue 3 2008
John Hurley
Mental health nurses (MHN) in the UK currently occupy a challenging position. This positioning is one that offers a view of expanding roles and responsibilities in both mental health act legislation and the delivery of psychological therapies, while simultaneously generic pre-registration training is being considered. Clearly, the view from this position, although not without challenge and internal discipline dispute, can also offer growing professional prestige, influence and respect from other health disciplines, as well as the wider public. Conversely, if the training, education and strategic enactment for new MHN roles is formulated and delivered from predominantly non-MHN axiomatic and epistemological stances, MHN identity can be seriously and potentially permanently diminished. This paper offers the construct of emotional intelligence as a framework to respond to these future challenges by making individual MHN enablement a primacy. This enablement of MHNs through enhanced emotional intelligence competencies is argued to require priority over the standard approach of enhancing strategies alone. [source]


Using distributed optimal control in economics: A numerical approach based on the finite element method

OPTIMAL CONTROL APPLICATIONS AND METHODS, Issue 5-6 2001
Elena Calvo Calzada
Abstract The finite element method is employed to transform a distributed optimal control problem into a discrete optimization problem which can be solved with standard mathematical techniques. We use distributed optimal control to determine the optimal selective logging regime of a privately owned non-homogeneous forest. This regime has nowadays become a standard approach for the management of public forests. However, resource economists have not yet come forward with an economic model for this widely applied management technique and still base their economic analysis of the optimal management of a homogeneous forest on a clear-cutting regime utilizing the Faustmann–Pressler–Ohlin Theorem. Copyright © 2001 John Wiley & Sons, Ltd. [source]
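A minimal sketch of the general idea (FEM discretization turning a distributed optimal control problem into a finite-dimensional optimization): a linear-quadratic tracking problem for -y'' = u on [0, 1], discretized with linear elements and handed to a standard optimizer. This is a generic textbook example, not the forest-management model; the target state and the cost weight alpha are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

n = 21                      # number of FEM nodes on [0, 1]
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)

# 1D linear-FEM stiffness (K) and lumped mass (M) matrices
K = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / h
M = np.diag(h * np.ones(n)); M[0, 0] = M[-1, -1] = h / 2
free = np.arange(1, n - 1)                 # interior nodes (y(0) = y(1) = 0)

y_target = np.sin(np.pi * x)               # hypothetical desired state
alpha = 1e-4                               # control-cost weight (assumed)

def state(u):
    """Solve the discrete state equation K y = M u on the interior nodes."""
    y = np.zeros(n)
    y[free] = np.linalg.solve(K[np.ix_(free, free)], (M @ u)[free])
    return y

def J(u):
    """Discrete objective: tracking error plus control cost."""
    e = state(u) - y_target
    return 0.5 * e @ M @ e + 0.5 * alpha * u @ M @ u

res = minimize(J, np.zeros(n), method="L-BFGS-B")   # standard NLP solver
print("optimal cost:", res.fun)
```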


Cut-off value of red-blood-cell-bound IgG for the diagnosis of Coombs-negative autoimmune hemolytic anemia

AMERICAN JOURNAL OF HEMATOLOGY, Issue 2 2009
Toyomi Kamesaki
Direct antiglobulin test (DAT)-negative autoimmune hemolytic anemia (Coombs-negative AIHA) is characterized by laboratory evidence of in vivo hemolysis, together with a negative DAT performed by the conventional tube technique (CTT), in clinically suspected AIHA patients. The immunoradiometric assay (IRMA) for red-blood-cell-bound immunoglobulin G (RBC-IgG) can be used to diagnose patients in whom CTT does not detect low levels of red cell autoantibodies. We investigated the diagnostic cutoff value of the IRMA for RBC-IgG in Coombs-negative AIHA and calculated its sensitivity and specificity. Of the 140 patients with negative DAT by CTT referred to our laboratory with undiagnosed hemolytic anemia, AIHA was clinically diagnosed in 64 patients (Coombs-negative AIHA). The numbers of Coombs-negative AIHA and non-AIHA patients changed with age and gender. The cutoff values were determined from receiver operating characteristic (ROC) curves according to age and gender. The IRMA for RBC-IgG proved to be sensitive (71.4%) and specific (87.8%) when using these cutoffs. Applying these cutoffs to 41 patients with negative DAT referred to our laboratory in 2006, all the pseudonegative cases had been treated with steroids before the test. The 31 untreated cases could be grouped using a single cutoff value of 78.5 and showed 100% sensitivity and 94% specificity, independent of gender and age. Results indicate that RBC-IgG could become a standard approach for the diagnosis of Coombs-negative AIHA when measured before treatment. Am. J. Hematol., 2009. © 2008 Wiley-Liss, Inc. [source]
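A sketch of how a diagnostic cut-off can be read off an ROC curve using the Youden index; the simulated RBC-IgG values and group sizes are hypothetical and unrelated to the study data.

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_cutoff(y_true, marker):
    """Choose the marker cut-off that maximizes sensitivity + specificity - 1
    (Youden index) from the ROC curve."""
    fpr, tpr, thresholds = roc_curve(y_true, marker)
    best = np.argmax(tpr - fpr)
    return thresholds[best], tpr[best], 1.0 - fpr[best]   # cutoff, sensitivity, specificity

# Hypothetical RBC-IgG measurements: 1 = clinically diagnosed AIHA, 0 = non-AIHA
rng = np.random.default_rng(0)
y = np.r_[np.ones(60), np.zeros(80)]
rbc_igg = np.r_[rng.normal(120, 40, 60), rng.normal(55, 20, 80)]
print(youden_cutoff(y, rbc_igg))
```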