Methods Fail

Selected Abstracts


Critical Evaluation of How the Rosgen Classification and Associated "Natural Channel Design" Methods Fail to Integrate and Quantify Fluvial Processes and Channel Response

JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 5 2007
A. Simon
Abstract: Over the past 10 years the Rosgen classification system and its associated methods of "natural channel design" have become synonymous to some with the term "stream restoration" and the science of fluvial geomorphology. Since the mid 1990s, this classification approach has become widely adopted by governmental agencies, particularly those funding restoration projects. The purposes of this article are to present a critical review, highlight inconsistencies, and identify technical problems of Rosgen's "natural channel design" approach to stream restoration. This paper's primary thesis is that alluvial streams are open systems that adjust to altered inputs of energy and materials, and that a form-based system largely ignores this critical component. Problems with the use of the classification are encountered in identifying bankfull dimensions, particularly in incising channels, and in the mixing of bed and bank sediment into a single population. Its use for engineering design and restoration may be flawed by ignoring some processes governed by force and resistance, and the imbalance between sediment supply and transporting power in unstable systems. An example of how C5 channels composed of different bank sediments adjust differently, and to different equilibrium morphologies, in response to an identical disturbance is shown. This contradicts the fundamental underpinning of "natural channel design" and the "reference-reach approach." The Rosgen classification is probably best applied as a communication tool to describe channel form but, in combination with "natural channel design" techniques, is not diagnostic of how to mitigate channel instability or predict equilibrium morphologies. For this, physically based, mechanistic approaches that rely on quantifying the driving and resisting forces that control active processes and ultimate channel morphology are better suited, as the physics of erosion, transport, and deposition are the same regardless of the hydro-physiographic province or stream type because of the uniformity of physical laws. [source]
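For readers unfamiliar with the mechanistic alternative the abstract advocates, one commonly used way to quantify driving versus resisting forces acting on a channel boundary is the excess shear stress relation sketched below. This is a generic illustration only, not a formulation taken from the paper.

```latex
% Excess shear stress relation for hydraulic erosion (generic illustration, not from the paper):
%   \varepsilon : erosion rate,  k_d : erodibility coefficient,
%   \tau_0 : applied boundary shear stress (the driving force),
%   \tau_c : critical shear stress of the bed or bank material (the resisting force).
\[
\varepsilon =
\begin{cases}
  k_d\,(\tau_0 - \tau_c), & \tau_0 > \tau_c,\\
  0, & \tau_0 \le \tau_c.
\end{cases}
\]
```

In this framing, instability reflects an imbalance between applied force and material resistance, which is exactly the kind of process information a purely form-based classification does not capture.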


Dense array EEG: Methodology and new hypothesis on epilepsy syndromes

EPILEPSIA, Issue 2008
Mark D. Holmes
Summary Dense array EEG is a method of recording electroencephalography (EEG) with many more electrodes (up to 256) than are used with standard techniques, which typically employ 19–21 scalp electrodes. The rationale for this approach is to enhance the spatial resolution of scalp EEG. In our research, dense array EEG is used in conjunction with a realistic model of head tissue conductivity and methods of electrographic source analysis to determine cerebral cortical localization of epileptiform discharges. In studies of patients with absence seizures, only localized cortical regions are involved during the attack. Typically, absences are accompanied by "wave–spike" complexes that show, both at the beginning and throughout the ictus, repetitive cycles of stereotyped, localized involvement of mainly mesial and orbital frontal cortex. Dense array EEG can also be used for long-term EEG video monitoring (LTM). We have used dense array EEG LTM to capture seizures in over 40 patients with medically refractory localization-related epilepsy, including both temporal and extratemporal cases, where standard LTM failed to reveal reliable ictal localization. One research goal is to test the validity of dense array LTM findings by comparison with invasive LTM and surgical outcome. Collection of a prospective series of surgical candidates who undergo both procedures is currently underway. Analysis of subjects with either generalized or localization-related seizures suggests that all seizures, including those traditionally classified as "generalized," propagate through discrete cortical networks. Furthermore, based on initial review of propagation patterns, we hypothesize that all epileptic seizures may be fundamentally corticothalamic or corticolimbic in nature. Dense array EEG may prove useful in noninvasive ictal localization when standard methods fail. Future research will determine whether the method will reduce the need for invasive EEG recordings, or assist in the appropriate placement of novel treatment devices. [source]


Joint inversion of multiple data types with the use of multiobjective optimization: problem formulation and application to the seismic anisotropy investigations

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2007
E. Kozlovskaya
SUMMARY In geophysical studies the problem of joint inversion of multiple experimental data sets obtained by different methods is conventionally considered as a scalar one. Namely, a solution is found by minimization of a linear combination of functions describing the fit of the values predicted from the model to each set of data. In the present paper we demonstrate that this standard approach is not always justified and propose to consider a joint inversion problem as a multiobjective optimization problem (MOP), for which the misfit function is a vector. The method is based on analysis of two types of solutions to MOP considered in the space of misfit functions (objective space). The first one is a set of complete optimal solutions that minimize all the components of a vector misfit function simultaneously. The second one is a set of Pareto optimal solutions, or trade-off solutions, for which it is not possible to decrease any component of the vector misfit function without increasing at least one other. We investigate the connection between the standard formulation of a joint inversion problem and the multiobjective formulation and demonstrate that the standard formulation is a particular case of scalarization of a multiobjective problem using a weighted sum of component misfit functions (objectives). We illustrate the multiobjective approach with a non-linear problem of the joint inversion of shear wave splitting parameters and longitudinal wave residuals. Using synthetic data and real data from three passive seismic experiments, we demonstrate that random noise in the data and inexact model parametrization destroy the complete optimal solution, which degenerates into a fairly large Pareto set. As a result, non-uniqueness of the problem of joint inversion increases. If the random noise in the data is the only source of uncertainty, the Pareto set expands around the true solution in the objective space. In this case the 'ideal point' method of scalarization of multiobjective problems can be used. If the uncertainty is due to inexact model parametrization, the Pareto set in the objective space deviates strongly from the true solution. In this case all scalarization methods fail to find a solution close to the true one and a change of model parametrization is necessary. [source]
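To make the contrast between the scalar and multiobjective formulations concrete, the sketch below builds a toy two-objective joint-inversion problem and compares a weighted-sum scalarization with a direct enumeration of Pareto-optimal (non-dominated) models. The model parameter, the two misfit functions, and the sampling grid are hypothetical; this is not the authors' implementation.

```python
import numpy as np

# Toy joint-inversion problem: one scalar model parameter m and two data sets.
# Each misfit plays the role of one component of the vector misfit function.
def misfit_splitting(m):     # fit to shear-wave splitting parameters (hypothetical)
    return (m - 1.0) ** 2

def misfit_residuals(m):     # fit to longitudinal-wave residuals (hypothetical)
    return (m - 1.5) ** 2

candidates = np.linspace(0.0, 3.0, 301)
F = np.array([[misfit_splitting(m), misfit_residuals(m)] for m in candidates])

# Standard (scalar) formulation: minimize a weighted sum of the two objectives.
w = np.array([0.5, 0.5])
best_scalar = candidates[np.argmin(F @ w)]

# Multiobjective formulation: keep every non-dominated (Pareto-optimal) candidate,
# i.e. a model for which no other model is at least as good in both objectives
# and strictly better in at least one.
def is_dominated(i):
    at_least_as_good = np.all(F <= F[i], axis=1)
    strictly_better = np.any(F < F[i], axis=1)
    return np.any(at_least_as_good & strictly_better)

pareto_models = [m for i, m in enumerate(candidates) if not is_dominated(i)]

print(f"weighted-sum solution: m = {best_scalar:.2f}")
print(f"Pareto set spans m in [{min(pareto_models):.2f}, {max(pareto_models):.2f}]")
```

Here the two objectives disagree (minima at m = 1.0 and m = 1.5), so the Pareto set is the whole interval between them; if the objectives agreed exactly, the set would collapse to a single complete optimal solution, which mirrors the degeneracy the abstract describes.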


Selecting significant factors by the noise addition method in principal component analysis

JOURNAL OF CHEMOMETRICS, Issue 7 2001
Brian K. Dable
Abstract The noise addition method (NAM) is presented as a tool for determining the number of significant factors in a data set. The NAM is compared to residual standard deviation (RSD), the factor indicator function (IND), chi-squared (χ2) and cross-validation (CV) for establishing the number of significant factors in three data sets. The comparison and validation of the NAM are performed through Monte Carlo simulations with noise distributions of varying standard deviation, HPLC/UV-vis chromatograms of a mixture of aromatic hydrocarbons, and FIA of methyl orange. The NAM succeeds in correctly identifying the proper number of significant factors 98% of the time with the simulated data, 99% with the HPLC data sets and 98% with the FIA data. RSD and χ2 fail to choose the proper number of factors in all three data sets. IND identifies the correct number of factors in the simulated data sets but fails with the HPLC and FIA data sets. Both CV methods fail in the HPLC and FIA data sets. Standard CV also fails for the simulated data sets, while the modified CV correctly chooses the proper number of factors an average of 80% of the time. Copyright © 2001 John Wiley & Sons, Ltd. [source]
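The abstract names the noise addition method without spelling out its procedure, so the sketch below is only a simplified illustration of the general idea on a hypothetical data matrix: noise of known standard deviation is added and the singular values of the augmented data are compared against those of a pure-noise matrix of the same size. The published method may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mixture data: 50 samples x 100 channels built from 3 true factors.
true_rank, noise_sd = 3, 0.05
scores = rng.normal(size=(50, true_rank))
loadings = rng.normal(size=(true_rank, 100))
X = scores @ loadings + rng.normal(scale=noise_sd, size=(50, 100))

# Simplified noise-addition idea (not the authors' exact algorithm): add noise of
# known standard deviation, then count the singular values that stand clearly
# above those of a pure-noise matrix with the same dimensions and total noise level.
added_sd = 0.05
X_noisy = X + rng.normal(scale=added_sd, size=X.shape)

sv_data = np.linalg.svd(X_noisy, compute_uv=False)
total_sd = np.hypot(noise_sd, added_sd)
sv_noise = np.linalg.svd(rng.normal(scale=total_sd, size=X.shape), compute_uv=False)

n_significant = int(np.sum(sv_data > sv_noise.max()))
print(f"estimated number of significant factors: {n_significant}")  # 3 for this example
```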


Forecasting market impact costs and identifying expensive trades

JOURNAL OF FORECASTING, Issue 1 2008
Jacob A. Bikker
Abstract Often, a relatively small group of trades causes the major part of the trading costs on an investment portfolio. Consequently, reducing the trading costs of these comparatively few expensive trades would by itself result in substantial savings on total trading costs. Since trading costs depend to some extent on steering variables, investors can try to lower them by carefully controlling these factors. As a first step in this direction, this paper focuses on the identification of expensive trades before actual trading takes place. However, forecasting market impact costs appears notoriously difficult, and traditional methods fail. Therefore, we propose two alternative methods to form expectations about future trading costs. Applied to the equity trades of the world's second largest pension fund, both methods succeed in filtering out a considerable number of trades with high trading costs and substantially outperform no-skill prediction methods. Copyright © 2008 John Wiley & Sons, Ltd. [source]


A feed forward method for stabilizing the gain and output power of an erbium-doped fiber amplifier

MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 9 2009
N. Vijayakumar
Abstract The gain and the output power of an erbium-doped fiber amplifier (EDFA) are found to be sensitive to input power level fluctuations. Many feedback methods have been proposed to stabilize the gain of an EDFA. These feedback methods are based on sampling and detecting the output light and using it to control the pump laser. Such methods fail when the amplifier is expected to repeat a multi-gigabit data stream. It then becomes necessary to use more complex all-optical feedback methods to stabilize the gain. In this article, we propose a novel feed forward technique for stabilizing the output power or gain of an EDFA. A lithium niobate (LiNbO3)-based Mach–Zehnder interferometer is used to modulate the intensity of the pump light in accordance with the input signal level so as to maintain constant output power or gain. We observe that the gain or output power of an EDFA can be stabilized by proper selection of the parameters of the EDFA and the modulator. © 2009 Wiley Periodicals, Inc. Microwave Opt Technol Lett 51: 2156–2160, 2009; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.24554 [source]
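As a rough numerical sketch of the feed-forward idea, the snippet below computes the pump level from the measured input power alone so that the gain stays at a target value. The saturation-gain law and the numbers (unsaturated gain, pump-to-saturation-power coefficient, target gain) are assumptions made for illustration, not parameters taken from the article.

```python
# Toy feed-forward gain control for an EDFA (illustrative model, not the authors' device).
G_TARGET = 100.0         # desired linear gain (20 dB), assumed
G0 = 400.0               # unsaturated gain of the doped fibre, assumed
PSAT_PER_PUMP = 0.5      # saturation power per mW of pump power (mW/mW), assumed

def gain(p_in_mw, p_pump_mw, iters=200):
    """Saturated gain from a simple homogeneous-saturation law,
    G = G0 / (1 + G * P_in / P_sat), solved by fixed-point iteration."""
    p_sat = PSAT_PER_PUMP * p_pump_mw
    g = G0
    for _ in range(iters):
        g = G0 / (1.0 + g * p_in_mw / p_sat)
    return g

def feedforward_pump(p_in_mw):
    """Pump power that holds the gain at G_TARGET for the measured input power.
    In hardware this mapping would be realised by driving the Mach-Zehnder
    modulator on the pump in accordance with the detected input signal level."""
    # G_TARGET = G0 / (1 + G_TARGET * P_in / P_sat)  =>  P_sat = G_TARGET * P_in / (G0/G_TARGET - 1)
    p_sat_needed = G_TARGET * p_in_mw / (G0 / G_TARGET - 1.0)
    return p_sat_needed / PSAT_PER_PUMP

for p_in in (0.001, 0.005, 0.02):   # fluctuating input power, mW
    p_pump = feedforward_pump(p_in)
    print(f"P_in = {p_in * 1e3:5.1f} uW -> pump = {p_pump:5.2f} mW, gain = {gain(p_in, p_pump):6.1f}")
```

Even though the input varies by a factor of 20, the computed gain stays at the target because the pump tracks the input level, which is the essence of the feed-forward scheme.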


A synergistic approach to protein crystallization: Combination of a fixed-arm carrier with surface entropy reduction

PROTEIN SCIENCE, Issue 5 2010
Andrea F. Moon
Abstract Protein crystallographers are often confronted with recalcitrant proteins not readily crystallizable, or which crystallize in problematic forms. A variety of techniques have been used to surmount such obstacles: crystallization using carrier proteins or antibody complexes, chemical modification, surface entropy reduction, proteolytic digestion, and additive screening. Here we present a synergistic approach for successful crystallization of proteins that do not form diffraction-quality crystals using conventional methods. This approach combines favorable aspects of carrier-driven crystallization with surface entropy reduction. We have generated a series of maltose binding protein (MBP) fusion constructs containing different surface mutations designed to reduce surface entropy and encourage crystal lattice formation. The MBP advantageously increases protein expression and solubility, and provides a streamlined purification protocol. Using this technique, we have successfully solved the structures of three unrelated proteins that were previously unattainable. This crystallization technique represents a valuable rescue strategy for protein structure solution when conventional methods fail. [source]


The Future of a Discipline: Considering the ontological/methodological future of the anthropology of consciousness, Part I

ANTHROPOLOGY OF CONSCIOUSNESS, Issue 1 2010
Toward a New Kind of Science and its Methods of Inquiry
ABSTRACT The broad focus of this article is a call for an expanded framework of EuroAmerican science's methodology, one whose perspective acknowledges both quantitative/etic and qualitative/emic orientations. More specifically, this article argues that our understanding of shamanic and/or other related states of consciousness has been greatly enhanced through ethnographic methods, yet in their present form these methods fail to provide the means to fully comprehend these states. They fail, or are limited, because this approach yields only a "cognitive interpretation" or "metanarrative" of the actual experience and not the experience itself. Consequently this perspective is also limited because researchers continue to assess their data through the lens of their symbolic constructs, thereby preventing them from truly experiencing shamanic and psi/spirit approaches to knowing, since the data collection process does not "in and of itself" affect the observer. We therefore need expanded ethnographic methods that include within their approaches an understanding of methods and techniques to experientially encounter these states of consciousness, and to become transformed by them. Our becoming transformed and then recollecting our ethnoautobiographical experiences is the means toward a new kind of science and its methods of inquiry that this article seeks to encourage. [source]