Practical Purposes (practical + purpose)



Selected Abstracts


Shape reconstruction of an inverse boundary value problem of two-dimensional Navier–Stokes equations

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 6 2010
Wenjing Yan
Abstract This paper is concerned with the problem of the shape reconstruction of two-dimensional flows governed by the Navier–Stokes equations. Our objective is to derive a regularized Gauss–Newton method using the corresponding operator equation in which the unknown is the geometric domain. The theoretical foundation for the Gauss–Newton method is given by establishing the differentiability of the initial boundary value problem with respect to the boundary curve in the sense of a domain derivative. The numerical examples show that our theory is useful for practical purposes and that the proposed algorithm is feasible. Copyright © 2009 John Wiley & Sons, Ltd. [source]
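The regularized Gauss–Newton iteration at the heart of this abstract can be sketched generically: at each step, solve a Tikhonov-damped normal system for the update. The sketch below is a minimal illustration on a toy least-squares problem, not the paper's shape-reconstruction operator; the residual, Jacobian, and regularization parameter `alpha` are all illustrative assumptions.

```python
import numpy as np

def gauss_newton_regularized(residual, jacobian, x0, alpha=1e-2, tol=1e-10, max_iter=50):
    """Regularized Gauss-Newton: solve (J^T J + alpha*I) dx = -J^T r at each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J + alpha * np.eye(x.size), -J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy inverse problem: recover (a, b) from noise-free samples of a*exp(b*t).
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
p_hat = gauss_newton_regularized(res, jac, x0=[1.0, 0.0])
```

The damping term `alpha * I` plays the role of the regularization needed because the underlying inverse problem is ill-posed; in the paper the unknown is a boundary curve rather than a parameter vector.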


Bodensystematik und Bodenklassifikation Teil I: Grundbegriffe (Soil systematics and classification systems, Part I: Fundamentals)

JOURNAL OF PLANT NUTRITION AND SOIL SCIENCE, Issue 1 2005
Christoph Albrecht
Abstract Soil-ordering systems are generally developed according to one of two principles: either purely pedological information (pedogenetic factors and processes) serves as the categorizing criterion, or categories are formed problem-oriented from selected parameters. Most soil-ordering systems used worldwide can be assigned, by their basic orientation, to one of these two types. This perspective is not new, but it appears in the literature under differing terms and definitions, and terms with very different meanings are often used synonymously. In this contribution, the various definitions of systematics, classification, taxonomy, and identification are summarized and clearly differentiated. The core of our work is the separation of three terms: systematics, classification, and identification. Systematics is the fundamental scientific and deductive ordering of objects into systematic units; its purpose is to organize the entire knowledge of a discipline into a transparent and manageable form, with emphasis both on the comprehensive description of individual objects and on the relations between the objects.
Classification, in direct contrast to systematics, is a goal-oriented, inductive ordering of objects. Thus, the ordering scheme consists of classes which are delimited only by selected parameters, allowing a quick overview for specific questions. Identification is the ordering of new objects into an already existing systematics or classification system. Close attention is paid to both the differences and the similarities between a systematics and a classification system, especially pertaining to their practical applications. Identification requires that the category-forming characteristics can be measured (for a genetically based soil systematics, these are the soil-forming processes and factors). At the current state of knowledge, most soil-forming processes cannot be objectively quantified. Thus, most attempts at categorizing soils by systematics remain hypothetical and thereby subjective. The resulting identification derived from the soil-systematics approach is open to question and contestable, since it cannot be verified by measured values. In contrast, a soil-classification system does allow an objective soil-profile identification, although such systems are conceived pragmatically and designed for practical purposes (i.e., not scientifically based on process intensities). For this reason, such a classification system cannot serve as the fundamental ordering system of a scientific discipline. Both categorization approaches are required in soil science in order to satisfy both the practical and the scientific demands of the field. However, substantial research must be done to complete and verify the systematics. The only viable short-term solution is the development of a parameter-based classification system whose categories reproduce those of the existing systematics as closely as possible. In the long run, the exact investigation and detailed modelling of the soil-forming processes are indispensable. [source]


Prevention of life-threatening infections due to encapsulated bacteria in children with hyposplenia or asplenia: a brief review of current recommendations for practical purposes

EUROPEAN JOURNAL OF HAEMATOLOGY, Issue 5 2003
Elio Castagnola
Abstract: The aim of the present work was to summarise in a single paper all the options for prevention of life-threatening infections due to encapsulated bacteria in patients with hyposplenism or asplenia. Prevention of these infections should be pursued in all patients through: 1) patient and family education; 2) prophylaxis by vaccination against Haemophilus influenzae and Streptococcus pneumoniae; 3) antibiotic prophylaxis, based primarily on penicillin; and 4) delay of elective splenectomy or use of tissue-salvage methods in splenic trauma. Vaccination is not effective against all serotypes of S. pneumoniae and Neisseria meningitidis that cause life-threatening infections in hypo-/asplenic patients. Moreover, antibacterial prophylaxis may select antibacterial-resistant pathogens and is highly dependent on patient compliance. Therefore, empirical antibacterial therapy for fever and/or suspected infection should be recommended to all splenectomised patients, independently of the time elapsed since splenectomy, vaccination status, and use of antibacterial prophylaxis. [source]


Experience in calibrating the double-hardening constitutive model Monot

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 13 2003
M. A. Hicks
The Monot double-hardening soil model has previously been implemented within a general purpose finite element algorithm, and used in the analysis of numerous practical problems. This paper reviews experience gained in calibrating Monot to laboratory data and demonstrates how the calibration process may be simplified without detriment to the range of behaviours modelled. It describes Monot's principal features, important governing equations and various calibration methods, including strategies for overconsolidated, cemented and cohesive soils. Based on a critical review of over 30 previous Monot calibrations, for sands and other geomaterials, trends in parameter values have been identified, enabling parameters to be categorized according to their relative importance. It is shown that, for most practical purposes, a maximum of only 5 parameters is needed; for the remaining parameters, standard default values are suggested. Hence, the advanced stress–strain modelling offered by Monot is attainable with a similar number of parameters as would be needed for some simpler, less versatile, models. Copyright © 2003 John Wiley & Sons, Ltd. [source]


A volume-of-fluid method for incompressible free surface flows

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 12 2009
I. R. Park
Abstract This paper proposes a hybrid volume-of-fluid (VOF) level-set method for simulating incompressible two-phase flows. Motion of the free surface is represented by a VOF algorithm that uses high resolution differencing schemes to algebraically preserve both the sharpness of the interface and the boundedness of the volume fraction. The VOF method is based on a simple high resolution scheme of lower order than those used in comparable methods, yet it achieves a nearly equivalent order of accuracy. Retaining the mass conservation property, the hybrid algorithm couples the proposed VOF method with a level-set distancing algorithm in an implicit manner when the normal and the curvature of the interface need to be accurate for the computation of surface tension. For practical purposes, it is developed to be efficiently and easily extensible to three-dimensional applications with minor implementation complexity. The accuracy and convergence properties of the method are verified through a wide range of tests: advection of rigid interfaces of different shapes, a three-dimensional air bubble rising in viscous liquids, a two-dimensional dam-break, and a three-dimensional dam-break over an obstacle mounted on the bottom of a tank. The standard advection tests show that the volume advection algorithm is comparable in accuracy with geometric interface reconstruction algorithms, which are of higher accuracy than other interface capturing-based methods found in the literature. The numerical results for the remainder of the tests show good agreement with other numerical solutions or available experimental data. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Performance of algebraic multi-grid solvers based on unsmoothed and smoothed aggregation schemes

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 7 2001
R. Webster
Abstract A comparison is made of the performance of two algebraic multi-grid (AMG0 and AMG1) solvers for the solution of discrete, coupled, elliptic field problems. In AMG0, the basis functions for each coarse grid/level approximation (CGA) are obtained directly by unsmoothed aggregation, an appropriate scaling being applied to each CGA to improve consistency. In AMG1 they are assembled using a smoothed aggregation with a constrained energy optimization method providing the smoothing. Although more costly, smoothed basis functions provide a better (more consistent) CGA. Thus, AMG1 might be viewed as a benchmark for the assessment of the simpler AMG0. Selected test problems for Darcy flow in pipe networks, Fick diffusion, plane strain elasticity and Navier–Stokes flow (in a Stokes approximation) are used in making the comparison. They are discretized on the basis of both structured and unstructured finite element meshes. The range of discrete equation sets covers both symmetric positive definite systems and systems that may be non-symmetric and/or indefinite. Both global and local mesh refinements to at least one order of magnitude in resolving power are examined. Some of these include anisotropic refinements involving elements of large aspect ratio; in some hydrodynamics cases, the anisotropy is extreme, with aspect ratios exceeding two orders of magnitude. As expected, AMG1 delivers typical multi-grid convergence rates, which for all practical purposes are independent of mesh bandwidth. AMG0 rates are slower. They may also be more discernibly mesh-dependent. However, for the range of mesh bandwidths examined, the overall cost effectiveness of the two solvers is remarkably similar when a full convergence to machine accuracy is demanded. Thus, the shorter solution times for AMG1 do not necessarily compensate for the extra time required for its costly grid generation. This depends on the severity of the problem and the demanded level of convergence.
For problems requiring few iterations, where grid generation costs represent a significant penalty, AMG0 has the advantage. For problems requiring a large investment in iterations, AMG1 has the edge. However, for the toughest problems addressed (vector and coupled vector–scalar fields discretized exclusively using finite elements of extreme aspect ratio) AMG1 is more robust: AMG0 has failed on some of these tests. Were it not for this deficiency, AMG0 would be the preferred linear approximation solver for Navier–Stokes solution algorithms in view of its much lower grid generation costs. Copyright © 2001 John Wiley & Sons, Ltd. [source]
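The difference between the two solvers comes down to how the coarse-space basis is built. A minimal sketch of the AMG0 side, unsmoothed aggregation with a piecewise-constant prolongator inside a two-level cycle, might look as follows. The 1D Poisson model problem, pair-wise aggregates, and damped-Jacobi smoothing are illustrative assumptions; AMG1's energy-optimized smoothing of the basis functions is not shown.

```python
import numpy as np

def poisson_1d(n):
    """Tridiagonal 1D Poisson (Dirichlet) stiffness matrix."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def aggregation_prolongator(n, agg_size=2):
    """Unsmoothed aggregation: one piecewise-constant column per aggregate."""
    n_agg = (n + agg_size - 1) // agg_size
    P = np.zeros((n, n_agg))
    for i in range(n):
        P[i, i // agg_size] = 1.0
    return P

def two_level_solve(A, b, P, omega=2.0 / 3.0, tol=1e-8, max_it=1000):
    """Two-grid cycle: damped-Jacobi smoothing + exact Galerkin coarse correction."""
    Ac = P.T @ A @ P                                   # Galerkin coarse operator
    Dinv = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    for it in range(max_it):
        x = x + omega * Dinv * (b - A @ x)             # pre-smoothing
        r = b - A @ x
        x = x + P @ np.linalg.solve(Ac, P.T @ r)       # coarse-grid correction
        x = x + omega * Dinv * (b - A @ x)             # post-smoothing
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            return x, it + 1
    return x, max_it

n = 64
A = poisson_1d(n)
b = np.ones(n)
x, iters = two_level_solve(A, b, aggregation_prolongator(n))
```

Smoothed aggregation would post-process the columns of `P` (e.g., by a damped-Jacobi step on `P`) before forming the Galerkin operator, which is exactly the extra grid-generation cost the abstract weighs against AMG0's simplicity.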


Changes in professional conceptions of suicide prevention among psychologists: using a conceptual model

JOURNAL OF COMMUNITY & APPLIED SOCIAL PSYCHOLOGY, Issue 2 2002
Maila Upanne
Abstract This prospective follow-up study monitored the evolution of psychologists' conceptions of suicide prevention over the course of their participation in psychological autopsy studies that constituted the first phase of the National Suicide Prevention Project in Finland. Another purpose of the study was to consider the feasibility of an earlier suicide prevention model. Ideas on prevention were compared in two different situations and items were categorized using descriptive and conceptual criteria of prevention. They could be classified into a typology of four categories: care approach, cultural-educational approach, conditions approach, and critical approach. The follow-up suggested that the model is a feasible method for analysing conceptions of suicide prevention, and that it was possible to interpret conceptions in a theoretically adequate manner. In addition, ideas could be compared with certain known theoretical models of prevention. The model could thus be used in further research and for practical purposes. Experiences of psychological autopsy studies definitely had an impact on the psychologists' views; conceptions altered towards emphasizing the care approach and individual risk factors. Nonetheless, the overall structure of the prevention paradigm remained multifactorial, stressing multistage influencing. Surprisingly, the priority of acute suicide risk as a preventive target did not increase. Promotive aims remained the most important aim category. Copyright © 2002 John Wiley & Sons, Ltd. [source]


THE TEN COMMANDMENTS FOR OPTIMIZING VALUE-AT-RISK AND DAILY CAPITAL CHARGES

JOURNAL OF ECONOMIC SURVEYS, Issue 5 2009
Michael McAleer
Abstract Credit risk is the most important type of risk in terms of monetary value. Another key risk measure is market risk, which is concerned with stocks and bonds, and related financial derivatives, as well as exchange rates and interest rates. This paper is concerned with market risk management and monitoring under the Basel II Accord, and presents Ten Commandments for optimizing value-at-risk (VaR) and daily capital charges, based on choosing wisely from (1) conditional, stochastic and realized volatility; (2) symmetry, asymmetry and leverage; (3) dynamic correlations and dynamic covariances; (4) single index and portfolio models; (5) parametric, semi-parametric and non-parametric models; (6) estimation, simulation and calibration of parameters; (7) assumptions, regularity conditions and statistical properties; (8) accuracy in calculating moments and forecasts; (9) optimizing threshold violations and economic benefits; and (10) optimizing private and public benefits of risk management. For practical purposes, it is found that the Basel II Accord would seem to encourage excessive risk taking at the expense of providing accurate measures and forecasts of risk and VaR. [source]
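As a rough illustration of the quantities being optimized, a one-day parametric VaR and a Basel II-style daily capital charge can be computed as below. This is a hedged sketch: the normal-VaR formula, the (3 + k) multiplier on the 60-day average VaR, and all numbers are simplified assumptions, not the paper's recommended volatility models.

```python
from statistics import NormalDist

def var_forecast(sigma_daily, position=1.0, alpha=0.99):
    """One-day parametric (normal) VaR for a long position, as a positive number."""
    z = NormalDist().inv_cdf(alpha)
    return position * z * sigma_daily

def daily_capital_charge(var_today, mean_var_60, k=0.0):
    """Basel II-style market risk charge: max of today's VaR and (3 + k) times
    the 60-day average VaR; k in [0, 1] is driven by backtest violations."""
    return max(var_today, (3.0 + k) * mean_var_60)

sigma = 0.012                        # assumed daily volatility forecast (1.2%)
var_t = var_forecast(sigma)          # 99% one-day VaR per unit of exposure
charge = daily_capital_charge(var_t, mean_var_60=0.025, k=0.0)
```

The tension the abstract describes is visible here: a bank can lower `charge` either by forecasting risk well (small, accurate `var_t`) or by gaming the violation penalty `k`, which is what the authors argue the Accord inadvertently encourages.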


MARGINAL COMMODITY TAX REFORMS: A SURVEY

JOURNAL OF ECONOMIC SURVEYS, Issue 4 2007
Alessandro Santoro
Abstract As noted 30 years ago by Martin Feldstein, optimal taxes may be useless for practical purposes and emphasis should instead be placed on the possibility of enhancing welfare by reforming existing tax rates. In this perspective, marginal commodity tax reforms are gaining increasing attention due to political and economic constraints on large reforms of direct (or indirect) taxation. In this paper, we summarize the main features and results of the literature on marginal commodity tax reforms pioneered by Ahmad and Stern, further developed by Yitzhaki and Thirsk and recently reinterpreted by Makdissi and Wodon. We establish new links to other fields of research, namely the literature on the use of equivalence scales and on poverty measurement. We also critically examine some issues associated with the implementation of marginal tax reforms with special reference to the calculation of welfare weights and revenue effects. Finally, we suggest directions for future research on poverty-reducing commodity tax reforms. [source]
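The Ahmad–Stern machinery the survey builds on reduces, at the margin, to comparing a welfare cost and a revenue effect per commodity. A toy sketch follows; the welfare weights, consumption matrix, tax rates, and demand derivatives are all invented for illustration.

```python
import numpy as np

def marginal_cost_of_funds(welfare_weights, consumption, taxes, dX_dt):
    """Ahmad-Stern style marginal welfare cost per unit of extra revenue:
    lambda_i = sum_h beta_h * x_hi / (X_i + sum_j t_j * dX_j/dt_i),
    with dX_dt[i, j] = dX_j / dt_i. Raising a low-lambda tax while cutting a
    high-lambda tax is welfare-improving at the margin."""
    numer = welfare_weights @ consumption        # distributional cost per good
    X = consumption.sum(axis=0)                  # aggregate consumption per good
    denom = X + dX_dt @ taxes                    # marginal revenue of raising t_i
    return numer / denom

beta = np.array([1.5, 1.0, 0.5])                     # weights: poor to rich household
x = np.array([[3.0, 1.0], [4.0, 3.0], [5.0, 8.0]])   # consumption: household x good
t = np.array([0.10, 0.20])                           # commodity tax rates
dX = np.array([[-2.0, 0.3], [0.3, -4.0]])            # own- and cross-price responses
lam = marginal_cost_of_funds(beta, x, t, dX)
```

With these numbers the first good (consumed relatively more by the poorer, higher-weight household) carries the higher marginal cost, so a revenue-neutral reform would shift taxation toward the second good.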


A Descriptive Model for the Kinetics of Gas Release in Different Types of Pizzas

JOURNAL OF FOOD SCIENCE, Issue 3 2003
M.L. Cabo
ABSTRACT: A model based on typical equations of microbial kinetics is proposed to describe gas release (GR) in ham, tuna, and meat pizzas packaged under different CO2-enriched atmospheres: 20% CO2, 70% CO2, and 70% CO2 plus 500 mg/kg Nisaplin. CO2-enriched atmospheres hardly influenced LAB growth but reduced GR, which points to the importance of yeasts for GR. Nonetheless, LAB also contributed significantly to GR, so Nisaplin also delayed GR. Gas release followed a diauxic pattern, the 2nd stage being presumably related to yeasts shifting from respiratory to fermentative metabolism once oxygen was depleted. However, storing pizzas for longer than 25 to 30 d does not seem appropriate in terms of shelf life, so the model proposed appears adequate for practical purposes. [source]
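A diauxic release curve of the kind described can be represented as the sum of two logistic stages, one per metabolic phase. The functional form and every parameter value below are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

def diauxic_gas_release(t, gmax1, k1, lag1, gmax2, k2, lag2):
    """Two-stage (diauxic) gas release: each stage is a logistic term with its
    own asymptote gmax, rate k and inflection time lag; total release is the sum."""
    stage1 = gmax1 / (1.0 + np.exp(-k1 * (t - lag1)))
    stage2 = gmax2 / (1.0 + np.exp(-k2 * (t - lag2)))
    return stage1 + stage2

t = np.linspace(0.0, 40.0, 9)        # storage time in days
g = diauxic_gas_release(t, gmax1=10.0, k1=0.8, lag1=5.0,
                        gmax2=30.0, k2=0.5, lag2=25.0)
```

The delayed second term mimics the yeasts' switch to fermentative metabolism after oxygen depletion; in practice the six parameters would be fitted to measured headspace-gas data per pizza type and atmosphere.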


Experts, Juries, and Witch-hunts: From Fitzjames Stephen to Angela Cannings

JOURNAL OF LAW AND SOCIETY, Issue 3 2004
Tony Ward
Angela Cannings's successful appeal against her convictions for murder has revived an old controversy about the competence of juries to evaluate expert evidence. In response to criticisms of the jury system in the wake of a series of controversial poisoning trials, the Victorian jurist J.F. Stephen argued that juries were well equipped to decide on behalf of the community which experts should be treated as authorities, whose opinions the lay public could accept for practical purposes as 'beyond reasonable doubt'. Such practical decisions did not, Stephen argued, require that juries fully understand the experts' reasons for their conclusions. This article draws on recent work in social epistemology to argue that Stephen's view of the jury remains tenable, and that his authoritarian arguments can be recast in more democratic terms. It also concurs in Stephen's blunt recognition that the courts' need to make decisions despite the uncertainties of science renders some convictions of the innocent inevitable. [source]


CONSUMER PREFERENCES FOR VISUALLY PRESENTED MEALS

JOURNAL OF SENSORY STUDIES, Issue 2 2009
HANS HENRIK REISFELT
ABSTRACT The aim of the study was to investigate consumers' preferences for variations of a visually presented meal. The study was conducted in three middle-sized Danish towns, including 768 respondents who were presented with a computerized questionnaire that initially displayed four consecutive series of photos. The series each consisted of eight unique photos of randomized food dishes arranged around the center square in a 3 × 3 array. Five meal components, each with two levels, were investigated. One level of each component was used for each photo, in total 2⁵ = 32 combinations. The respondents were asked to select the meal they preferred the most, the second most and the least, respectively. Significant interactions were found between meal components and background variables such as gender, age, geographic variables, purchase store and level of education. The current procedure can be applied to help solve a number of problems involving consumer choices. PRACTICAL APPLICATIONS This study outlines an approach to use visual images for investigations of food. Our results suggest that rather complex food stimuli of great similarity can be used to subdivide consumers based on sociodemographic background variables. We present an efficient, cheap and quick method that provides and captures more information than an ordinary survey that focuses merely on the most preferred option. As a prerequisite for success, stimuli should be well known and appropriately selected. Hence, the present quick method can easily be applied for several practical purposes, such as pretesting, labeling, product flop prevention, and for specific optimization and selection tasks, e.g., convenience meals and institutional meal services in various contexts. The conjoint layout used allows for latent-class-based segmentation. It further allows for estimation at the aggregate as well as the individual level. The current approach is useful for database and/or online implementation. [source]
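The design above is simply the full factorial over five two-level components, giving 32 meal photos; enumerating it is a one-liner. The component names below are hypothetical, since the abstract does not list the actual ones.

```python
from itertools import product

# Hypothetical component names; the abstract does not list the actual ones.
components = {
    "starch":    ["potatoes", "rice"],
    "vegetable": ["carrots", "broccoli"],
    "meat":      ["chicken", "beef"],
    "sauce":     ["brown", "white"],
    "garnish":   ["parsley", "none"],
}

meals = [dict(zip(components, combo)) for combo in product(*components.values())]
assert len(meals) == 2 ** 5   # 32 unique meal photos
```

Each dict describes one photo's recipe; in a conjoint analysis these 32 profiles would then be randomized into the photo series shown to respondents.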


Non-parametric confidence bands in deconvolution density estimation

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 3 2007
Nicolai Bissantz
Summary. Uniform confidence bands for densities f via non-parametric kernel estimates were first constructed by Bickel and Rosenblatt. In this paper this is extended to confidence bands in the deconvolution problem g = f ∗ ψ for an ordinary smooth error density ψ. Under certain regularity conditions, we obtain asymptotic uniform confidence bands based on the asymptotic distribution of the maximal deviation (L∞-distance) between a deconvolution kernel estimator and f. Further consistency of the simple non-parametric bootstrap is proved. For our theoretical developments the bias is simply corrected by choosing an undersmoothing bandwidth. For practical purposes we propose a new data-driven bandwidth selector that is based on heuristic arguments, which aims at minimizing the L∞-distance between the estimator and f. Although not constructed explicitly to undersmooth the estimator, a simulation study reveals that the bandwidth selector suggested performs well in finite samples, in terms of both area and coverage probability of the resulting confidence bands. Finally the methodology is applied to measurements of the metallicity of local F and G dwarf stars. Our results confirm the 'G dwarf problem', i.e. the lack of metal-poor G dwarfs relative to predictions from 'closed box models' of stellar formation. [source]
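The deconvolution kernel estimator underlying such bands can be sketched directly from its Fourier definition: divide the empirical characteristic function by the error characteristic function, cut off frequencies with a sinc-kernel window of bandwidth h, and invert. The sketch below assumes Laplace (ordinary smooth) measurement error; the kernel choice, bandwidth, and all numbers are illustrative, not the paper's data-driven selector.

```python
import numpy as np

def deconv_kde(x_grid, data, h, sigma_err):
    """Deconvolution kernel density estimate for Y = X + eps with Laplace(0, sigma_err)
    error (an 'ordinary smooth' density). Uses the sinc kernel, whose Fourier
    transform is the indicator of [-1, 1], so frequencies are cut at 1/h."""
    t = np.linspace(-1.0 / h, 1.0 / h, 401)
    phi_emp = np.mean(np.exp(1j * t[:, None] * data[None, :]), axis=1)  # empirical c.f.
    phi_err = 1.0 / (1.0 + sigma_err**2 * t**2)       # Laplace characteristic function
    integrand = phi_emp / phi_err                     # divide out the error c.f.
    dt = t[1] - t[0]
    return (np.exp(-1j * np.outer(x_grid, t)) @ integrand).real * dt / (2.0 * np.pi)

rng = np.random.default_rng(0)
x_true = rng.normal(0.0, 1.0, 2000)            # latent sample from a standard normal
y_obs = x_true + rng.laplace(0.0, 0.3, 2000)   # contaminated observations
grid = np.linspace(-3.0, 3.0, 61)
f_hat = deconv_kde(grid, y_obs, h=0.4, sigma_err=0.3)
```

Because the error is only ordinary smooth, its characteristic function decays polynomially, so the division stays numerically benign; for supersmooth errors (e.g., Gaussian) the same division blows up and much larger bandwidths are needed.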


Is the Addition-Fragmentation Step of the RAFT Polymerisation Process Chain Length Dependent?,

MACROMOLECULAR THEORY AND SIMULATIONS, Issue 5 2006
Ekaterina I. Izgorodina
Abstract Summary: The chain length dependence of the addition-fragmentation equilibrium constant (K) for cumyl dithiobenzoate (CDB) mediated polymerisation of styrene has been studied via high level ab initio molecular orbital calculations. The results indicate that chain length and penultimate unit effects are extremely important during the early stages of the polymerisation process. In the case of the attacking radical (i.e., R• in: R• + S=C(Z)SR′ ⇌ RSC•(Z)SR′), the equilibrium constant varies by over three orders of magnitude on extending R• from the styryl unimer to the trimer species and actually increases with chain length, further confirming that K is high in this system. When the reactions of the cumyl leaving group and cyanoisopropyl initiating species, which are also present in CDB-mediated polymerisation of styrene in the presence of the initiator 2,2′-azobis(isobutyronitrile), are also included, the variation in K extends over five orders of magnitude. Although less significant, the influence of the R′ group should also be taken into account in a complete kinetic model of the RAFT process. However, for most practical purposes, its chain length effects beyond the unimer stage may be ignored. These results indicate that current simplified models of the RAFT process, which typically ignore all chain length effects in the R and R′ positions, and all substituent effects in the R′ position, may be inadequate, particularly in modelling the initial stages of the process. [source]


The Inheritance of Chilling Tolerance in Tomato (Lycopersicon spp.)

PLANT BIOLOGY, Issue 2 2005
J. H. Venema
Abstract: During the past 25 years, chilling tolerance of the cultivated (chilling-sensitive) tomato Lycopersicon esculentum and its wild, chilling-tolerant relatives L. peruvianum and L. hirsutum (and, less intensively studied, L. chilense) has been the object of several investigations. The final aim of these studies is to increase the chilling tolerance of the cultivated genotypes. In this review, we focus on low-temperature effects on photosynthesis and the inheritance of these traits to the offspring of various breeding attempts. While crossing L. peruvianum (♂) to L. esculentum (♀) has so far brought the most detailed insight with respect to physiological questions, for practical purposes (e.g., ready crossability), crossing programmes with L. hirsutum as pollen donor at present seem to be a promising way to achieve more chilling-tolerant genotypes of the cultivated tomato. This perspective is due to the progress that has been made with respect to the genetic basis of chilling tolerance of Lycopersicon spp. over the past five years. [source]


A Holistic Simulation Approach from a Measured Load to Element Stress Using Combined Multi-body Simulation and Finite Element Modelling

PROCEEDINGS IN APPLIED MATHEMATICS & MECHANICS, Issue 1 2009
Matthias Harter
The design of vehicle bodies requires knowledge of the vehicle's structural response to external loads and disturbances. In rigid multi-body simulation, the dynamic behaviour of complex systems is calculated with rigid bodies, neglecting body elasticity. In finite element models, on the other hand, large numbers of degrees of freedom are used to represent the elastic properties of a single body. Both simulation methods can be combined if the finite element model is reduced to a number of degrees of freedom feasible for multi-body simulation. Application to practical purposes requires the use and interconnection of several different software tools. In this contribution a holistic method is presented, which starts with the measurement or synthesis of loads and excitations, continues with the integration of a reduced finite element model into a multi-body system and the dynamic response calculation of this combined model, and concludes with the expansion of the results to the full finite element model for calculating strain and stress values at any point of the finite element mesh. The applied software tools are Simpack, Nastran, and Matlab. An example is given with a railway vehicle simulated on measured track geometry. (© 2009 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
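The key step of fitting a finite element model into a multi-body simulation is model reduction. The simplest variant, static (Guyan) condensation onto a set of master degrees of freedom, can be sketched as follows; production workflows typically use richer reductions such as Craig–Bampton, and the spring-chain example is an illustrative assumption.

```python
import numpy as np

def guyan_reduce(K, master):
    """Static (Guyan) condensation onto master DOFs:
    K_red = K_mm - K_ms K_ss^{-1} K_sm (exact for static loads on the masters)."""
    n = K.shape[0]
    master = list(master)
    slave = [i for i in range(n) if i not in set(master)]
    Kmm = K[np.ix_(master, master)]
    Kms = K[np.ix_(master, slave)]
    Ksm = K[np.ix_(slave, master)]
    Kss = K[np.ix_(slave, slave)]
    return Kmm - Kms @ np.linalg.solve(Kss, Ksm)

# Chain of four unit springs fixed at one end, modelled with 4 DOFs.
k = 1.0
K = k * (2.0 * np.eye(4) - np.eye(4, k=1) - np.eye(4, k=-1))
K[3, 3] = k                          # free end carries only one spring
K_red = guyan_reduce(K, master=[3])  # condense the chain onto the tip DOF
```

The reduced 1-by-1 stiffness reproduces the series stiffness k/4 of the four springs; the reverse mapping, expanding results from the masters back to all DOFs, is the "result expansion" step the abstract describes for recovering element stresses.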


Log-rank permutation tests for trend: saddlepoint p-values and survival rate confidence intervals

THE CANADIAN JOURNAL OF STATISTICS, Issue 1 2009
Ehab F. Abd-Elfattah
MSC 2000: Primary 62N03; secondary 62N02 Abstract Suppose p + 1 experimental groups correspond to increasing dose levels of a treatment and all groups are subject to right censoring. In such instances, permutation tests for trend can be performed based on statistics derived from the weighted log-rank class. This article uses saddlepoint methods to determine the mid-p-values for such permutation tests for any test statistic in the weighted log-rank class. Permutation simulations are replaced by analytical saddlepoint computations which provide extremely accurate mid-p-values that are exact for most practical purposes and almost always more accurate than normal approximations. The speed of mid-p-value computation allows for the inversion of such tests to determine confidence intervals for the percentage increase in mean (or median) survival time per unit increase in dosage. The Canadian Journal of Statistics 37: 5-16; 2009 © 2009 Statistical Society of Canada [source]
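The permutation mid-p-value that the saddlepoint method approximates can be written down directly for a small example: permute the scores, recompute the trend statistic, and average the strict and tied exceedance probabilities. The Monte Carlo sketch below stands in for the analytical saddlepoint computation; the scores and dose coding are invented for illustration.

```python
import numpy as np

def perm_midp_trend(scores, doses, n_perm=20000, seed=1):
    """Monte Carlo permutation mid-p-value for an upper-tailed trend test with
    statistic T = sum_i dose_i * score_i:  mid-p = P(T* > T) + 0.5 * P(T* = T)."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    doses = np.asarray(doses, dtype=float)
    t_obs = float(doses @ scores)
    perms = np.array([doses @ rng.permutation(scores) for _ in range(n_perm)])
    return float(np.mean(perms > t_obs) + 0.5 * np.mean(np.isclose(perms, t_obs)))

# Hypothetical weighted log-rank-type scores for 9 subjects in 3 dose groups.
scores = [-0.9, -0.5, -0.4, -0.1, 0.2, 0.3, 0.4, 0.5, 0.5]
doses = [0, 0, 0, 1, 1, 1, 2, 2, 2]
midp = perm_midp_trend(scores, doses)   # small: scores rise with dose
```

The article's contribution is to replace the resampling loop above with a saddlepoint approximation to the permutation distribution, which is fast enough to invert the test for confidence intervals.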


Exact Confidence Bounds Following Adaptive Group Sequential Tests

BIOMETRICS, Issue 2 2009
Werner Brannath
Summary We provide a method for obtaining confidence intervals, point estimates, and p-values for the primary effect size parameter at the end of a two-arm group sequential clinical trial in which adaptive changes have been implemented along the way. The method is based on applying the adaptive hypothesis testing procedure of Müller and Schäfer (2001, Biometrics 57, 886–891) to a sequence of dual tests derived from the stage-wise adjusted confidence interval of Tsiatis, Rosner, and Mehta (1984, Biometrics 40, 797–803). In the nonadaptive setting this confidence interval is known to provide exact coverage. In the adaptive setting exact coverage is guaranteed provided the adaptation takes place at the penultimate stage. In general, however, all that can be claimed theoretically is that the coverage is guaranteed to be conservative. Nevertheless, extensive simulation experiments, supported by an empirical characterization of the conditional error function, demonstrate convincingly that for all practical purposes the coverage is exact and the point estimate is median unbiased. No procedure has previously been available for producing confidence intervals and point estimates with these desirable properties in an adaptive group sequential setting. The methodology is illustrated by an application to a clinical trial of deep brain stimulation for Parkinson's disease. [source]


Flow Structures of a Liquid Film Falling on Horizontal Tubes

CHEMICAL ENGINEERING & TECHNOLOGY (CET), Issue 6 2005
J. Mitrovic
Abstract Patterns of a liquid film falling across a vertical array of horizontal tubes change from a droplet mode at low flow rates to a liquid sheet at high flow rates. Between these limits, liquid columns form as a further stable flow pattern. The transition from one flow mode to another occurs via unstable structures consisting simultaneously of droplets and columns or of merging columns. The boundaries of the flow modes can be obtained from relationships expressing the flow rate as a function of physical properties, that is, the Reynolds number as a function of the Kapitza number. Correlations for the pattern boundaries recommended in the literature are compared with each other and found to be in acceptable agreement for practical purposes. [source]
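The governing dimensionless groups can be computed directly. The sketch below uses one common convention for the film Reynolds and Kapitza numbers; definitions (including the factor of 4 in Re) vary across the literature, so treat these forms and the property values as assumptions rather than the paper's exact correlations.

```python
def film_numbers(gamma, rho, mu, sigma, g=9.81):
    """Film Reynolds number Re = 4*Gamma/mu and Kapitza number
    Ka = sigma / (rho * g**(1/3) * nu**(4/3)), with nu = mu/rho.
    gamma: mass flow rate per unit tube length on one side [kg/(m s)]."""
    nu = mu / rho
    re = 4.0 * gamma / mu
    ka = sigma / (rho * g ** (1.0 / 3.0) * nu ** (4.0 / 3.0))
    return re, ka

# Water at about 20 degrees C (illustrative property values).
re, ka = film_numbers(gamma=0.05, rho=998.0, mu=1.0e-3, sigma=0.0727)
```

A mode-boundary correlation of the type the abstract compares would then take the form Re = a·Ka^b, with the constants a and b fitted per transition (droplet to column, column to sheet).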


Development of chromatic adaptation transforms and concept for their classification

COLOR RESEARCH & APPLICATION, Issue 3 2006
Yoshinobu Nayatani
Abstract Three types of international recommendations are necessary on CATs (chromatic adaptation transforms). CAT-Type I and CAT-Type II are for general use on chromatic adaptation studies. The former is related to chromatic adaptation theory and the latter to performance on field trial data. In addition, CAT-Type III is necessary for a specific and practical purposes. The need for classifying to CAT-Type I and CAT-Type II is found from a careful inspection of the development process from Nayatani et al. transform to BFD transform, referring to the Ph. D. thesis by Lam (University of Bradford, 1985). The process clearly shows two types of flows on the development of various CATs. One is the flow for deepening the theory of chromatic adaptation (CAT-Type I), and the other is for giving good performance to existing field trial data and also ease of use (CAT-Type II). Additional CAT-Type III is, for example, CAT recommended in CIE TC 8-04 technical report. The CAT is only applicable to compare hardcopy and softcopy images for the specified observing conditions in the report. Still, a difficult problem, determination of corresponding colors, remains in the method of subjective estimation, which is useful and widely used for estimating chromatic adaptation effects experimentally. © 2006 Wiley Periodicals, Inc. Col Res Appl, 31, 205,217, 2006; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/col.20210 [source]