By Step (by + step)


Kinds of By Step

  • step by step

  • Selected Abstracts

    Teaching image processing: A two-step process

    Clarence Han-Wei Yapp
    Abstract An interactive program for teaching digital image processing techniques is presented in this article. Instead of heavy programming tasks and mathematical functions, students are led step by step through the exercises and then allowed to experiment. This article evaluates the proposed program and compares it with existing techniques. © 2008 Wiley Periodicals, Inc. Comput Appl Eng Educ 16: 211–222, 2008; Published online in Wiley InterScience; DOI 10.1002/cae.20149 [source]

    Casualty occurrence mechanism in the collapse of timber-frame houses during an earthquake

    Junji Kiyono
    Abstract The collapse of timber-frame houses during an earthquake was analyzed by the 2-dimensional (2D) and 3-dimensional (3D) distinct element methods (DEM). The DEM is a numerical analysis technique in which the positions of elements are calculated by solving their equations of motion step by step. Both individual and group behavior can be simulated. The structure is modeled as an assembly of distinct elements connected by virtual springs and dashpots where elements come into contact. A timber-frame house with simple structural elements (beams, columns, floors, and a roof) was modeled. Injury to human bodies was also considered. Human bodies modeled as circles (2D) or rectangular parallelepipeds (3D) were placed on the floors. The maximum impact acceleration on the human body during an earthquake was calculated. Injury to humans in houses was assessed by the Chest-G index and the Head Injury Criterion (HIC), both widely used in automobile engineering. Copyright © 2004 John Wiley & Sons, Ltd. [source]
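
A hedged 1D sketch can illustrate the DEM idea in the abstract above: element positions are advanced by integrating the equations of motion step by step, with a virtual spring and dashpot acting only while elements overlap. The masses, stiffness, damping, radii and time step below are invented for illustration; the study itself used full 2D/3D models.

```python
# Hypothetical 1D distinct-element sketch: contact forces from a virtual
# spring + dashpot, equations of motion integrated step by step.

def dem_step(x, v, m, k, c, radius, dt):
    """Advance positions x and velocities v of all elements by one time step."""
    n = len(x)
    f = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):
            gap = abs(x[j] - x[i]) - 2 * radius
            if gap < 0:  # overlap: contact force from spring + dashpot
                direction = 1.0 if x[j] > x[i] else -1.0
                rel_v = (v[j] - v[i]) * direction
                fn = -k * gap - c * rel_v  # repulsion plus viscous damping
                f[i] -= fn * direction
                f[j] += fn * direction
    for i in range(n):  # semi-implicit Euler integration
        v[i] += f[i] / m[i] * dt
        x[i] += v[i] * dt
    return x, v

# two approaching elements collide and rebound
x, v = [0.0, 0.25], [1.0, -1.0]
for _ in range(2000):
    x, v = dem_step(x, v, m=[1.0, 1.0], k=1e4, c=5.0, radius=0.1, dt=1e-4)
```

After the loop the two elements have collided and separated again, which is the qualitative behavior the spring-dashpot contact law is meant to produce.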

    Greek Monetary Economics in Retrospect: The Adventures of the Drachma

    ECONOMIC NOTES, Issue 3 2005
    Sophia Lazaretou
    This paper enumerates the adventures of the drachma step by step, dividing its story into seven parts. Specifically, its main purpose is to present some historical perspective on the behaviour of the monetary and fiscal policies pursued in Greece during the period from the early 1830s until the introduction of the euro. For Greece, the lessons of historical experience are very important. Since the formation of the modern Greek state, government officials have striven – sometimes at great effort – to keep abreast of international monetary developments. This was because they understood that the participation of a peripheral, poor and inflation-prone country with a weak currency and an underdeveloped money market, like Greece of the time, in a monetary club of powerful economies could improve her international credit standing and bring important benefits in terms of exchange-rate and monetary stability and long-term foreign borrowing. [source]

    Trace eyeblink conditioning in decerebrate guinea pigs

    Sadaharu Kotani
    Abstract We investigated trace eyeblink conditioning in decerebrate guinea pigs to elucidate the possible role of the cerebellum and brainstem in this hippocampus-dependent task. A 350-ms tone conditioned stimulus was paired with a 100-ms periorbital shock unconditioned stimulus with a trace interval of 0, 100, 250 or 500 ms. Decerebrate animals readily acquired the conditioned response with a trace interval of 0 or 100 ms. Even in the paradigm with a 500-ms trace interval, which is known to depend critically on the hippocampus in all animal species examined, the decerebrate guinea pigs acquired the conditioned response, which showed adaptive timing just as in the paradigms with shorter trace intervals. However, learning took many more trials in the 500-ms trace paradigm than in the shorter trace interval paradigms, and the conditioned response expression was unstable from trial to trial. When decerebrate animals were conditioned step by step with trace intervals of 100, 250 and 500 ms, sequentially, they easily acquired the adaptive conditioned response to a 500-ms trace interval. However, the frequency of conditioned responses decreased after the trace interval was shifted from 250 ms to 500 ms, which was not observed after the shift from 100 ms to 250 ms. These results suggest that the cerebellum and brainstem could maintain the 'trace' of the conditioned stimulus and associate it with the unconditioned stimulus even in the 500-ms trace paradigm, but that the forebrain might be required for facilitating and stabilizing the association. [source]

    Finite-element analysis of a combined fine-blanking and extrusion process

    P. F. Zheng
    Abstract This paper presents the characteristics of the combined fine-blanking and extrusion process and gives a detailed analysis of the process with the finite-element method. To carry out the simulation step by step and avoid the tendency of the calculations to diverge, remeshing, tracing and golden section methods were developed and introduced into the finite-element program. Different boundary conditions were used in the simulation; the mesh distortion, the field of material flow, and the stress and strain distributions were obtained. From the simulated results, the deformation characteristics under different boundary conditions were revealed. An experiment was also carried out to verify the simulated results. A large-strain analysis technique was chosen to determine the effective strain distribution based on the experiment. The effective strain distributions from the simulation are in accordance with the effective strain and hardness distributions from the experiment. Copyright © 2005 John Wiley & Sons, Ltd. [source]
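
Of the helper methods the abstract lists, the golden section method has a compact generic form. The sketch below is the standard golden-section line search on a made-up objective, assuming the paper uses it in this classic bracketing role (e.g. for step-size control); it is not the authors' FE code.

```python
# Generic golden-section search on a unimodal function: the bracket [a, b]
# is narrowed step by step by the golden ratio. f(c) and f(d) are
# re-evaluated each iteration for clarity rather than cached.
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Minimize unimodal f on [a, b] by golden-section search."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c              # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d              # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

# illustrative objective with its minimum at x = 2
x_min = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```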

    Performance of finite volume solutions to the shallow water equations with shock-capturing schemes

    K. S. Erduran
    Abstract Numerical methods have become well established as tools for solving problems in hydraulic engineering. In recent years the finite volume method (FVM) with shock-capturing capabilities has come to the fore because of its suitability for modelling a variety of types of flow – subcritical and supercritical, steady and unsteady, continuous and discontinuous – and its ability to handle complex topography easily. This paper is an assessment and comparison of the performance of finite volume solutions to the shallow water equations with the Riemann solvers: Osher, HLL, HLLC, flux difference splitting (Roe) and flux vector splitting. The implementation of the FVM, including the Riemann solvers, slope limiters and the methods used for achieving second order accuracy, is described explicitly step by step. The performance of the numerical methods has been investigated by applying them to a number of examples from the literature, providing both a comparison of the schemes with each other and with published results. The assessment of each method is based on five criteria: ease of implementation, accuracy, applicability, numerical stability and simulation time. Finally, results, discussion, conclusions and recommendations for further work are presented. Copyright © 2002 John Wiley & Sons, Ltd. [source]
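
As a concrete illustration of one of the Riemann solvers compared above, the HLL flux for the 1D shallow water equations can be sketched as follows. This is the generic textbook form with the usual min/max wave-speed estimates, not the paper's implementation; the still-water test state is illustrative.

```python
# HLL approximate Riemann solver for the 1D shallow water equations.
# Conserved variables U = (h, hu); physical flux F = (hu, hu^2 + g h^2 / 2).
import math

G = 9.81  # gravitational acceleration

def swe_flux(h, hu):
    u = hu / h
    return (hu, hu * u + 0.5 * G * h * h)

def hll_flux(hL, huL, hR, huR):
    uL, uR = huL / hL, huR / hR
    cL, cR = math.sqrt(G * hL), math.sqrt(G * hR)
    sL = min(uL - cL, uR - cR)  # leftmost wave-speed estimate
    sR = max(uL + cL, uR + cR)  # rightmost wave-speed estimate
    fL, fR = swe_flux(hL, huL), swe_flux(hR, huR)
    if sL >= 0:
        return fL
    if sR <= 0:
        return fR
    # intermediate (star) region flux
    return tuple((sR * fl - sL * fr + sL * sR * (ur - ul)) / (sR - sL)
                 for fl, fr, ul, ur in zip(fL, fR, (hL, huL), (hR, huR)))

# symmetric still-water state: the numerical mass flux must vanish
f = hll_flux(1.0, 0.0, 1.0, 0.0)
```

For equal still-water states the mass flux is exactly zero and the momentum flux reduces to the hydrostatic term g h²/2, a quick sanity check on any shallow-water flux function.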

    Polynomial control: past, present, and future

    Vladimír Kučera
    Abstract Polynomial techniques have made important contributions to systems and control theory. Engineers in industry often find polynomial and frequency domain methods easier to use than state equation-based techniques. Control theorists show that results obtained in isolation using either approach are in fact closely related. Polynomial system description provides input–output models for linear systems with rational transfer functions. These models display two important system properties, namely poles and zeros, in a transparent manner. A performance specification in terms of polynomials is natural in many situations; see pole allocation techniques. A specific control system design technique, called the polynomial equation approach, was developed in the 1960s and 1970s. The distinguishing feature of this technique is a reduction of controller synthesis to a solution of linear polynomial equations of a specific (Diophantine or Bézout) type. In most cases, control systems are designed to be stable and to meet additional specifications, such as optimality and robustness. It is therefore natural to design the systems step by step: stabilization first, then the additional specifications, one at a time. For this it is obviously necessary to have any and all solutions of the current step available before proceeding any further. This motivates the need for a parametrization of all controllers that stabilize a given plant. In fact this result has become a key tool for the sequential design paradigm. The additional specifications are met by selecting an appropriate parameter. This is simple, systematic, and transparent. However, the strategy suffers from an excessive growth of the controller order. This article is a guided tour through polynomial control system design. The origins of the parametrization of stabilizing controllers, called the Youla–Kučera parametrization, are explained. 
Standard results on reference tracking, disturbance elimination, pole placement, deadbeat control, H2 control, l1 control and robust stabilization are summarized. New and exciting applications of the Youla–Kučera parametrization are then discussed: stabilization subject to input constraints, output overshoot reduction, and fixed-order stabilizing controller design. Copyright © 2006 John Wiley & Sons, Ltd. [source]
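
The reduction of controller synthesis to a linear polynomial (Diophantine) equation described above can be made concrete with a small sketch. Assuming a hypothetical plant b(s)/a(s) = 1/(s² + s) and a desired closed-loop polynomial c(s) = (s + 2)³ (both invented for illustration), the controller polynomials x, y in a(s)x(s) + b(s)y(s) = c(s) follow from a Sylvester-style linear system:

```python
# Pole placement via a polynomial Diophantine equation a*x + b*y = c,
# solved as a linear system in the unknown controller coefficients.
# Polynomial coefficient vectors are highest-degree-first (numpy convention).
import numpy as np

def conv_matrix(p, n_cols):
    """Toeplitz matrix M such that M @ q == np.polymul(p, q)."""
    m = np.zeros((len(p) + n_cols - 1, n_cols))
    for j in range(n_cols):
        m[j:j + len(p), j] = p
    return m

def solve_diophantine(a, b, c, deg_x, deg_y):
    """Solve a*x + b*y = c for polynomials x (degree deg_x) and y (deg_y)."""
    n = len(c)
    A = np.zeros((n, deg_x + deg_y + 2))
    ma = conv_matrix(a, deg_x + 1)  # contribution of a*x
    mb = conv_matrix(b, deg_y + 1)  # contribution of b*y
    A[n - ma.shape[0]:, :deg_x + 1] = ma  # align low degrees at the bottom
    A[n - mb.shape[0]:, deg_x + 1:] = mb
    sol, *_ = np.linalg.lstsq(A, c, rcond=None)
    return sol[:deg_x + 1], sol[deg_x + 1:]

# hypothetical plant 1/(s^2 + s); place all closed-loop poles at s = -2
a = np.array([1.0, 1.0, 0.0])        # s^2 + s
b = np.array([1.0])
c = np.array([1.0, 6.0, 12.0, 8.0])  # (s + 2)^3
x, y = solve_diophantine(a, b, c, deg_x=1, deg_y=1)
# controller: y(s)/x(s) = (7s + 8)/(s + 5)
```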

    Survey of quantitative feedback theory (QFT)

    Isaac Horowitz
    QFT is an engineering design theory devoted to the practical design of feedback control systems. The foundation of QFT is that feedback is needed in control only when plant (P), parameter and/or disturbance (D) uncertainties (sets 𝒫 = {P}, 𝒟 = {D}) exceed the acceptable (A) system performance uncertainty (set 𝒜 = {A}). The principal properties of QFT are as follows. (1) The amount of feedback needed is tuned to the (𝒫, 𝒟, 𝒜) sets. If 𝒜 'exceeds' (𝒫, 𝒟), feedback is not needed at all. (2) The simplest modelling is used: (a) command, disturbance and sensor noise inputs, and (b) the available sensing points and the defined outputs. No special controllability test is needed in either linear or non-linear plants; it is inherent in the design procedure. There is no observability problem because uncertainty is included. The number of independent sensors determines the number of independent loop transmissions (Li), the functions which provide the benefits of feedback. (3) The simplest mathematical tools have been found most useful – primarily frequency response. The uncertainties are expressed as sets in the complex plane. The need for the larger 𝒫, 𝒟 sets to be squeezed into the smaller 𝒜 set results in bounds on the Li(jω) in the complex plane. In the more complex systems a key problem is the division of the 'feedback burden' among the available Li(jω). Point-by-point frequency synthesis tremendously simplifies this problem. This is also true for highly uncertain non-linear and time-varying plants, which are converted into rigorously equivalent linear time invariant plant sets and/or disturbance sets with respect to the acceptable output set 𝒜. Fixed point theory justifies the equivalence. (4) Design trade-offs are highly transparent in the frequency domain: between design complexity and cost of feedback (primarily bandwidth), sensor noise levels, plant saturation levels, number of sensors needed, relative sizes of 𝒫, 𝒟 and cost of feedback. 
The designer sees the trade-offs between these factors as he proceeds and can decide according to their relative importance in his particular situation. QFT design techniques with these properties have been developed step by step for: (i) highly uncertain linear time invariant (LTI) SISO single- and multiple-loop systems, MIMO single-loop matrix and multiple-loop matrix systems; and (ii) non-linear and time-varying SISO and MIMO plants, and to a more limited extent for plants with distributed control inputs and sensors. QFT has also been developed for single- and multiple-loop dithered non-linear (adaptive) systems with LTI plants, and for a special class (FORE) of non-linear compensation. New techniques have been found for handling non-minimum-phase (NMP) MIMO plants, plants with both zeros and poles in the right half-plane and LTI plants with incidental hard non-linearities such as saturation. [source]

    Movable Peace: Engaging the Transnational in Cambodia's Dhammayietra

    Kathryn Poethig
    The Dhammayietra is an annual peace walk in Cambodia that originated in the historic repatriation of refugees from the Thai border camps at the U.N.-monitored transition to democracy in 1992. It situates itself within the discourse and practice of "socially engaged Buddhism" that has gained visibility in Asian and American Buddhism during the last two decades. As Cambodia's particular form of socially engaged Buddhism is marked by refugee return, I will argue that the Dhammayietra's revival of Buddhism in postsocialist Cambodia is only possible because of its transnational formation. Though represented as a quintessentially Khmer Buddhist response to Cambodia's entrenched conflicts, the networks forged beyond the border of Cambodia have been instrumental in fashioning the face of the Dhammayietra. Though it forges its discursive identity vis-à-vis the "local" space of the nation, this local space is mobile. Maha Ghosananda's instruction to move "step by step" toward peace reappropriates dangerous mobility – the massive relocations during the Khmer Rouge era, refugee flight, the danger of treading on land fed with mines – and turns walking into a religious act. It is this discursive "move" that loosens the Dhammayietra's ties to the nation and allows it to slip across political and religious borders and ally itself with a diverse network of interfaith peace groups that are its transnational public forum. [source]

    An executive's guide to SOX audits

    Jack W. Paul
    Auditors aren't the only ones who need a sharp understanding of internal control audits. Savvy CEOs and other executives also need this knowledge – to save time, money, and even professional careers – because, as most readers know, CEOs and CFOs must now certify the veracity of their company's financial statements under threat of heavy criminal penalties. So, how thorough is your knowledge of internal control audits? Let the author of this article lead you step by step through the process, as he offers advice on avoiding pitfalls and points out possible warning signs. © 2010 Wiley Periodicals, Inc. [source]

    Adjusted Scaling of FDG Positron Emission Tomography Images for Statistical Evaluation in Patients With Suspected Alzheimer's Disease

    Ralph Buchert PhD
    ABSTRACT Background and Purpose. Statistical parametric mapping (SPM) has gained increasing acceptance for the voxel-based statistical evaluation of brain positron emission tomography (PET) with the glucose analog 2-[18F]-fluoro-2-deoxy-d-glucose (FDG) in patients with suspected Alzheimer's disease (AD). To increase the sensitivity for detection of local changes, individual differences in total brain FDG uptake are usually compensated for by proportional scaling. However, in cases of extensive hypometabolic areas, proportional scaling overestimates scaled uptake. This may cause significant underestimation of the extent of hypometabolic areas by the statistical test. Methods. To detect this problem, the authors tested for hypermetabolism. In patients with no visual evidence of true focal hypermetabolism, significant clusters of hypermetabolism in the presence of extended hypometabolism were interpreted as false-positive findings, indicating relevant overestimation of scaled uptake. In this case, scaled uptake was reduced step by step until there were no more significant clusters of hypermetabolism. Results. In 22 consecutive patients with suspected AD, proportional scaling resulted in relevant overestimation of scaled uptake in 9 patients. Scaled uptake had to be reduced by 11.1% ± 5.3% in these cases to eliminate the artifacts. Adjusted scaling resulted in extension of existing and appearance of new clusters of hypometabolism. The total volume of the additional voxels with significant hypometabolism depended linearly on the extent of the additional scaling and was 202 ± 118 mL on average. Conclusions. Adjusted scaling helps to identify characteristic metabolic patterns in patients with suspected AD. It is expected to increase the specificity of FDG-PET in this group of patients. [source]
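
The step-by-step reduction described in the Methods section can be sketched abstractly. Everything here is a hypothetical stand-in: `has_hyper_clusters` represents the SPM cluster-level test for hypermetabolism, and the step size and safety limit are invented.

```python
# Adjusted-scaling loop: while the test still reports significant
# hypermetabolic clusters (treated as false positives in the presence of
# extended hypometabolism), lower the global scaling factor one step at a time.

def adjust_scaling(scaled_uptake, has_hyper_clusters, step=0.01, max_reduction=0.30):
    """Return the factor by which scaled uptake must be reduced."""
    factor = 1.0
    while has_hyper_clusters([v * factor for v in scaled_uptake]):
        factor -= step
        if factor < 1.0 - max_reduction:
            raise RuntimeError("reduction limit reached; inspect the data manually")
    return factor

# toy stand-in test: flag 'hypermetabolism' when any voxel exceeds 1.10
toy_test = lambda img: max(img) > 1.10
factor = adjust_scaling([1.0, 1.2, 0.6], toy_test)  # ends near 0.91
```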

    Comparative and functional morphology of the buccal cavity of Diplogastrina (Nematoda) and a first outline of the phylogeny of this taxon*

    A. Fürst Von Lieven
    The Diplogastrina include about 290 species of free-living nematodes. Traditional classifications of this taxon are not based upon hypotheses of phylogenetic relationships. The highly variable structures of the buccal cavity were examined in 21 species using light microscopy and SEM. The function of the stomatal structures was studied with the aid of video recordings of living worms. The morphological data were used to reconstruct a first outline of the phylogenetic relationships of the Diplogastrina. A rhabditoid gymnostomatal tube which is longer than wide, a short stegostom and a small dorsal tooth, as in Pseudodiplogasteroides, belong to the stem species pattern of Diplogastrina. Diplogastrina with a 'Rhabditis'-like gymnostomatal tube feed on bacteria and small fungal spores. A short and broad gymnostom, as well as a right subventral tooth which forms a functional unit with the dorsal tooth, were acquired step by step in the ancestral line leading to Mononchoides and Tylopharynx. The cuticularized cheilostom was divided into six plates connected by pliable regions twice independently within the Diplogastrina. The teeth-bearing posterior part of the buccal capsule can move forwards by pushing apart the plates of the cheilostom, so that the teeth can come into contact with food items that are too big to be sucked into the buccal cavity. Diplogastrina with a divided cheilostom can feed not only on bacteria, but also on larger fungal spores, ciliates or other nematodes. Tylopharynx is specialized to rip apart the cell wall of fungal hyphae with the movements of a dorsal and a subventral tooth in order to suck out the contents of the fungus. This shows that the transformation of the buccal cavity in Diplogastrina is linked with an expansion of ecological niches. [source]

    Random Computer Generation of 3D Molecular Structures: Theoretical and Statistical Analysis

    Alain Porquet
    Abstract Summary: A computer program has been developed to generate three-dimensional molecular structures randomly from a given collection of elementary chemical functional groups: the so-called fragment database. The gradual assembly of the various fragments present in the database is performed according to a "self-generation algorithm" (SGA). The latter is based on the covalent binding, step by step, between the unoccupied electronic valencies associated with the fragments of the database, and those of the growing molecular structure. When the number of electronic valencies of the molecular structure is zero, the growth process for this particular molecule is completed. It is shown that SGA is able to reproduce the asymmetric mass distributions of some natural colloids, like humic substances. In this article, particular attention is given to the analysis of the relationship existing between the fragment composition of the database and that of the collection of molecules generated. Mathematical expressions are derived and discussed, to understand the relationship between the proportions of the different types of fragments and the final composition of the generated molecular ensembles. For that purpose, a "pathway" formalism is proposed to describe exhaustively the whole set of generated molecules by specifying the distribution function of all of the fragments therein integrated. A statistical analysis that satisfactorily reproduces the predictions of the pathway model is extensively discussed. Example of a three-dimensional structure obtained with the "self-generation algorithm" (SGA). [source]
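
The valency-driven growth loop at the heart of the SGA can be sketched as follows. The fragment names, valence counts and stopping rule are all invented for illustration; the point is just the mechanism the abstract describes: each new covalent bond consumes one free valency on each side, and growth ends when the molecule has no unoccupied valencies left.

```python
# Simplified self-generation loop: track only the count of open valencies,
# ignoring which specific site each new fragment binds to.
import random

# hypothetical fragment database: free valencies per fragment type
FRAGMENTS = {"CH3": 1, "CH2": 2, "CH": 3, "OH": 1}

def generate_molecule(rng, max_fragments=50):
    """Grow one molecule fragment by fragment; None if growth does not terminate."""
    first = rng.choice(list(FRAGMENTS))
    molecule, open_valencies = [first], FRAGMENTS[first]
    while open_valencies > 0 and len(molecule) < max_fragments:
        frag = rng.choice(list(FRAGMENTS))
        molecule.append(frag)
        open_valencies += FRAGMENTS[frag] - 2  # one bond: one valency used per side
    return molecule if open_valencies == 0 else None

rng = random.Random(42)
ensemble = [m for m in (generate_molecule(rng) for _ in range(200)) if m is not None]
sizes = sorted(len(m) for m in ensemble)  # asymmetric: many small, few large molecules
```

Because the valency count performs a random walk with negative drift under this database, most molecules terminate quickly, producing the asymmetric, small-molecule-heavy mass distribution the abstract alludes to.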

    Global regular solutions to Cahn–Hilliard system coupled with viscoelasticity

    Irena Pawłow
    Abstract In this paper we prove the existence and uniqueness of a global-in-time, regular solution to the Cahn–Hilliard system coupled with viscoelasticity. The system arises as a model, regularized by a viscous damping, of the phase separation process in a binary deformable alloy quenched below a critical temperature. The key tools in the analysis are estimates of absorbing type with the property of an exponentially time-decreasing contribution of the initial data. Such estimates make it possible not only to prolong the solution step by step onto the infinite time interval but also to conclude the existence of an absorbing set. Copyright © 2009 John Wiley & Sons, Ltd. [source]

    Positioning the learning asset portfolio as a key component in an organization's enterprise risk management strategy

    Peter J. McAliney
    This article presents a process for valuing a portfolio of learning assets that mirrors the processes line executives across industries use to value traditional business assets. Embedded within the context of enterprise risk management, this strategic asset allocation process is presented step by step, giving readers the operational considerations needed to implement the program within their organization and enhance performance improvement. At the individual initiative level, readers will recognize elements used in developing retrospective returns on investment (ROIs) for learning programs. [source]

    Applicability of electrical resistivity tomography monitoring to coarse blocky and ice-rich permafrost landforms

    C. Hilbich
    Abstract The inversion and interpretation of electrical resistivity tomography (ERT) data from coarse blocky and ice-rich permafrost sites are challenging due to strong resistivity contrasts and high contact resistances. To assess temporal changes during ERT monitoring (ERTM), the corresponding inversion artefacts have to be separated from true subsurface changes. Appraisal techniques – including synthetic modelling, the depth-of-investigation index technique and the so-called resolution matrix approach – serve to analyse an ERTM data set from a rockglacier. The application of these methods led step by step to the identification of unreliable model regions and thus to an improvement in the interpretation of temporal resistivity changes. An important result is that resistivity values in model regions with strong resistivity contrasts and highly resistive features are generally of critical reliability, and resistivity changes within or below the ice core of a rockglacier should therefore not be interpreted as a permafrost signal. Conversely, long-term degradation phenomena, in terms of warming of massive ground ice at the permafrost table, are detectable by ERTM. Copyright © 2009 John Wiley & Sons, Ltd. [source]

    FE simulation of InGaN QD formation at the edge of threading dislocation in GaN

    Abstract The stress-induced diffusion process of In-Ga segregation in an InxGa1-xN layer deposited on GaN is simulated step by step using a 3D nonlinear FE method. From the thermodynamical point of view this process is governed by the driving force induced by the gradient of residual stresses operating in an anisotropic nonlinear elastic structure. The source of stresses we consider is the set of threading dislocations examined in the plane-view HRTEM investigation of a GaN layer deposited on sapphire. [source]

    Protein composition of oil bodies from mature Brassica napus seeds

    Pascale Jolivet
    Abstract Seed oil bodies (OBs) are intracellular particles storing lipids as food or biofuel reserves in oleaginous plants. Since Brassica napus OBs can easily be contaminated with protein bodies and/or myrosin cells, they must be purified step by step using a flotation technique in order to remove non-specifically trapped proteins. An exhaustive description of the protein composition of rapeseed OBs from two double-zero varieties was achieved by a combination of proteomic and genomic tools. Genomic analysis led to the identification of sequences coding for the major seed oil body proteins, including 19 oleosins, 5 steroleosins and 9 caleosins. Most of these proteins were also identified through proteomic analysis and displayed a high level of sequence conservation with their Arabidopsis thaliana counterparts. Two rapeseed oleosin orthologs appeared acetylated on their N-terminal alanine residue, and both caleosins and steroleosins displayed a low level of phosphorylation. [source]

    Divide et impera: optimizing compartmental models of neurons step by step

    Arnd Roth
    No abstract is available for this article. [source]

    Cytokines, implantation and early abortion: re-examining the Th1/Th2 paradigm leads to question the single pathway, single therapy concept

    Gérard Chaouat
    Problem: Human in vitro fertilization (IVF) embryo transfer is accompanied by a low implantation rate even after a very successful IVF, and a certain number of 'idiopathic sterilities' are due to repeated implantation failures. In the same vein, the question of improving implantation rates is of prime importance in agricultural research to improve the management of livestock. Pre-implantation prenatal diagnosis cannot be accomplished in individuals who have a high rate of implantation failure, whether women undergoing IVF or animals during genetic cloning. Implantation cytokine networks need to be known in such a perspective. Methods: We review the evolution and theories in reproductive immunology, briefly deal with the complexity of implantation as a step by step developmental event, and then present some of our recent data in mice and humans. Conclusions: We conclude that the T helper cell type 1/2 (Th1/Th2) paradigm, as useful as it has been to explain pregnancy, is no longer sufficient in view of the emerging complexity of the cytokine network at the materno-fetal interface. This is particularly true for implantation, which, as a step by step developmentally regulated process involving inflammatory molecules, cannot fit into such a scheme. [source]

    Gene activation cascade triggered by a single photoperiodic cycle inducing flowering in Sinapis alba

    THE PLANT JOURNAL, Issue 6 2009
    Maria D'Aloia
    Summary Molecular genetic analyses in Arabidopsis disclosed a genetic pathway whereby flowering is induced by the photoperiod. This cascade is examined here within the time course of floral transition in the long-day (LD) plant Sinapis alba induced by a single photoperiodic cycle. In addition to previously available sequences, the cloning of CONSTANS (SaCO) and FLOWERING LOCUS T (SaFT) homologues allowed expression analyses to be performed to follow the flowering process step by step. A diurnal rhythm in SaCO expression in the leaves was observed and transcripts of SaFT were detected when light was given in phase with SaCO kinetics only. This occurred when day length was extended or when a short day was shifted towards a 'photophile phase'. The steady-state level of SaFT transcripts in the various physiological situations examined was found to correlate like a rheostat with floral induction strength. Kinetics of SaFT activation were also consistent with previous estimations of translocation of florigen out of leaves, which could actually occur after the inductive cycle. In response to one 22-h LD, initiation of floral meristems by the shoot apical meristem (SAM) started about 2 days after activation of SaFT and was marked by expression of APETALA1 (SaAP1). Meanwhile, LEAFY (SaLFY) was first up-regulated in leaf primordia and in the SAM. FRUITFULL (SaFUL) was later activated in the whole SAM but excluded from floral meristems. These patterns are integrated with previous observations concerning upregulation of SUPPRESSOR OF OVEREXPRESSION OF CO1 (SaSOC1) to provide a temporal and spatial map of floral transition in Sinapis. [source]

    Controller design based on similar skew-symmetric structure for nonlinear plants

    Yicheng Liu
    Abstract This paper proposes a novel controller design approach for nonlinear plants. A class of stable nonlinear systems with a similar skew-symmetric structure is chosen as the objective closed-loop system, and two design methods are proposed, based on backstepping and on direct construction. Compared with the conventional backstepping method, the proposed backstepping method does not need to construct a Lyapunov function step by step, so the design procedure is simplified. The direct construction method can be applied to some nonlinear plants for which conventional backstepping is not feasible, and the design can be accomplished in only one step. Furthermore, for some nonlinear plants which have a lower triangular structure with two subsystems, simpler controllers can be derived by the proposed direct construction method than by backstepping design. In addition, the proposed methods are both system-structure oriented, and therefore their designs are more intuitive than the conventional backstepping design. Two controllers are derived for satellite attitude control by employing the proposed methods; simulation results demonstrate their effectiveness. Copyright © 2009 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society [source]
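
For contrast with the proposed methods, the conventional step-by-step backstepping construction that the paper simplifies can be sketched on a toy strict-feedback (lower triangular) plant. The plant x1' = x1² + x2, x2' = u, the gains and the initial state are invented for illustration; this is the textbook design, not the paper's similar-skew-symmetric controller.

```python
# Backstepping for x1' = x1^2 + x2, x2' = u: a virtual control alpha(x1)
# stabilizes the x1-subsystem, then u drives x2 toward alpha. In the
# coordinates (x1, z) with z = x2 - alpha the closed loop satisfies
# x1' = -K1*x1 + z and z' = -x1 - K2*z, which is exponentially stable.

K1, K2 = 2.0, 2.0

def backstepping_u(x1, x2):
    alpha = -x1**2 - K1 * x1                 # virtual control stabilizing x1
    z = x2 - alpha                           # deviation of x2 from its target
    dalpha = (-2 * x1 - K1) * (x1**2 + x2)   # d(alpha)/dt along trajectories
    return -x1 - K2 * z + dalpha             # yields z' = -x1 - K2*z

def simulate(x1, x2, dt=1e-3, steps=10000):
    """Forward-Euler simulation of the closed loop."""
    for _ in range(steps):
        u = backstepping_u(x1, x2)
        x1, x2 = x1 + (x1**2 + x2) * dt, x2 + u * dt
    return x1, x2

x1, x2 = simulate(0.5, -0.5)  # both states decay toward the origin
```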

    Strategies in sentential reasoning

    Jean-Baptiste Van der Henst
    Abstract Four experiments examined the strategies that individuals develop in sentential reasoning. They led to the discovery of five different strategies. According to the theory proposed in the paper, each of the strategies depends on component tactics, which all normal adults possess, and which are based on mental models. Reasoners vary their use of tactics in ways that have no deterministic account. This variation leads different individuals to assemble different strategies, which include the construction of incremental diagrams corresponding to mental models, and the pursuit of the consequences of a single model step by step. Moreover, the difficulty of a problem (i.e., the number of mental models required by the premises) predisposes reasoners towards certain strategies. Likewise, the sentential connectives in the premises also bias reasoners towards certain strategies, e.g., conditional premises tend to elicit reasoning step by step whereas disjunctive premises tend to elicit incremental diagrams. [source]