Selected Abstracts

Identifying young people who drink too much: the clinical utility of the five-item Alcohol Use Disorders Identification Test (AUDIT)
DRUG AND ALCOHOL REVIEW, Issue 1 2001
HELEN MILES, Researcher

Abstract: The current study investigated the patterns and consequences of alcohol use among young people and their perceptions of the associated health risk, and explored the clinical utility of the five-item version of the Alcohol Use Disorders Identification Test (AUDIT) in screening young people for hazardous drinking. A cross-sectional sample of 393 young people aged 16-19 years was accessed through two tertiary colleges in South London; participants self-completed an anonymous, confidential questionnaire recording the five-item AUDIT, patterns of alcohol consumption, hazardous consequences and perception of the associated health risk. Over 90% of the sample reported drinking alcohol regularly, commonly excessive weekend use with related physical, psychological and social consequences. A significant minority (20.4% of males, 18.0% of females) reported consumption of alcohol in excess of UK recommended limits, while about a third (34.2% of males, 30.2% of females) reported scores in the 'hazardous' range of the five-item AUDIT. However, the majority had little perception of the associated health risk, perceiving their use to be 'light' and unproblematic. Only one in 10 of those drinking at 'hazardous' levels recognized their alcohol use as problematic, most believing the hazardous consequences of this use were acceptable. Self-reported patterns of alcohol consumption (except age first used) and the total number of psychological and social hazardous consequences were found to significantly predict AUDIT scores in a linear regression analysis. The five-item AUDIT therefore appears to have predictive validity, reflecting self-reported alcohol consumption, perception of the associated health risk and hazardous consequences among young people.
It is concluded that the five-item AUDIT may consequently have clinical utility as a simple screening tool (suitable for use by a variety of professionals in contact with young people) for the identification of hazardous alcohol consumption in this population. [source]

Volume Reduction Surgery for End-Stage Ischemic Heart Disease
ECHOCARDIOGRAPHY, Issue 7 2002
Takahiro Shiota, M.D.

Abstract: The Dor procedure, or infarction excision surgery, was first used in 1984. It is a surgical treatment option for patients with end-stage ischemic heart failure. In a recently published multicenter study that included a total of 439 patients, average ejection fraction increased from 29 ± 10% to 39 ± 12% after surgery. In our experience, the overall survival rate 18 months after surgery is 89%, and the perioperative mortality rate is 6.6%. These results are similar to previous reports from Dor's group and confirm the value of the surgery. Echocardiography, including intraoperative transesophageal echocardiography, plays an important role in clarifying cardiac anatomy, absolute left ventricular (LV) volumes, ejection fraction, and mitral regurgitation in patients with ischemic heart failure undergoing this surgery. With the development of ultrasound and computer technology, three-dimensional echocardiography may be preferred when evaluating the surgical results, including determination of absolute LV volumes. Communication between experienced cardiac surgeons and echocardiographers in the operating room is essential for successful outcomes and reliable evaluation of the surgery. [source]

Prediction of municipal solid waste generation with combination of support vector machine and principal component analysis: A case study of Mashhad
ENVIRONMENTAL PROGRESS & SUSTAINABLE ENERGY, Issue 2 2009
R. Noori

Abstract: Quantity prediction of municipal solid waste (MSW) is crucial for designing and programming a municipal solid waste management system (MSWMS).
Because of the effect of various parameters on MSW quantity and its high fluctuation, predicting generated MSW is a difficult task that can lead to large errors. The work presented here involves developing an improved support vector machine (SVM) model, which combines the principal component analysis (PCA) technique with the SVM to forecast the weekly generated waste of Mashhad city. In this study, the PCA technique was first used to reduce and orthogonalize the original input variables (data). These treated data were then used as new input variables in the SVM model. The improved model was evaluated using weekly time series of waste generation (WG) and the number of trucks that carry waste in week t. These data were collected from 2005 to 2008. By comparing the predicted WG with the observed data, the effectiveness of the proposed model was verified. Therefore, in the authors' opinion, the model presented in this article is a potential tool for predicting WG and has advantages over the traditional SVM model. © 2008 American Institute of Chemical Engineers Environ Prog, 2009 [source]

A methodology for simulating power system vulnerability to cascading failures in the steady state
EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 8 2008
Murali Kumbale

Abstract: Simulations of power system conditions and event sequences leading to either local or widespread blackout have acquired increasing importance following the wide-impact network collapses that have occurred over the past several years in North America and Europe. This paper summarizes an analytical framework that has been developed, implemented, and practically used by Southern Company to evaluate system vulnerability to cascading failures using a steady-state model. The methodology was first used by Southern Company in 1999. The studies performed at Southern Company have already influenced and motivated certain transmission development projects.
Future improvements to the method could include better modeling and sequencing of cascading steps, including the time sequence of failures using equipment response times. Significant interest exists in developing preventive methods and procedures and in applying this technology in the infrastructure security arena. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Molecular mechanism of DNA replication-coupled inactivation of the initiator protein in Escherichia coli: interaction of DnaA with the sliding clamp-loaded DNA and the sliding clamp-Hda complex
GENES TO CELLS, Issue 6 2004
Masayuki Su'etsugu

Abstract: In Escherichia coli, the ATP-DnaA protein initiates chromosomal replication. After the DNA polymerase III holoenzyme is loaded onto DNA, DnaA-bound ATP is hydrolysed in a manner depending on the Hda protein and the DNA-loaded form of the DNA polymerase III sliding clamp subunit, which yields ADP-DnaA, a form inactive for initiation. This regulatory DnaA inactivation represses extra initiation events. In this study, in vitro replication intermediates and structured DNA mimicking replication intermediates were first used to identify the structural prerequisites for DnaA-ATP hydrolysis. Unlike duplex DNA loaded with sliding clamps, primer RNA-DNA heteroduplexes loaded with clamps did not support DnaA-ATP hydrolysis, and duplex DNA provided in trans did not rescue this defect. Duplex DNA of at least 40 bp is competent for DnaA-ATP hydrolysis when a single clamp is loaded. DnaA-ATP hydrolysis was inhibited when ATP-DnaA was tightly bound to a DnaA box-bearing oligonucleotide. These results imply that DnaA-ATP hydrolysis involves the direct interaction of ATP-DnaA with duplex DNA flanking the sliding clamp. Furthermore, the Hda protein formed a stable complex with the sliding clamp. Based on these results, we suggest a mechanistic basis for DnaA inactivation in which ATP-DnaA interacts with the Hda-clamp complex with the aid of DNA binding.
[source]

Evidence for prolonged clinical benefit from initial combination antiretroviral therapy: Delta extended follow-up
HIV MEDICINE, Issue 3 2001
Delta Coordinating Committee

Background: The findings from therapeutic trials in HIV infection with surrogate endpoints based on laboratory markers are only partially relevant for clinical decisions on treatment. Although the collection of clinical follow-up data from such a trial would be relatively straightforward, this rarely occurs. An important reason for this may be the perception that such data have little value because the number of participants remaining on their originally allocated therapy has usually fallen substantially. Methods: Delta was an international, multicentre trial in which 3207 HIV-infected individuals were randomly allocated to treatment with zidovudine (ZDV) alone, ZDV combined with didanosine (ddI), or ZDV combined with zalcitabine (ddC). Although the trial closed in September 1995, information on vital status, AIDS events, treatment changes and CD4 counts was still collected every 12 months until at least March 1997. This has allowed analyses of the longer-term clinical effect of treatment. Results: The median follow-up to date of death or last known vital status was 43 months (10th percentile 18 months; 90th percentile 55 months). The proportion of participants remaining on their allocated treatment fell steadily over time; by 4 years after trial entry, 3% remained on ZDV, 20% on ZDV + ddI and 21% on ZDV + ddC. Changes mainly involved the stopping, addition or switching of nucleoside reverse transcriptase inhibitors (NRTIs). There was little use of protease inhibitors (PIs) or non-nucleoside reverse transcriptase inhibitors (NNRTIs) before the third year of the trial. Between the third and fourth years, regimens included a drug from one of these classes for approximately 17% of person-time in all treatment groups.
Relative to ZDV monotherapy, the beneficial effects of combination therapy on mortality and disease progression rates increased significantly with time since randomization. The maximum effects on mortality were observed between 2 and 3 years, with a 48% reduction for ZDV + ddI and a 26% reduction for ZDV + ddC. These rates were observed when the originally allocated treatment was received 42% and 47% of the time in the ZDV + ddI and ZDV + ddC groups, respectively. The mean CD4 count remained significantly higher (approximately 50 cells/µL) in the combination therapy groups 4 years after randomization, suggesting a projection of clinical benefit beyond this time point. Conclusions: The sustained clinical effect of the initial allocation to combination therapy, particularly ZDV + ddI, was remarkable in light of the convergence of drug regimens actually received across the three treatment groups. Interpretation of this finding is not straightforward. One possible explanation is that the effectiveness of ddI and ddC is diminished if first used later in infection or with greater prior exposure to ZDV, although the data do not clearly support either hypothesis. This analysis highlights the value of long-term clinical follow-up of therapeutic trials in HIV infection, which should be considered in the planning of all new studies. [source]

Mapping the time course of nonconscious and conscious perception of fear: An integration of central and peripheral measures
HUMAN BRAIN MAPPING, Issue 2 2004
Leanne M. Williams

Abstract: Neuroimaging studies using backward masking suggest that conscious and nonconscious responses to complex signals of fear (facial expressions) occur via parallel cortical and subcortical circuits. Little is known, however, about the temporal differentiation of these responses. Psychophysics procedures were first used to determine objective thresholds for both nonconscious detection (face vs. blank screen) and discrimination (fear vs.
neutral face) in a backward masking paradigm. Event-related potentials (ERPs) were then recorded (n = 20) using these thresholds. Ten blocks of masked fear and neutral faces were presented under each threshold condition. Simultaneously recorded skin conductance responses (SCRs) provided an independent index of stimulus perception. Fear stimuli evoked faster SCR rise times than neutral stimuli across all conditions, indicating that emotional content influenced responses regardless of awareness. In the first 400 msec of processing, ERPs dissociated the time course of conscious (enhanced N4 component) from nonconscious (enhanced N2 component) perception of fear, relative to neutral. Nonconscious detection of fear also elicited relatively faster P1 responses within 100 msec post-stimulus. The N2 may provide a temporal correlate of the initial sensory processing of salient facial configurations, which is enhanced when top-down cortical feedback is precluded. By contrast, the N4 may index the conscious integration of emotion stimuli in working memory, subserved by greater cortical engagement. Hum. Brain Mapping 21:64-74, 2004. © 2003 Wiley-Liss, Inc. [source]

A collocated, iterative fractional-step method for incompressible large eddy simulation
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 4 2008
Giridhar Jothiprasad

Abstract: Fractional-step methods are commonly used for the time-accurate solution of the incompressible Navier-Stokes (NS) equations. In this paper, a popular fractional-step method that uses pressure corrections in the projection step, together with its iterative variants, is investigated using block-matrix analysis, and an improved algorithm with reduced computational cost is developed. Since the governing equations for large eddy simulation (LES) using linear eddy-viscosity-based sub-grid models are similar in form to the incompressible NS equations, the improved algorithm is implemented in a parallel LES solver.
A collocated grid layout is preferred for ease of extension to curvilinear grids. The analyzed fractional-step methods are viewed as an iterative approximation to a temporally second-order discretization. At each iteration, a linear system that has an easier block-LU decomposition than the original system is inverted. In order to improve numerical efficiency and parallel performance, modified ADI sub-iterations are used in the velocity step of each iteration. Block-matrix analysis is first used to determine the number of iterations required to reduce the iterative error to the level of the discretization error. Next, the computational cost is reduced through the use of a reduced stencil for the pressure Poisson equation (PPE). Energy-conserving, spatially fourth-order discretizations result in a 7-point stencil in each direction for the PPE. A smaller 5-point stencil is achieved by using a second-order spatial discretization for the pressure gradient operator correcting the volume fluxes. This is shown not to reduce the spatial accuracy of the scheme, and a fourth-order continuity equation is still satisfied to machine precision. The above results are verified in three flow problems, including LES of a temporal mixing layer. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Modelling the dynamics of log-domain circuits
INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 1 2007
Alon Ascoli

Abstract: Log-domain filters are an intriguing form of externally linear, internally nonlinear current-mode circuits, in which a compression stage is first used to convert the input currents to the logarithmic domain, analogue processing is then carried out on the resulting voltages, and finally input-output linearity is restored by mapping the output voltages to current form through an expansion stage. The compressing and expanding operations confer on log-domain filters a number of desirable features, but they may be responsible for the loss of external linearity.
In this paper, sufficient conditions for the external linearity of log-domain LC-ladders are established, and the local nature of this external linearity is highlighted. Certain log-domain LC-ladders employing floating capacitors may exhibit externally nonlinear behaviour even for zero input and very small initial conditions. We show how transistor parasitic capacitances are central to the emergence of this behaviour and must be incorporated in the circuit model. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Empirical orthogonal functions and related techniques in atmospheric science: A review
INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 9 2007
A. Hannachi

Abstract: Climate and weather constitute a typical example where high-dimensional and complex phenomena meet. The atmospheric system is the result of highly complex interactions between many degrees of freedom, or modes. In order to gain insight into the dynamical/physical behaviour involved, it is useful to attempt to understand these interactions in terms of a much smaller number of prominent modes of variability. This has led atmospheric researchers to develop methods that give a space display and a time display of large space-time atmospheric data. Empirical orthogonal functions (EOFs) were first used in meteorology in the late 1940s. The method, which decomposes a space-time field into spatial patterns and associated time indices, contributed much to advancing our knowledge of the atmosphere. However, since the atmosphere contains all sorts of features, e.g. stationary and propagating, EOFs are unable to provide a full picture. For example, EOFs tend, in general, to be difficult to interpret because of their geometric properties, such as their global character and their orthogonality in space and time. To obtain more localised features, modifications, e.g. rotated EOFs (REOFs), have been introduced.
At the same time, because these methods cannot deal with propagating features, since they use only the spatial correlation of the field, it was necessary to use both spatial and temporal information in order to identify such features. Extended and complex EOFs were introduced to serve that purpose. Because of the importance of EOFs and closely related methods in atmospheric science, and because the existing reviews of the subject are slightly out of date, there seems to be a need to update our knowledge by including new developments that could not be presented in previous reviews. This review proposes to achieve precisely this goal. The basic theory of the main types of EOFs is reviewed, and a wide range of applications using various data sets is also provided. Copyright © 2007 Royal Meteorological Society [source]

Consumer attitudes and acceptance of genetically modified organisms in Korea
INTERNATIONAL JOURNAL OF CONSUMER STUDIES, Issue 3 2003
Hyochung Kim

Abstract: The term "genetically modified organisms" (GMOs) was first used to designate microorganisms that had had genes from other species transferred into their genetic material by the then-new techniques of 'gene-splicing'. Cultivation of GMOs has so far been most widespread in the production of soybeans and maize. The United States holds almost three-fourths of the total crop area devoted to GMOs. Because many crops are imported from the US, there is a large possibility that consumers in Korea will ingest GMO products. However, the safety of GMOs is not scientifically settled at this time. Additionally, research regarding consumer views on GMOs has rarely been conducted in Korea. This study therefore focused on consumer attitudes about GMOs and willingness to purchase them. The data were collected from 506 adults living in Seoul, Daegu and Busan, Korea, by means of a self-administered questionnaire. Frequencies and chi-square tests were conducted in SPSS. The results of the survey were as follows.
First, consumer concern about GMOs was high but recognition was low; many respondents answered that they did not have exact information about GMOs, although they had heard about them. Second, almost 93% of the respondents desired the labelling of GMOs. Third, the level of acceptance of GMOs was high; two-thirds of the respondents indicated that they were willing to buy GMOs. Finally, many respondents worried about the safety of GMOs, in that 73% of the respondents primarily wanted to be informed about the safety of GMOs. This study suggests that consumer education about GMOs should be conducted through mass media and consumer protection organisations. [source]

Evidence that thalidomide modifies the immune response of patients suffering from actinic prurigo
INTERNATIONAL JOURNAL OF DERMATOLOGY, Issue 12 2004
Iris Estrada-G, PhD

Background: Actinic prurigo (AP) is a photodermatosis with a restricted ethnic distribution, mainly affecting Mestizo women (of mixed Indian and European ancestry). The lesions are polymorphic and include macules, papules, crusts, hyperpigmentation and lichenification. Thalidomide, an effective immunomodulatory drug, was first used successfully to treat AP in 1973. In this work we describe the effect that thalidomide had on TNF-α serum levels and on IL-4- and IFN-gamma (IFN-γ)-producing lymphocytes of actinic prurigo (AP) patients. Methods: Actinic prurigo patients were analyzed before and after thalidomide treatment. The percentage of IL-4+ or IFN-γ+ CD3+ lymphocytes was analyzed in eight of them by flow cytometry. TNF-α in sera was measured by ELISA in 11 patients. Results: A direct correlation was observed between resolution of AP lesions and an increase in IFN-γ+ CD3+ peripheral blood mononuclear cells (P < 0.001), together with a decrease in TNF-α serum levels (not statistically significant). No IL-4+ CD3+ cells were detected.
Conclusions: Our findings confirm that AP is a disease with an immunological component and that thalidomide's clinical efficacy is exerted not only through inhibition of TNF-α synthesis, but also through modulation of IFN-γ-producing CD3+ cells. These cells could be used as clinical markers for recovery. [source]

SAR imaging using multidimensional continuous wavelet transform and applications to polarimetry and interferometry
INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 5 2004
E. Colin

Abstract: The usual SAR imaging process makes the assumption that reflectors are isotropic and white (i.e., they behave in the same way regardless of the angle from which they are viewed and of the emitted frequency within the bandwidth). The multidimensional continuous wavelet transform (CWT) in radar imaging was initially developed to highlight the image degradations due to these assumptions. In this article the wavelet transform method is extended to the polarimetry and interferometry fields. The wavelet tool is first used for polarimetric image enhancement, and then for coherence optimization in interferometry. This optimization by wavelets, compared with the polarimetric one, gives better results on the coherence level. Finally, a combination of both methods is proposed. © 2005 Wiley Periodicals Inc. Int J Imaging Syst Technol, 14:206-212, 2004; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20025 [source]

The Novacor Left Ventricular Assist System: Clinical Experience from the Novacor Registry
JOURNAL OF CARDIAC SURGERY, Issue 4 2001
F. Dagenais

Abstract: The electrically powered Novacor left ventricular assist system (LVAS) was first used clinically as a bridge to transplant in 1984. The configuration has evolved to the current wearable model, used clinically for the first time in 1993. In 1998, the inflow conduit was modified, reducing embolic events by 50%.
Over 1100 implants have been performed worldwide, with cumulative support greater than 300 patient-years and only 0.7% of devices requiring replacement. The Novacor is a safe and effective device for bridge to transplant, bridge to recovery, or potentially permanent implantation, with reliable long-term support for periods as long as 4 years. [source]

Bridge to Transplant with the HeartMate Device
JOURNAL OF CARDIAC SURGERY, Issue 4 2001
William Piccione Jr.

Abstract: The incidence and prevalence of chronic heart failure continue to increase, with an estimated 400,000 new cases per year in the United States. Cardiac transplantation is an effective therapy but is severely limited, to approximately 2300 patients per year, by the donor shortage. With ever-increasing waiting times, a significant number of patients become severely debilitated or die prior to transplantation. A mechanical circulatory support device was first used as a "bridge to transplantation" in 1969. Since then, mechanical devices have increased tremendously in reliability and efficacy. The HeartMate left ventricular assist device (LVAD) has been utilized extensively in a bridge-to-transplant application with excellent results. Patients refractory to aggressive medical management can be sustained reliably until transplantation. In addition, bridging allows for the correction of physiologic and metabolic derangements often seen in these severely ill patients prior to transplantation. Nutritional, economic, and quality-of-life issues also favor earlier LVAD placement in refractory patients. This report summarizes the overall bridging experience with the HeartMate LVAD and focuses on our experience with this device at Rush-Presbyterian-St. Luke's Medical Center. [source]

Identification of proteins directly from tissue: in situ tryptic digestions coupled with imaging mass spectrometry
JOURNAL OF MASS SPECTROMETRY (INCORP BIOLOGICAL MASS SPECTROMETRY), Issue 2 2007
M. Reid Groseclose

Abstract: A novel method for on-tissue identification of proteins in spatially discrete regions is described, using tryptic digestion followed by matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS) with MS/MS analysis. IMS is first used to reveal the protein and peptide spatial distribution in a tissue section; a serial section is then robotically spotted with small volumes of trypsin solution to carry out in situ protease digestion. After hydrolysis, 2,5-dihydroxybenzoic acid (DHB) matrix solution is applied to the digested spots, with subsequent analysis by IMS to reveal the spatial distribution of the various tryptic fragments. Sequence determination of the tryptic fragments is performed using on-tissue MALDI MS/MS analysis directly from the individual digest spots. This protocol enables protein identification directly from tissue while preserving the spatial integrity of the tissue sample. The procedure is demonstrated with the identification of several proteins in coronal sections of a rat brain. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Pore-scale simulations of unsteady flow and heat transfer in tubular fixed beds
AICHE JOURNAL, Issue 4 2009
P. Magnico

Abstract: A small tube-to-particle-diameter ratio induces radial heterogeneity in tubular fixed beds on the particle scale. In this complex topology, theoretical models fail to predict wall-to-fluid heat transfer. To be more realistic, a deterministic Bennett method is first used to synthesize two packings with tube-to-sphere-diameter ratios of 5.96 and 7.8, containing 236 and 620 spheres, respectively. In a second step, unsteady velocity and temperature fields are computed by CFD. In the range of Reynolds numbers between 80 and 160, the hydrodynamic results are validated against experimental data. The thermal disequilibrium in the near-wall region is described in detail.
Several pseudo-homogeneous models are compared with the numerical simulations. The radial and axial temperature profiles show clear agreement with the model of Schlünder's research group and the model of Martin and Nilles. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]

Formation of monodisperse microbubbles in a microfluidic device
AICHE JOURNAL, Issue 6 2006
J. H. Xu

Abstract: The crossflowing rupture technique was first used in a microfluidic device to prepare microbubbles, and it successfully produced monodisperse microbubbles with polydispersity index values of <2%. The parameters affecting the microbubble-formation process, such as the two-phase flow rates, continuous-phase viscosity, surface tension, and surfactants, were investigated. The microbubble-formation mechanism of the crossflowing rupture technique was also compared with those of the flow-focusing rupture and geometry-dominated breakup techniques. It was found that bubble size decreased with increasing continuous-phase flow rate and viscosity, while being independent of surface tension. The species of surfactant also influenced the microbubble-formation process. Moreover, the bubble-formation mechanism of the crossflowing rupture technique differed from those of hydrodynamic flow focusing and geometry-dominated breakup. The microbubble-formation process using the crossflowing rupture technique is controllable. © 2006 American Institute of Chemical Engineers AIChE J, 2006 [source]

An unsupervised classification method of uterine electromyography signals: Classification for detection of preterm deliveries
JOURNAL OF OBSTETRICS AND GYNAECOLOGY RESEARCH (ELECTRONIC), Issue 1 2009
M. O. Diab

Abstract: Aim: This article proposes an unsupervised classification method that can be applied to the electromyography signal of uterine contractions for the detection of preterm birth.
Methods: The frequency content of the electromyography signal changes from one woman to another and during pregnancy, so wavelet decomposition is first used to extract the parameters of each contraction, and an unsupervised statistical classification method based on Fisher's test is used to classify the events. A principal component analysis projection is then used as evidence of the groups resulting from this classification. Another classification method, based on a competitive neural network, is also applied to the same signals, and the two methods are compared. Results: The results show that uterine contractions may be classified into independent groups according to their frequency content and according to term (either at recording or at delivery). [source]

Reverse ATRP process of acrylonitrile in the presence of ionic liquids
JOURNAL OF POLYMER SCIENCE (IN TWO SECTIONS), Issue 8 2008
Chen Hou

Abstract: An ionic liquid, 1-butyl-3-methylimidazolium tetrafluoroborate ([C4mim][BF4]), was first used as the solvent in azobisisobutyronitrile (AIBN)-initiated reverse atom transfer radical polymerization (RATRP) of acrylonitrile with FeCl3/succinic acid (SA) as the catalyst system. The polymerization in [C4mim][BF4] proceeded in a well-controlled manner, as evidenced by kinetic studies. Compared with polymerization in bulk, polymerization in [C4mim][BF4] not only showed the best control of molecular weight and its distribution but also provided a rather rapid reaction rate at a ratio of 200:1:2:4. The apparent activation energies of polymerization in [C4mim][BF4] and in bulk were calculated to be 48.2 and 55.7 kJ mol−1, respectively. The polyacrylonitrile obtained was successfully used as a macroinitiator for chain-extension polymerization in [C4mim][BF4] via a conventional ATRP process. [C4mim][BF4] and the catalyst system could be easily recycled and reused after simple purification and had no effect on the living nature of the polymerization.
© 2008 Wiley Periodicals, Inc. J Polym Sci Part A: Polym Chem 46:2701-2707, 2008 [source]

Preparation of a monolithic column for weak cation exchange chromatography and its application in the separation of biopolymers
JOURNAL OF SEPARATION SCIENCE, JSS, Issue 1 2006
Yinmao Wei

Abstract: A procedure for the preparation of a monolithic column for weak cation exchange chromatography is presented. The structure of the monolithic column was evaluated by mercury intrusion. The hydrodynamic and chromatographic properties of the monolithic column, such as back pressure at different flow rates, the effect of pH on protein retention, dynamic loading capacity, recovery, and stability, were determined under conditions typical for ion-exchange chromatography. The prepared monolithic column can be used over a relatively broad pH range, from 4.0 to 12.0, and exhibited excellent separation of five proteins at flow rates of both 1.0 and 8.0 mL/min. In addition, the prepared column was first used in the purification and simultaneous renaturation of recombinant human interferon gamma (rhIFN-γ) from an extract solution containing 7.0 mol/L guanidine hydrochloride. In a single chromatographic step, the purified rhIFN-γ reached a purity of 93% and a specific bioactivity of 7.8×10⁷ IU/mg. [source]

Diffraction-based automated crystal centering
JOURNAL OF SYNCHROTRON RADIATION, Issue 2 2007
Jinhu Song

Abstract: A fully automated procedure for detecting and centering protein crystals in the X-ray beam of a macromolecular crystallography beamline has been developed. A cryo-loop centering routine that analyzes video images with an edge detection algorithm is first used to determine the dimensions of the loop holding the sample; low-dose X-rays are then used to record diffraction images in a grid over the edge and face plane of the loop. A three-dimensional profile of the crystal, based on the number of diffraction spots in each image, is constructed.
The derived center of mass is then used to align the crystal to the X-ray beam. Typical samples can be accurately aligned in ~2–3 min. Because the procedure is based on the number of 'good' spots as determined by the program Spotfinder, the best diffracting part of the crystal is aligned to the X-ray beam. [source] STORMFLOW SIMULATION USING A GEOGRAPHICAL INFORMATION SYSTEM WITH A DISTRIBUTED APPROACH JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 4 2001 Zhongbo Yu ABSTRACT: With the increasing availability of digital and remotely sensed data such as land use, soil texture, and digital elevation models (DEMs), geographic information systems (GIS) have become an indispensable tool in preprocessing data sets for watershed hydrologic modeling and post-processing simulation results. However, model inputs and outputs must be transferred between the model and the GIS. These transfers can be greatly simplified by incorporating the model itself into the GIS environment. To this end, a simple hydrologic model, which incorporates the curve number method of rainfall-runoff partitioning, the ground-water base-flow routine, and the Muskingum flow routing procedure, was implemented in the GIS. The model interfaces directly with stream network, flow direction, and watershed boundary data generated using standard GIS terrain analysis tools, and while the model is running, the various data layers may be viewed at each time step using the full display capabilities. The terrain analysis tools were first used to delineate the drainage basins and stream networks for the Susquehanna River. Then the model was used to simulate the hydrologic response of the Upper West Branch of the Susquehanna to two different storms. The simulated streamflow hydrographs compare well with the observed hydrographs at the basin outlet. [source] Skeletal tissue engineering using embryonic stem cells JOURNAL OF TISSUE ENGINEERING AND REGENERATIVE MEDICINE, Issue 3 2010 Jojanneke M. Jukes
Abstract Various cell types have been investigated as candidate cell sources for cartilage and bone tissue engineering. In this review, we focus on chondrogenic and osteogenic differentiation of mouse and human embryonic stem cells (ESCs) and their potential in cartilage and bone tissue engineering. A decade ago, mouse ESCs were first used as a model to study cartilage and bone development, and essential genes, factors and conditions for chondrogenesis and osteogenesis were unravelled. This knowledge, combined with data from the differentiation of adult stem cells, led to successful chondrogenic and osteogenic differentiation of mouse ESCs and, later, human ESCs. Next, researchers focused on the use of ESCs for skeletal tissue engineering. Cartilage and bone tissue was formed in vivo using ESCs. However, the amount, homogeneity and stability of the cartilage and bone formed were still insufficient for clinical application. The current protocols require improvement not only in differentiation efficiency but also in overcoming ESC-specific hurdles, such as tumourigenicity and immunorejection. In addition, some of the general tissue engineering challenges, such as cell seeding and nutrient limitation in larger constructs, will also apply to ESCs. In conclusion, there are still many challenges, but there is potential for ESCs in skeletal tissue engineering. Copyright © 2009 John Wiley & Sons, Ltd. [source] The Principal Components of Growth in the Less Developed Countries KYKLOS INTERNATIONAL REVIEW OF SOCIAL SCIENCES, Issue 4 2008 Derek Headey SUMMARY This paper re-examines the international evidence on the sources of growth in less developed countries (LDCs) using exploratory factor analysis (EFA).
Although EFA was first used in the development context by Adelman and Morris (1967), it has rarely been used since, despite being ideally suited to a context in which a large number of latent factors have been hypothesized to determine growth, and in which an even greater number of imperfectly measured and multicollinear proxies have been used to measure these latent factors. This paper uses EFA to minimize the problems of omitted-variable bias, multicollinearity and measurement error by reducing a large array of hypothesized growth determinants into a parsimonious and non-collinear set of composite indices. The paper then provides theoretical interpretations of the derived indices, tests their statistical significance and quantitative importance in otherwise conventional growth regressions, and uses these results to reappraise the usefulness of cross-country empirics in deriving robust, policy-relevant knowledge of the principal components of growth in LDCs, including the so-called 'economic miracles'. [source] The Aspect Hypothesis: Development of Morphology and Appropriateness of Use LANGUAGE LEARNING, Issue 2 2006 Llorenç Comajoan According to the aspect hypothesis (Andersen & Shirai, 1996; Bardovi-Harlig, 2000), perfective morphology emerges before imperfective morphology; it is first used in telic predicates (achievements and accomplishments) and later extends to atelic predicates (activities and states). The opposite development is hypothesized for imperfective morphology. This study investigates the emergence of preterite and imperfect morphology in Catalan to examine whether the aspectual characteristics of predicates can account for the emergence of morphology and for its appropriate use. Past verbal forms in narratives produced by three multilingual learners of Catalan as a foreign language were coded for appropriateness of use, morphology, and lexical aspect.
An aspectual analysis of the data provided support for the aspect hypothesis, because achievement and accomplishment predicates were in general inflected for preterite morphology more frequently than were activity and state predicates, and the opposite was found for the emergence of imperfect morphology. The aspectual trends, however, varied for individual learners, tasks, and developmental stages. An analysis of the appropriate use of preterite and imperfect forms showed that morphology was used appropriately in almost all contexts. Prototypical combinations of morphology and aspect tended to be used more appropriately than nonprototypical combinations, as supported by other studies (Cadierno, 2000; Camps, 2002; Giacalone-Ramat, 2002). [source] Cluster analysis of BOLD fMRI time series in tumors to study the heterogeneity of hemodynamic response to treatment MAGNETIC RESONANCE IN MEDICINE, Issue 6 2003 Christine Baudelet Abstract BOLD-contrast functional MRI (fMRI) has been used to assess the evolution of tumor oxygenation and blood flow after treatment. The aim of this study was to evaluate K-means-based cluster analysis as an exploratory, data-driven method. The advantage of this approach is that it can be used to extract information without the need for prior knowledge concerning the hemodynamic response function. Two data sets were acquired to illustrate different types of BOLD fMRI response inside tumors: the first set following a respiratory challenge with carbogen, and the second after pharmacological modulation of tumor blood flow using flunarizine. To improve the efficiency of the clustering, a power density spectrum analysis was first used to isolate voxels for which signal changes did not originate from noise or linear drift. The technique presented here can be used to assess hemodynamic response to treatment, and especially to display areas of the tumor with heterogeneous responses. Magn Reson Med 49:985–990, 2003. © 2003 Wiley-Liss, Inc.
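The two-stage procedure described in the fMRI abstract above, spectral screening of voxel time series followed by K-means clustering of the survivors, can be sketched as follows. The synthetic response shape, the 0.4 concentration threshold, and the farthest-point initialization are illustrative assumptions, not the paper's actual criteria or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic voxel time series: 40 pure-noise voxels followed by two groups
# of 20 "responding" voxels with opposite (made-up) hemodynamic responses.
n_t = 64
t = np.arange(n_t)
response = np.sin(2 * np.pi * t / 32.0)
noise_voxels = rng.normal(0.0, 1.0, (40, n_t))
up = response + rng.normal(0.0, 0.3, (20, n_t))
down = -response + rng.normal(0.0, 0.3, (20, n_t))
voxels = np.vstack([noise_voxels, up, down])

# Step 1: power-spectrum screening. Keep voxels whose power is concentrated
# in a single frequency bin; white noise spreads its power over all bins.
centered = voxels - voxels.mean(axis=1, keepdims=True)
spectra = np.abs(np.fft.rfft(centered, axis=1)) ** 2
concentration = spectra.max(axis=1) / spectra.sum(axis=1)
selected = voxels[concentration > 0.4]   # 0.4 is an arbitrary demo threshold

# Step 2: K-means (k = 2) on the surviving time series, with farthest-point
# initialization so the demo is deterministic.
def kmeans(data, k=2, iters=25):
    centers = [data[0]]
    while len(centers) < k:
        d = np.min([((data - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(data[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        dist = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):      # keep old center if a cluster empties
                centers[j] = data[labels == j].mean(axis=0)
    return labels

labels = kmeans(selected)
```

With this setup the screening discards the 40 noise voxels and the clustering separates the two opposite-response groups, which is the heterogeneity-mapping idea the abstract describes.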
[source] Modeling the operation of multireservoir systems using decomposition and stochastic dynamic programming NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 3 2006 T.W. Archibald Abstract Stochastic dynamic programming models are attractive for multireservoir control problems because they allow non-linear features to be incorporated and changes in hydrological conditions to be modeled as Markov processes. However, with the exception of the simplest cases, these models are computationally intractable because of the high dimension of the state and action spaces involved. This paper proposes a new method of determining an operating policy for a multireservoir control problem that uses stochastic dynamic programming, but is practical for systems with many reservoirs. Decomposition is first used to reduce the problem to a number of independent subproblems. Each subproblem is formulated as a low-dimensional stochastic dynamic program and solved to determine the operating policy for one of the reservoirs in the system. © 2006 Wiley Periodicals, Inc. Naval Research Logistics, 2006 [source] Simulation of fine particle formation by precipitation using computational fluid dynamics THE CANADIAN JOURNAL OF CHEMICAL ENGINEERING, Issue 5 2000 Damien Piton Abstract The 4-environment generalized micromixing (4-EGM) model is applied to describe turbulent mixing and precipitation of barium sulfate in a tubular reactor. The model is implemented in the commercial computational fluid dynamics (CFD) software Fluent. The CFD code is first used to solve for the hydrodynamic fields (velocity, turbulence kinetic energy, turbulent energy dissipation). The species concentrations and moments of the crystal size distribution (CSD) are then computed using user-defined transport equations. CFD simulations are performed for the tubular reactor used in an earlier experimental study of barium sulfate precipitation.
The 4-EGM CFD results are shown to compare favourably with CFD results obtained using the presumed beta-PDF model. The latter has previously been shown to yield good agreement with experimental data for the mean crystal size at the outlet of the tubular reactor. [source] Historical Review of Penile Prosthesis Design and Surgical Techniques: Part 1 of a Three-Part Review Series on Penile Prosthetic Surgery THE JOURNAL OF SEXUAL MEDICINE, Issue 3 2009 Gerard D. Henry MD ABSTRACT Introduction. Throughout history, many attempts to cure complete impotence have been recorded. Early attempts at a surgical approach involved the placement of rigid devices to support the natural process of erection formation. However, these early attempts placed the devices outside of the corpora cavernosa, with high rates of erosion and infection. Today, most urologists in the United States place an inflatable penile prosthesis (IPP) with an antibiotic coating inside the tunica albuginea.
Aim. The article describes the key historical landmarks in penile prosthesis design and surgical techniques. Methods. The article reviews and evaluates the published literature for important contributions to penile prosthesis design and surgical techniques. Main Outcome Measures. The article reviews and evaluates the historical landmarks in penile prosthesis design and surgical techniques that appear to improve outcomes and advance the field of prosthetic urology for the treatment of erectile dysfunction. Results. The current review demonstrates the stepwise progression of the field, starting with the use of stenting to achieve rigidity in the impotent patient. Modern advances were first used in war-injured patients, which led to early implantation with foreign material. The design and techniques of penile prosthesis placement have advanced such that now more complications are linked to medical issues than to failure of the implant. Conclusions. Today's IPPs have high patient satisfaction rates and low mechanical failure rates. Gerard D. Henry. Historical review of penile prosthesis design and surgical techniques: Part 1 of a three-part review series on penile prosthetic surgery. J Sex Med 2009;6:675–681. [source]
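The decomposition-plus-stochastic-dynamic-programming idea in the multireservoir abstract above can be illustrated with a toy single-reservoir subproblem of the kind each reservoir would be reduced to. The discretization, the two-state (wet/dry) inflow chain and the reward function below are invented for illustration; they are not the model from the paper.

```python
import numpy as np

# Discretized single-reservoir stochastic dynamic program: states are storage
# levels, actions are releases, and hydrology follows a two-state Markov chain.
S = np.arange(0, 11)             # storage levels 0..10 (capacity 10)
A = np.arange(0, 5)              # release decisions 0..4
inflow = np.array([1, 3])        # inflow in the dry (0) and wet (1) states
P = np.array([[0.7, 0.3],        # hydrology transition probabilities
              [0.4, 0.6]])
T = 12                           # planning horizon (e.g. months)

def reward(release, storage):
    # hypothetical benefit: value of released water, small high-storage penalty
    return 2.0 * release - 0.1 * max(storage - 8, 0)

# value[t, s, h]: expected future reward from period t, storage s, hydrology h
value = np.zeros((T + 1, len(S), 2))
policy = np.zeros((T, len(S), 2), dtype=int)

for t in range(T - 1, -1, -1):       # backward induction
    for s in S:
        for h in (0, 1):
            best, best_a = -np.inf, 0
            for a in A:
                if a > s:            # cannot release more than is stored
                    continue
                q = 0.0
                for h2 in (0, 1):
                    s2 = min(s - a + inflow[h2], S[-1])   # spill above capacity
                    q += P[h, h2] * value[t + 1, s2, h2]
                v = reward(a, s) + q
                if v > best:
                    best, best_a = v, a
            value[t, s, h] = best
            policy[t, s, h] = best_a
```

The low dimension of the state space (storage level plus hydrology state) is what makes each decomposed subproblem tractable; solving one such program per reservoir yields the per-reservoir operating policies the abstract describes.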