Selected Abstracts

An Eye Gaze Model for Dyadic Interaction in an Immersive Virtual Environment: Practice and Experience
COMPUTER GRAPHICS FORUM, Issue 1 2004. V. Vinayagamoorthy
This paper describes a behavioural model used to simulate realistic eye-gaze behaviour and body animations for avatars representing participants in a shared immersive virtual environment (IVE). The model was used in a study designed to explore the impact of avatar realism on the perceived quality of communication within a negotiation scenario. Our eye-gaze model was based on data and studies carried out on the behaviour of eye-gaze during face-to-face communication. The technical features of the model are reported here. Information about the motivation behind the study, the experimental procedures and a full analysis of the results is given in [17].

Uncertainty and Sensitivity Analysis of Damage Identification Results Obtained Using Finite Element Model Updating
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2009. Babak Moaveni
The shake table tests were designed to damage the building progressively through several historical seismic motions reproduced on the shake table. A sensitivity-based finite element (FE) model updating method was used to identify damage in the building. The estimation uncertainty in the damage identification results was observed to be significant, which motivated the authors to perform, through numerical simulation, an uncertainty analysis on a set of damage identification results. This study systematically investigates the performance of FE model updating for damage identification. The damaged structure is simulated numerically through a change in stiffness in selected regions of an FE model of the shear wall test structure. The uncertainty of the identified damage (location and extent) due to variability of five input factors is quantified through analysis of variance (ANOVA) and meta-modeling. These five input factors are: (1-3) the level of uncertainty in the (identified) modal parameters of each of the first three longitudinal modes, (4) the spatial density of measurements (number of sensors), and (5) the mesh size in the FE model used in the updating procedure (a type of modeling error). A full factorial design of experiments is considered for these five input factors. In addition to ANOVA and meta-modeling, this study investigates the one-at-a-time sensitivity of the identified damage to the level of uncertainty in the identified modal parameters of the first three longitudinal modes. The results demonstrate that the level of confidence in damage identification results obtained through FE model updating is a function not only of the level of uncertainty in the identified modal parameters, but also of choices made in the design of experiments (e.g., the spatial density of measurements) and of modeling errors (e.g., the mesh size). The experiments can therefore be designed so that the more influential input factors (those contributing most to the total uncertainty of the damage identification results) are set at optimum levels, yielding more accurate damage identification results.
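The full factorial design and ANOVA-based variance decomposition described above can be illustrated with a small sketch. The response function and factor levels below are hypothetical stand-ins for the damage-identification error studied in the paper; only the mechanics of enumerating a 2^5 factorial and computing main-effect sums of squares are shown.

```python
import itertools
import numpy as np

# Hypothetical two-level settings for the five input factors named in the
# abstract: noise on the first three identified modes, sensor density and
# FE mesh size. Two levels per factor gives a 2^5 full factorial design.
factors = {
    "mode1_noise": [0.01, 0.05],
    "mode2_noise": [0.01, 0.05],
    "mode3_noise": [0.01, 0.05],
    "n_sensors":   [8, 16],
    "mesh_size":   [0.5, 1.0],
}

def damage_id_error(m1, m2, m3, sensors, mesh):
    """Hypothetical response: error in the identified damage extent."""
    return 10.0 * (m1 + m2 + m3) + 2.0 / sensors + 0.5 * mesh

names = list(factors)
runs = np.array(list(itertools.product(*factors.values())))
y = np.array([damage_id_error(*r) for r in runs])
grand = y.mean()

# Main-effect sum of squares per factor: between-level variability of the
# response, weighted by the number of runs at each level.
for j, name in enumerate(names):
    ss = sum(
        (runs[:, j] == lv).sum() * (y[runs[:, j] == lv].mean() - grand) ** 2
        for lv in factors[name]
    )
    print(f"{name:12s} main-effect SS = {ss:.3f}")
```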
Real-Time OD Estimation Using Automatic Vehicle Identification and Traffic Count Data
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 1 2002. Michael P. Dixon
A key input to many advanced traffic management operations strategies is the origin–destination (OD) matrix. To examine the possibility of estimating OD matrices in real time, two constrained OD estimators, based on generalized least squares and Kalman filtering, were developed and tested. A one-at-a-time processing method was introduced to provide an efficient, organized framework for incorporating observations from multiple data sources in real time. The estimators were tested under different conditions based on the type of prior OD information available, the type of assignment available, and the type of link volume model used. The performance of the Kalman filter estimators was also compared to that of the generalized least squares estimator to provide insight regarding their performance characteristics relative to one another for given scenarios. Automatic vehicle identification (AVI) tag counts were used so that observed and estimated OD parameters could be compared. While the approach was motivated using AVI data, the methodology can be generalized to any situation where traffic counts are available and origin volumes can be estimated reliably. AVI data were used primarily through the incorporation of prior observed OD information as measurements, through the inclusion of a deterministic link volume component that makes use of OD data extracted from the latest time interval in which all trips have been completed, and through the use of link choice proportions estimated from link travel time data. It was found that utilizing prior observed OD data along with link counts improves estimator accuracy relative to OD estimation based exclusively on link counts.
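A minimal sketch of the Kalman-filtering half of the approach: OD flows form the state vector, link counts are the measurements, and a matrix of link choice proportions maps one to the other. The matrices, noise levels and random-walk dynamics below are assumptions for illustration, not the constrained estimator developed in the paper.

```python
import numpy as np

# Sketch of a Kalman-filter OD estimator: the state x holds OD flows, and
# link counts y relate to x through an assignment matrix A of link choice
# proportions. All dimensions and values are hypothetical.
rng = np.random.default_rng(0)
n_od, n_links = 4, 3
A = rng.uniform(0, 1, (n_links, n_od))   # link choice proportions
x = np.full(n_od, 100.0)                 # prior OD flows
P = np.eye(n_od) * 25.0                  # prior covariance
Q = np.eye(n_od) * 4.0                   # process noise (OD drift)
R = np.eye(n_links) * 9.0                # count measurement noise

for t in range(10):
    # Predict: random-walk OD dynamics leave x unchanged, inflate P.
    P = P + Q
    # Simulated link counts from a "true" OD vector (for the sketch only).
    y = A @ (100 + 5 * rng.standard_normal(n_od))
    # Update.
    S = A @ P @ A.T + R
    K = P @ A.T @ np.linalg.solve(S, np.eye(n_links))
    x = x + K @ (y - A @ x)
    P = (np.eye(n_od) - K @ A) @ P
    x = np.clip(x, 0, None)              # crude stand-in for the constraints

print("estimated OD flows:", np.round(x, 1))
```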
Modeling Goodwill for Banks: A Residual Income Approach with Empirical Tests
CONTEMPORARY ACCOUNTING RESEARCH, Issue 1 2006. Joy Begley
This paper uses the residual income valuation technique outlined in Feltham and Ohlson 1996 to examine the relation between stock valuations and accounting numbers for a prototypical banking firm. Prior work of this nature typically assumes a manufacturing setting. This paper contributes to the prior research by clarifying how the approach can be extended to settings where value is created from financial assets and liabilities. Key elements of our model include allowing banks to generate positive net present value from either lending or borrowing activities, and allowing for accounting policy to affect valuation through the loan loss allowance. We validate our model using archival data analysis, and interpret coefficients in light of our modeling assumptions. These results suggest that banks create value more from deposit-taking activities than from lending activities. Vuong tests confirm that our model outperforms adaptations of the unbiased accounting model of Ohlson 1995 and adaptations of the base model proposed by Beaver, Eger, Ryan, and Wolfson 1989. However, our model is outperformed by the popular net income–book value model used in many empirical studies, and we can formally reject one of our key modeling assumptions. These tests of our model suggest future avenues for improving upon the theoretical analysis.
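The residual income technique that the paper builds on can be sketched in its generic textbook form: equity value equals current book value plus the discounted stream of expected residual income under clean-surplus accounting. The bank-specific Feltham-Ohlson structure used in the paper is richer; the figures below are hypothetical.

```python
# Minimal sketch of generic residual income valuation:
#   value = book value + sum_t RI_t / (1 + r)^t,
# where RI_t = earnings_t - r * book_{t-1} (residual, or abnormal, income)
# and book value follows clean surplus: book_t = book_{t-1} + earnings - dividends.
# All inputs are invented for illustration.
def residual_income_value(book0, earnings, payout, r):
    value, book = book0, book0
    for t, e in enumerate(earnings, start=1):
        ri = e - r * book                 # residual income for period t
        value += ri / (1 + r) ** t        # discount back to today
        book += e - payout * e            # clean-surplus book value update
    return value

print(residual_income_value(book0=100.0,
                            earnings=[12.0, 12.5, 13.0, 13.2, 13.5],
                            payout=0.4, r=0.10))
```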
A Community Intervention by Firefighters to Increase 911 Calls and Aspirin Use for Chest Pain
ACADEMIC EMERGENCY MEDICINE, Issue 4 2006. Hendrika Meischke PhD
Objectives: To test the effectiveness of an intervention, delivered face-to-face by local firefighters, designed to increase utilization of 911 and self-administration of aspirin for seniors experiencing chest pain. Methods: King County, Washington, was divided into 126 geographically distinct areas that were randomized to intervention and control areas. A mailing list identified households of seniors within these areas. More than 20,000 homes in the intervention areas were contacted by local firefighters. Data on all 911 calls for chest pain and self-administration of aspirin were collected from the medical incident report form (MIRF). The unit of analysis was the area. Firefighters delivered a heart attack survival kit (which included an aspirin) and counseled participants on the importance of aspirin and 911 use for chest pain. Main outcome measures were 911 calls for chest pain and aspirin ingestion for a chest pain event, obtained from the MIRFs collected by emergency medical services personnel for two years after the intervention. Results: There were significantly more calls (16%) among seniors on the mailing list in the intervention areas than in the control areas in the first year after the intervention. Among seniors who were not on the mailing list, there was little difference between the intervention and control areas. The results were somewhat sensitive to the analytical model used and to an outlier in the treatment group. Conclusions: A community-based firefighter intervention can be effective in increasing appropriate response to symptoms of a heart attack among elders.

Effect of variation of normal force on seismic performance of resilient sliding isolation systems in highway bridges
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 15 2005. Hirokazu Iemura
In this study, a series of shaking table tests was carried out on scaled models of two seismically isolated highway bridges to investigate the effect of rocking motion and vertical acceleration on the seismic performance of resilient sliding isolation (RSI) systems. In addition, the performance of RSI is compared with a system having solely natural rubber bearings. Test results show that variation of the normal force on the sliders due to rocking and vertical acceleration makes no significant difference to the response of RSI systems. The response of the prototype isolated bridge and of the model used in the experiments was also obtained analytically, using a non-linear model for the isolation systems. It is observed that, for seismically isolated bridges, the dynamic response of full-scale complex structures can be predicted with acceptable accuracy by experiments using a simple model of the structure. Copyright © 2005 John Wiley & Sons, Ltd.

Modeling missing binary outcome data in a successful web-based smokeless tobacco cessation program
ADDICTION, Issue 6 2010. Keith Smolkowski
Aim: To examine various methods to impute missing binary outcomes from a web-based tobacco cessation intervention. Design: The ChewFree randomized controlled trial used a two-arm design to compare tobacco abstinence at both the 3- and 6-month follow-up for participants randomized to either an enhanced web-based intervention condition or a basic information-only control condition. Setting: Internet in the United States and Canada. Participants: Secondary analyses focused upon 2523 participants in the ChewFree trial. Measurements: Point-prevalence tobacco abstinence measured at 3- and 6-month follow-up. Findings: The results of this study confirmed the findings of the original ChewFree trial and highlighted the use of different missing-data approaches to achieve intent-to-treat analyses when confronted with substantial attrition. The use of different imputation methods yielded results that differed in both the size of the estimated treatment effect and the standard errors. Conclusions: The choice of imputation model used to analyze missing binary outcome data can substantially affect the size and statistical significance of the treatment effect. Without additional information about the missing cases, imputation models can overestimate the effect of treatment. Multiple imputation methods are recommended, especially those that permit a sensitivity analysis of their impact.
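The abstract recommends multiple imputation for the missing binary abstinence outcome. A minimal sketch of the idea with simulated data: impute each missing outcome from a simple within-arm Bernoulli model, estimate the treatment effect on each completed dataset, and pool with Rubin's rules. The imputation model and all numbers are assumptions; the trial's actual models were more elaborate.

```python
import numpy as np

# Multiple imputation for a missing binary outcome, pooled by Rubin's rules.
rng = np.random.default_rng(1)
n = 400
arm = rng.integers(0, 2, n)                      # 0 = control, 1 = intervention
y = rng.binomial(1, 0.15 + 0.10 * arm).astype(float)
y[rng.random(n) < 0.3] = np.nan                  # ~30% attrition

M, effects, variances = 20, [], []
for m in range(M):
    yi = y.copy()
    for a in (0, 1):                             # impute within each arm
        obs = yi[(arm == a) & ~np.isnan(yi)]
        p = (obs.sum() + 1) / (len(obs) + 2)     # smoothed abstinence rate
        miss = (arm == a) & np.isnan(yi)
        yi[miss] = rng.binomial(1, p, miss.sum())
    p1, p0 = yi[arm == 1].mean(), yi[arm == 0].mean()
    n1, n0 = (arm == 1).sum(), (arm == 0).sum()
    effects.append(p1 - p0)                      # risk difference per imputation
    variances.append(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)

qbar = np.mean(effects)                          # pooled point estimate
ubar, b = np.mean(variances), np.var(effects, ddof=1)
total_var = ubar + (1 + 1 / M) * b               # Rubin's rules total variance
print(f"risk difference {qbar:.3f}, SE {np.sqrt(total_var):.3f}")
```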
Antiepileptogenesis and Seizure Prevention Trials with Antiepileptic Drugs: Meta-Analysis of Controlled Trials
EPILEPSIA, Issue 4 2001. Nancy R. Temkin
Purpose: To synthesize evidence concerning the effect of antiepileptic drugs (AEDs) for seizure prevention and to contrast their effectiveness for provoked versus unprovoked seizures. Methods: Medline, Embase, and the Cochrane Clinical Trials Register were the primary sources of trials, but all trials found were included. Minimal requirements: seizure-prevention outcome given as a fraction of cases; AED or control assigned by a random or quasi-random mechanism. Single abstracter. Aggregate relative risk and heterogeneity were evaluated using Mantel–Haenszel analyses; a random-effects model was used if heterogeneity was significant. Results: Forty-seven trials evaluated seven drugs or combinations for preventing seizures associated with fever, alcohol, malaria, perinatal asphyxia, contrast media, tumors, craniotomy, and traumatic brain injury. Effective: phenobarbital for recurrence of febrile seizures [relative risk (RR), 0.51; 95% confidence interval (CI), 0.32–0.82] and cerebral malaria (RR, 0.36; CI, 0.23–0.56); diazepam for contrast media–associated seizures (RR, 0.10; CI, 0.01–0.79); phenytoin for provoked seizures after craniotomy or traumatic brain injury (craniotomy: RR, 0.42; CI, 0.25–0.71; TBI: RR, 0.33; CI, 0.19–0.59); carbamazepine for provoked seizures after traumatic brain injury (RR, 0.39; CI, 0.17–0.92); lorazepam for alcohol-related seizures (RR, 0.12; CI, 0.04–0.40). A reduction of more than 25% was ruled out for valproate for unprovoked seizures after traumatic brain injury (RR, 1.28; CI, 0.76–2.16) and for carbamazepine for unprovoked seizures after craniotomy (RR, 1.30; CI, 0.75–2.25). Conclusions: Effective or promising results predominate for provoked (acute, symptomatic) seizures. For unprovoked (epileptic) seizures, no drug has been shown to be effective, and for some a clinically important effect has been ruled out.
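The pooled relative risks quoted above come from Mantel–Haenszel aggregation across trials. A compact sketch of the fixed-effect Mantel–Haenszel relative risk, with invented per-trial counts standing in for the extracted trial data:

```python
# Mantel-Haenszel pooled relative risk across trials.
# Each tuple is a hypothetical trial:
# (events_treated, n_treated, events_control, n_control).
trials = [
    (8, 100, 16, 100),
    (5,  60, 11,  62),
    (12, 150, 20, 148),
]

num = den = 0.0
for et, nt, ec, nc in trials:
    t = nt + nc
    num += et * nc / t      # weight: events in treated x control group size
    den += ec * nt / t      # weight: events in control x treated group size
rr_mh = num / den
print(f"Mantel-Haenszel pooled RR = {rr_mh:.2f}")
```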
Managing the curriculum – for a change
EUROPEAN JOURNAL OF DENTAL EDUCATION, Issue 2 2007. M. Manogue
This article reports the model used to design a new dental curriculum, the design process used and its underlying rationale. The evidence base for the process is reviewed and discussed. Some suggestions are offered for those engaged in developing new curricula. The main conclusions drawn are that the design process needs to be managed openly and democratically; that the alignment model is the most appropriate for designing dental curricula; that the process of curriculum design is inextricably linked to organisational development; and that the concepts of learning organisations, communities of practice and culture all have their part to play in the process of introducing deep innovations, such as new curricula.

Prevalence of cervical spinal pain in craniomandibular pain patients
EUROPEAN JOURNAL OF ORAL SCIENCES, Issue 2 2001. Corine M. Visscher
It has often been suggested that patients with a craniomandibular disorder (CMD) more often suffer from a cervical spine disorder (CSD) than persons without a CMD. However, most studies used no controlled, blind design, and conclusions were based on differing signs and symptoms. In this study, the recognition of CMD and CSD was based upon the presence of pain. The aim of this study was to determine the prevalence of cervical spinal pain in persons with or without craniomandibular pain, using a controlled, single-blind design. From 250 persons, a standardised oral history was taken, and a physical examination of the masticatory system and the neck was performed. Three classification models were used: one based on symptoms only; a second on signs only; and a third based on a combination of symptoms and signs. The CMD patients were also subdivided into three subgroups: patients with mainly myogenous pain; mainly arthrogenous pain; and both myogenous and arthrogenous pain. Craniomandibular pain patients more often showed cervical spinal pain than persons without craniomandibular pain, independent of the classification model used. No difference in the prevalence of cervical spinal pain was found between the three subgroups of craniomandibular pain patients.

Scaling analysis of water retention curves for unsaturated sandy loam soils by using fractal geometry
EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 3 2010. C. Fallico
Fractal geometry was deployed to analyse water retention curves (WRC). The three models used to estimate the curves were the general pore-solid fractal (PSF) model and two specific cases of the PSF model: the Tyler & Wheatcraft (TW) and the Rieu & Sposito (RS) models. The study was conducted on 30 undisturbed, sandy loam soil samples taken from a field and subjected to laboratory analysis. The fractal dimension, a non-variable scale factor characterizing each water retention model proposed, was estimated by direct scaling. The method for determining the fractal dimension proposed here entails limiting the analysis to the interval between an upper and a lower pressure-head cut-off on a log-log plot, and obtaining the dimension from the straight regression line that interpolates the points in the interval with the largest coefficient of determination, R². The scale relative to the cut-off interval used to determine the fractal behaviour in each model is presented. Furthermore, a second range of pressure-head values was analysed to approximate the fractal dimension of the pore surface. The PSF model exhibited greater spatial variation than the TW or RS models for the parameter values typical of a sandy loam soil. An indication of the variability of the fractal dimension across the entire area studied is also provided.
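The "direct scaling" procedure described above can be sketched as a windowed log-log regression: scan candidate cut-off intervals, fit a line in each, and keep the window with the largest R². The synthetic curve below follows a Tyler & Wheatcraft-like power law, and the slope-to-dimension relation used (slope = D - 3) is an assumption tied to that model.

```python
import numpy as np

# Direct-scaling sketch: find the log-log window with the best linear fit.
rng = np.random.default_rng(2)
h = np.logspace(0, 3, 40)                         # pressure head (hypothetical)
theta = 0.4 * (h / h[0]) ** (2.6 - 3.0)           # synthetic TW-like WRC, D = 2.6
theta *= np.exp(0.02 * rng.standard_normal(40))   # multiplicative noise

logh, logt = np.log10(h), np.log10(theta)
best = (None, -np.inf)
for i in range(len(h) - 10):                      # require at least 10 points
    for j in range(i + 10, len(h) + 1):
        slope, intercept = np.polyfit(logh[i:j], logt[i:j], 1)
        r2 = np.corrcoef(logh[i:j], logt[i:j])[0, 1] ** 2
        if r2 > best[1]:
            best = ((i, j, slope), r2)

(i, j, slope), r2 = best
print(f"window [{i},{j}), slope {slope:.3f}, D = {slope + 3:.3f}, R^2 = {r2:.4f}")
```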
NATURAL SELECTION ALONG AN ENVIRONMENTAL GRADIENT: A CLASSIC CLINE IN MOUSE PIGMENTATION
EVOLUTION, Issue 7 2008. Lynne M. Mullen
We revisited a classic study of morphological variation in the oldfield mouse (Peromyscus polionotus) to estimate the strength of selection acting on pigmentation patterns and to identify the underlying genes. We measured 215 specimens collected by Francis Sumner in the 1920s from eight populations across a 155-km, environmentally variable transect from the white sands of Florida's Gulf coast to the dark, loamy soil of southeastern Alabama. Like Sumner, we found significant variation among populations: mice inhabiting coastal sand dunes had larger feet, longer tails, and lighter pigmentation than inland populations. Most strikingly, all seven pigmentation traits examined showed a sharp decrease in reflectance about 55 km from the coast, with most of the phenotypic change occurring over less than 10 km. The largest change in soil reflectance occurred just south of this break in pigmentation. Geographic analysis of microsatellite markers shows little interpopulation differentiation, so the abrupt change in pigmentation is not associated with recent secondary contact or reduced gene flow between adjacent populations. Using these genetic data, we estimated that the strength of selection needed to maintain the observed distribution of pigment traits ranged from 0.0004 to 21%, depending on the trait and model used. We also examined changes in allele frequency of SNPs in two pigmentation genes, Mc1r and Agouti, and show that mutations in the cis-regulatory region of Agouti may contribute to this cline in pigmentation. The concordance between environmental variation and pigmentation in the face of high levels of interpopulation gene flow strongly implies that natural selection is maintaining a steep cline in pigmentation and the genes underlying it.

ADAPTIVE CONSTRAINTS AND THE PHYLOGENETIC COMPARATIVE METHOD: A COMPUTER SIMULATION TEST
EVOLUTION, Issue 1 2002. Emilia P. Martins
Recently, the utility of modern phylogenetic comparative methods (PCMs) has been questioned because of the seemingly restrictive assumptions required by these methods. Although most comparative analyses involve traits thought to be undergoing natural or sexual selection, most PCMs require the assumption that the traits are evolving by less directed random processes, such as Brownian motion (BM). In this study, we use computer simulation to generate data under more realistic evolutionary scenarios and consider the statistical abilities of a variety of PCMs to estimate correlation coefficients from these data. We found that correlations estimated without taking phylogeny into account were often quite poor and never substantially better than those produced by the other tested methods. In contrast, most PCMs performed quite well even when their assumptions were violated. Felsenstein's independent contrasts (FIC) method gave the best performance in many cases, even when weak constraints had been acting throughout phenotypic evolution. When strong constraints acted in opposition to variance-generating (i.e., BM) forces, however, FIC correlation coefficients were biased in the direction of those BM forces. In most cases, all other PCMs tested (phylogenetic generalized least squares, phylogenetic mixed model, spatial autoregression, and phylogenetic eigenvector regression) yielded good statistical performance, regardless of the details of the evolutionary model used to generate the data. Actual parameter estimates given by different PCMs for each dataset, however, were occasionally very different from one another, suggesting that the choice among them should depend on the types of traits and evolutionary processes being considered.

Modelling the Influence of Age, Body Size and Sex on Maximum Oxygen Uptake in Older Humans
EXPERIMENTAL PHYSIOLOGY, Issue 2 2000. Patrick J. Johnson
The purpose of this study was to describe the influence of body size and sex on the decline in maximum oxygen uptake (VO2max) in older men and women. A stratified random sample of 152 men and 146 women, aged 55–86 years, was drawn from the study population. The influence of age on VO2max, independent of differences in body mass (BM) or fat-free mass (FFM), was investigated using the following allometric model: VO2max = BM^b (or FFM^b) × exp(a + c × age + d × sex) × ε. The model was linearised and the parameters identified using standard multiple regression. The BM model explained 68.8% of the variance in VO2max. The parameters (± s.e.e., standard error of the estimate) for ln BM (0.563 ± 0.070), age (-0.0154 ± 0.0012), sex (0.242 ± 0.024) and the intercept (-1.09 ± 0.32) were all significant (P < 0.001). The FFM model explained 69.3% of the variance in VO2max; the parameters (± s.e.e.) for ln FFM (0.772 ± 0.090), age (-0.0159 ± 0.0012) and the intercept (-1.57 ± 0.36) were significant (P < 0.001), while sex (0.077 ± 0.038) was significant at P = 0.0497. Regardless of the model used, the age-associated decline was similar, with a relative decline of 15% per decade (a factor of 0.984 per year of age) in VO2max in older humans being estimated. The study has demonstrated that, for a randomly drawn sample, the age-related loss in VO2max is determined, in part, by the loss of fat-free body mass. When this factor is accounted for, the loss of VO2max across age is similar in older men and women.
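The allometric model above is fitted by linearisation, exactly as the abstract describes: taking logs turns it into an ordinary multiple regression, ln VO2max = a + b ln BM + c age + d sex. The sketch below simulates data around the published BM-model coefficients and recovers them by least squares; the simulated sample is an assumption.

```python
import numpy as np

# Fit the linearised allometric model by ordinary least squares.
rng = np.random.default_rng(3)
n = 298
bm = rng.normal(75, 12, n)                       # body mass, kg (hypothetical)
age = rng.uniform(55, 86, n)
sex = rng.integers(0, 2, n)                      # 0 = female, 1 = male
ln_vo2 = (-1.09 + 0.563 * np.log(bm) - 0.0154 * age + 0.242 * sex
          + 0.05 * rng.standard_normal(n))       # simulated around paper's fit

X = np.column_stack([np.ones(n), np.log(bm), age, sex])
coef, *_ = np.linalg.lstsq(X, ln_vo2, rcond=None)
a, b, c, d = coef
print(f"a={a:.3f} b={b:.3f} c={c:.4f} d={d:.3f}")
print(f"implied decline per decade: {1 - np.exp(10 * c):.1%}")  # ~15%
```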
Silver (Ag+) reduces denitrification and induces enrichment of novel nirK genotypes in soil
FEMS MICROBIOLOGY LETTERS, Issue 2 2007. Ingela Noredal Throbäck
The use of silver ions in industry to prevent microbial growth is increasing, and silver is a new and overlooked heavy-metal contaminant in sewage sludge-amended soil. The denitrifying community was the model used to assess the dose-dependent effects of silver ions on microorganisms over time in soil microcosms. Silver caused a sigmoid dose-dependent reduction in denitrification activity, and no recovery was observed during 90 days. Denitrifiers with nirK, which encodes the copper nitrite reductase, were targeted to estimate abundance and community composition for some of the concentrations. The nirK copy number decreased at the highest addition (100 mg Ag kg-1 soil), but the nirK diversity increased. Treatment-specific sequences not clustering with any deposited nirK sequences were found, indicating that silver induces enrichment of novel nirK denitrifiers.

Assessment and compensation of inconsistent coupling conditions in point-receiver land seismic data
GEOPHYSICAL PROSPECTING, Issue 1 2007. Claudio Bagaini
We introduce a method to detect and compensate for inconsistent coupling conditions that arise during onshore seismic data acquisition. The reflected seismic signals, the surface waves, or the ambient-noise records can be used to evaluate the different coupling conditions of closely spaced geophones. We derive frequency-dependent correction operators using a parametric approach based upon a simple model of the interaction between geophone and soil. The redundancy of the available measurements permits verification of the assumptions made on the input signals in order to derive the method and to assess the validity of the model used. The method requires point-receiver data in which the signals recorded by the individual geophones are digitized. We have verified the accuracy of the method by applying it to multicomponent ambient-noise records acquired during a field experiment in which the coupling conditions were controlled and modified during different phases of the experiment. We also applied the method to field data acquired without the coupling conditions being controlled, and found that only a few geophones showed anomalous behaviour. It was also found that the length of the noise records routinely acquired during commercial surveys is too short to provide enough statistics for the application of our method.
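As an illustration of the kind of simple parametric geophone-soil model the coupling paper refers to, a common choice is a damped harmonic oscillator, whose frequency response filters the recorded ground motion; a frequency-dependent correction operator can then be formed as a ratio of responses. The oscillator model, its parameters f0 and zeta, and the numbers below are assumptions for this sketch, not the paper's specific parameterisation or estimation procedure.

```python
import numpy as np

# Assumed coupling model: second-order (damped harmonic oscillator) response
# with coupling resonance f0 and damping zeta.
def coupling_response(f, f0, zeta):
    s = 1j * f / f0
    return 1.0 / (1.0 + 2.0 * zeta * s + s ** 2)

f = np.linspace(1.0, 200.0, 400)                    # frequency axis, Hz
h_ref = coupling_response(f, f0=150.0, zeta=0.6)    # well-planted geophone
h_bad = coupling_response(f, f0=60.0, zeta=0.3)     # poorly coupled geophone

# Correction operator: multiply the bad trace's spectrum by h_ref / h_bad
# to map it back to the reference coupling condition.
correction = h_ref / h_bad
print("max |correction| =", np.abs(correction).max().round(2))
```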
Euler deconvolution of the analytic signal and its application to magnetic interpretation
GEOPHYSICAL PROSPECTING, Issue 3 2004. P. Keating
Euler deconvolution and the analytic signal are both used for semi-automatic interpretation of magnetic data. They are used mostly to delineate contacts and obtain rapid source depth estimates. For Euler deconvolution, the quality of the depth estimation depends mainly on the choice of the proper structural index, which is a function of the geometry of the causative bodies. Euler deconvolution applies only to functions that are homogeneous. This is the case for the magnetic field due to contacts, thin dikes and poles. Fortunately, many complex geological structures can be approximated by these simple geometries. In practice, the Euler equation is also solved for a background regional field. For the analytic signal, the model used is generally a contact, although other models, such as a thin dike, can be considered. It can be shown that if a function is homogeneous, its analytic signal is also homogeneous. Deconvolution of the analytic signal is then equivalent to Euler deconvolution of the magnetic field with a background field. However, computation of the analytic signal effectively removes the background field from the data. Consequently, it is possible to solve for both the source location and the structural index. Once these parameters are determined, the local dip and the susceptibility contrast can be determined from relationships between the analytic signal and the orthogonal gradients of the magnetic field. The major advantage of this technique is that it allows the automatic identification of the type of source. Implementation of this approach is demonstrated for recent high-resolution survey data from an Archean granite-greenstone terrane in northern Ontario, Canada.
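Conventional Euler deconvolution, which the paper extends to the analytic signal, solves the Euler homogeneity equation in a data window for the source position and background field at a fixed structural index N. The sketch below does this on a synthetic profile over a point-pole source (homogeneity gives N = 2 here); the geometry and amplitudes are invented.

```python
import numpy as np

# Windowed Euler deconvolution sketch: solve
#   (x - x0) df/dx + (z - z0) df/dz = N (B - f)
# for source position (x0, z0) and background B at a trial structural index N.
k, x0_true, depth = 1000.0, 50.0, 10.0
x = np.linspace(0.0, 100.0, 201)             # observation profile at z = 0
r2 = (x - x0_true) ** 2 + depth ** 2
f = k / r2                                   # synthetic point-pole field
dfdx = -2 * k * (x - x0_true) / r2 ** 2
dfdz = -2 * k * (0.0 - depth) / r2 ** 2      # z axis positive down

N = 2.0
# Rearranged linear system:
#   x0*dfdx + z0*dfdz + N*B = x*dfdx + z*dfdz + N*f   (z = 0 on this profile)
A = np.column_stack([dfdx, dfdz, N * np.ones_like(x)])
b = x * dfdx + 0.0 * dfdz + N * f
(x0, z0, B), *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"x0 = {x0:.1f}, depth = {z0:.1f}, background = {B:.2f}")
```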
Modelling canopy CO2 fluxes: are 'big-leaf' simplifications justified?
GLOBAL ECOLOGY, Issue 6 2001. A. D. Friend
1. The 'big-leaf' approach to calculating the carbon balance of plant canopies is evaluated for inclusion in the ETEMA model framework. This approach assumes that canopy carbon fluxes have the same relative responses to the environment as any single leaf, and that the scaling from leaf to canopy is therefore linear. 2. A series of model simulations was performed with two models of leaf photosynthesis, three distributions of canopy nitrogen, and two levels of canopy radiation detail. Leaf- and canopy-level responses to light and nitrogen, both as instantaneous rates and daily integrals, are presented. 3. Observed leaf nitrogen contents of unshaded leaves are over 40% lower than the big-leaf approach requires. Scaling from these leaves to the canopy using the big-leaf approach may underestimate canopy photosynthesis by ~20%. A leaf photosynthesis model that treats within-leaf light extinction displays characteristics that contradict the big-leaf theory. Observed distributions of canopy nitrogen are closer to those required to optimize this model than the homogeneous model used in the big-leaf approach. 4. It is theoretically consistent to use the big-leaf approach with the homogeneous photosynthesis model to estimate canopy carbon fluxes if canopy nitrogen and leaf area are known and if the distribution of nitrogen is assumed optimal. However, real nitrogen profiles are not optimal for this photosynthesis model, and caution is necessary in using the big-leaf approach to scale satellite estimates of leaf physiology to canopies. Accurate prediction of canopy carbon fluxes requires canopy nitrogen, leaf area, declining nitrogen with canopy depth, the heterogeneous model of leaf photosynthesis and the separation of sunlit and shaded leaves. The exact nitrogen profile is not critical, but realistic distributions can be predicted using a simple model of canopy nitrogen allocation.
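The linear leaf-to-canopy scaling at issue can be made concrete with a toy comparison: a multi-layer canopy integrates a saturating leaf response over Beer's-law light extinction, while a big-leaf shortcut scales the top-leaf rate by the absorbed-light fraction. Both the leaf model and the particular big-leaf scaling form below are common textbook choices, assumed here purely for illustration.

```python
import numpy as np

# Big-leaf vs multi-layer canopy photosynthesis, hypothetical parameters.
I0, k_ext, LAI = 1500.0, 0.5, 5.0    # top-of-canopy PAR, extinction coeff, leaf area index
amax, alpha = 20.0, 0.05             # saturating rate, initial quantum efficiency

def leaf_photo(I):
    """Rectangular-hyperbola leaf light response (assumed leaf model)."""
    return amax * alpha * I / (amax + alpha * I)

# Multi-layer: sum leaf rates over cumulative leaf area L with Beer's-law light.
L = np.linspace(0.0, LAI, 500)
dL = L[1] - L[0]
canopy_layered = np.sum(leaf_photo(I0 * np.exp(-k_ext * L))) * dL

# Big-leaf: top-leaf rate scaled by the light-absorption integral
# (one common linear-scaling form of the big-leaf assumption).
canopy_bigleaf = leaf_photo(I0) * (1 - np.exp(-k_ext * LAI)) / k_ext

print(f"layered: {canopy_layered:.1f}  big-leaf: {canopy_bigleaf:.1f}")
```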
Spatially distributed observations in constraining inundation modelling uncertainties
HYDROLOGICAL PROCESSES, Issue 16 2005. Micha Werner
The performance of two modelling approaches for predicting floodplain inundation is tested using observed flood extent and 26 distributed floodplain level observations for the 1997 flood event in the town of Usti nad Orlici in the Czech Republic. Although the one-dimensional hydrodynamic model and the integrated one- and two-dimensional model are shown to perform comparably against the flood extent data, the latter shows better performance against the distributed level observations. Comparable performance in predicting the extent of inundation is found to be primarily a result of the urban reach considered, with flood extent constrained by road and railway embankments. Uncertainty in the elevation model used in both approaches is shown to have little effect on the reliability of predicting flood extent, but a greater impact on the ability to predict the distributed level observations. These results show that the reliability of flood inundation modelling in urban reaches, where flood risk assessment is of more interest than in rural reaches, can be improved greatly if distributed observations of levels in the floodplain are used to constrain model uncertainties. Copyright © 2005 John Wiley & Sons, Ltd.

Evaluating explicit and implicit routing for watershed hydro-ecological models of forest hydrology at the small catchment scale
HYDROLOGICAL PROCESSES, Issue 8 2001. C. L. Tague
This paper explores the behaviour and sensitivity of a watershed model used for simulating lateral soil water redistribution and runoff production. In applications such as modelling the effects of land-use change in small headwater catchments, interactions between soil moisture, runoff and ecological processes are important. Because climate, soil and canopy characteristics are spatially variable, both the pattern of soil moisture and the associated outflow must be represented in modelling these processes. This study compares implicit and explicit routing approaches to modelling the evolution of soil moisture pattern and spatially variable runoff production. It also addresses the implications of using different landscape partitioning strategies. The paper presents the results of calibration and application of these different routing and landscape partitioning approaches on a 60 ha forested watershed in Western Oregon. For comparison, the different approaches are incorporated into a physically based hydro-ecological model, RHESSys, and the resulting simulated soil moisture, runoff production and sensitivity to unbiased error are examined. Results illustrate that both routing approaches can be calibrated to achieve a reasonable fit between observed and modelled outflow. Calibrated values for effective watershed hydraulic conductivity are higher for the explicit routing approach, which illustrates differences between the two routing approaches in their representation of internal watershed dynamics. The explicit approach shows a seasonal shift in drainage organization from watershed to more local control as the climate moves from a winter wet to a summer dry period; the assumptions used in the implicit approach maintain the same pattern of drainage organization throughout the season. The implicit approach is also more sensitive to random error in soil and topographic input information, particularly during wetter periods. Comparison between the two routing approaches illustrates the advantage of the explicit routing approach, although the loss of computational efficiency associated with it is noted. To compare different strategies for partitioning the landscape, a non-grid-based method of partitioning is introduced and shown to be comparable to grid-based partitioning in terms of simulated soil moisture and runoff production. Copyright © 2001 John Wiley & Sons, Ltd.
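The implicit-versus-explicit routing contrast can be caricatured on a one-dimensional hillslope: an implicit scheme assigns wetness directly from a topographic wetness index (TOPMODEL-style), while an explicit scheme marches water between neighbouring cells down the local gradient each step. This toy illustrates the two ideas only; it is not the RHESSys implementation, and every parameter is invented.

```python
import numpy as np

n = 20
slope = np.linspace(0.2, 0.01, n)                # steep top to flat bottom
area = np.arange(1, n + 1, dtype=float)         # upslope contributing area

# "Implicit": soil moisture is a static function of the topographic
# wetness index ln(a / tan(beta)), capped at saturation.
twi = np.log(area / slope)
theta_implicit = np.clip(0.2 + 0.05 * twi, 0.0, 0.45)

# "Explicit": iterate a conductivity-limited downslope flux between cells.
theta = np.full(n, 0.30)
K, dt = 0.1, 1.0
for _ in range(200):
    flux = K * slope[:-1] * theta[:-1] * dt      # flux from cell i to i+1
    theta[:-1] -= flux
    theta[1:] += flux
    theta = np.clip(theta, 0.0, 0.45)            # excess treated as runoff

print("implicit:", np.round(theta_implicit[::5], 2))
print("explicit:", np.round(theta[::5], 2))
```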
Improving Parsing of 'BA' Sentences for Machine Translation
IEEJ TRANSACTIONS ON ELECTRICAL AND ELECTRONIC ENGINEERING, Issue 1 2008. Dapeng Yin, Non-member
Research on Chinese-Japanese machine translation has been ongoing for many years, and the field has become increasingly refined. In practical machine translation systems, the processing of simple, short Chinese sentences gives reasonably good results; the translation of complex long Chinese sentences, however, still presents difficulties. For example, these systems are still unable to solve the translation problem of complex 'BA' sentences. In this article a new method of parsing 'BA' sentences for machine translation, based on valency theory, is proposed. A 'BA' sentence is one that contains the prepositional word 'BA'. The structural character of a 'BA' sentence is that the verb comes after the object word: the object word following the 'BA' preposition is used as an adverbial modifier of an active word. First, a large number of grammar items from Chinese grammar books were collected, and elementary judgment rules were set by classifying the collected grammar items. These judgment rules were then applied to actual Chinese text and modified by checking their results immediately; the rules were further checked and refined using statistical information from an actual corpus. A five-segment model for 'BA' sentence translation is then put forward on the basis of this analysis. Finally, we applied the proposed model in our machine translation system and evaluated the experimental results. It achieved a 91.3% rate of accuracy, and this satisfying result verified the effectiveness of our five-segment model for 'BA' sentence translation. Copyright © 2007 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc.
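The canonical 'BA' construction places the object between the preposition BA (把) and the verb: subject + 把 + object + verb (+ complement). The abstract does not define its five segments, so the five slots below are an assumed illustration of that surface pattern, using a toy verb lexicon to locate the verb.

```python
import re

# Illustrative five-slot segmentation of a 'BA' sentence (assumed slots:
# subject, BA marker, object, verb, complement). A real parser would use
# valency information rather than a fixed verb list.
def split_ba(sentence, verbs):
    m = re.match(r"(?P<subject>.*?)(?P<ba>把)(?P<rest>.+)", sentence)
    if not m:
        return None
    rest = m.group("rest")
    for v in verbs:                      # first known verb after the object
        i = rest.find(v)
        if i > 0:                        # object must be non-empty
            return {"subject": m.group("subject"), "ba": "把",
                    "object": rest[:i], "verb": v,
                    "complement": rest[i + len(v):]}
    return None

# "He put the book on the table": 他 + 把 + 书 + 放 + 在桌子上
print(split_ba("他把书放在桌子上", verbs=["放"]))
```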
Micromechanical viscoelasto-plastic models and finite element implementation for rate-independent and rate-dependent permanent deformation of stone-based materials
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 13 2010. Qingli Dai
This paper presents parallel and serial viscoelasto-plastic models to simulate the rate-independent and the rate-dependent permanent deformation of stone-based materials, respectively. The generalized Maxwell viscoelastic model and Chaboche's plastic model were employed to formulate the proposed parallel and serial viscoelasto-plastic constitutive laws. The finite element (FE) implementation of the parallel model used a displacement-based incremental formulation for the viscoelastic part and an elastic predictor-plastic corrector scheme for the elastoplastic component. The FE framework of the serial viscoelasto-plastic model employed a viscoelastic predictor-plastic corrector algorithm. Stone-based materials consist of irregular aggregates, matrix and air voids; this study used asphalt mixtures as an example. A digital sample was generated with imaging analysis from an optically scanned surface image of an asphalt mixture specimen. The modeling scheme employed continuum elements to mesh the effective matrix, and rigid bodies for the aggregates. ABAQUS user material subroutines defined with the proposed viscoelasto-plastic matrix models were employed. Micromechanical FE simulations were conducted on the digital mixture sample with the viscoelasto-plastic matrix models. The simulation results showed that the serial viscoelasto-plastic matrix model generated more permanent deformation than the parallel one when using identical material parameters and displacement loadings. The effect of loading rates on the viscoelastic and viscoelasto-plastic mixture behaviors was investigated, and permanent deformations under cyclic loadings were determined with FE simulations. The comparison studies showed that the simulation results correctly predicted the rate-independent and rate-dependent viscoelasto-plastic constitutive properties of the proposed matrix models. Overall, these studies indicate that the developed micromechanical FE models are able to predict the global viscoelasto-plastic behaviors of stone-based materials. Copyright © 2009 John Wiley & Sons, Ltd.
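The generalized Maxwell model serving as the viscoelastic building block of both constitutive laws admits a standard recursive stress update, with each branch relaxing on its own time constant. The one-dimensional sketch below uses that semi-analytical update under a ramp-and-hold strain; the moduli and relaxation times are invented, and the plastic (Chaboche) part is omitted.

```python
import numpy as np

# Generalized Maxwell (Prony series) stress update, 1-D and strain-driven.
E_inf = 50.0                          # long-term (equilibrium) modulus
E = np.array([200.0, 100.0])          # branch moduli (hypothetical)
tau = np.array([0.1, 1.0])            # branch relaxation times (hypothetical)

def stress_history(strain, dt):
    h = np.zeros_like(E)              # internal stresses of the branches
    out, eps_prev = [], 0.0
    g = np.exp(-dt / tau)
    for eps in strain:
        deps = eps - eps_prev
        # Semi-analytical update assuming linear strain over the step.
        h = h * g + E * deps * (1 - g) * tau / dt
        out.append(E_inf * eps + h.sum())
        eps_prev = eps
    return np.array(out)

t = np.linspace(0.0, 5.0, 501)
strain = 0.01 * np.minimum(t, 1.0)    # ramp to 1% strain, then hold
sigma = stress_history(strain, dt=t[1] - t[0])
print(f"peak stress {sigma.max():.2f}, relaxed stress {sigma[-1]:.2f}")
```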
A robust methodology for RANS simulations of highly underexpanded jets
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 12 2008. G. Lehnasch
This work aims at developing and combining numerical tools adapted to the simulation of the near field of highly underexpanded jets. An overview of the challenging numerical problems related to the complex shock/expansion structure encountered in these flows is given, and an efficient, low-cost numerical strategy is proposed to overcome them, even on short computational domains. Based on common upwinding algorithms used on unstructured meshes in a mixed finite-volume/finite-element approach, it relies on an appropriate utilization of zonal anisotropic remeshing algorithms. This methodology is validated for the whole near field of cold air jets issuing from axisymmetric convergent nozzles and yielding various underexpansion ratios. In addition, the most usual corrections of the k–ε model used to take into account the compressibility effects on turbulence are precisely assessed. Copyright © 2007 John Wiley & Sons, Ltd.

Assessment of two-equation turbulence modelling for high Reynolds number hydrofoil flows
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 3 2004. N. Mulvany
This paper presents an evaluation of the capability of the turbulence models available in the commercial CFD code FLUENT 6.0 for application to hydrofoil turbulent boundary-layer separation flow at high Reynolds numbers. Four widely applied two-equation RANS turbulence models were assessed through comparison with experimental data at Reynolds numbers of 8.284×10^6 and 1.657×10^7: the standard k–ε model, the realizable k–ε model, the standard k–ω model and the shear-stress-transport (SST) k–ω model. It was found that the realizable k–ε turbulence model, used with enhanced wall functions and near-wall modelling techniques, consistently provides superior performance in predicting the flow characteristics around the hydrofoil. Copyright © 2004 John Wiley & Sons, Ltd.

The Market for Professional Services in Indonesia
INTERNATIONAL JOURNAL OF AUDITING, Issue 2 2004. Ilias G. Basioudis
This paper reports the results of a study investigating the market for professional services in Indonesia, a country that has not previously been examined in the audit fee literature. A well-developed research model from the prior literature is applied, and the empirical findings suggest broad similarities in the pricing of professional services in Indonesia and in other countries previously studied. In addition to extending the results of prior research to a country not previously studied, this paper examines whether the large-auditor fee premium documented in other countries exists in Indonesia, especially after the major Asian financial crisis of 1997/98, since which almost all companies in this geographical area have exercised tight budget controls. The results suggest that no audit fee premium accrues to Indonesian Big 5 auditors, in contrast to the large audit firm fee premium documented in many other countries.

Women's responses to fashion media images: a study of female consumers aged 30–59
INTERNATIONAL JOURNAL OF CONSUMER STUDIES, Issue 3 2010. Joy M. Kozar
The purpose of this study was to examine whether female consumers ranging in age from 30 to 59 prefer fashion advertising models more closely resembling their own age. The sample for this study consisted of 182 women. Stimuli included full-color photographs of current fashion models. A questionnaire designed to explore participants' responses to the stimuli included scales measuring participants' beliefs about the stimulus models' appearance and attractiveness, participants' purchase intentions and perceived similarity with the models, and participants' perceived fashionability of the models' clothing. Participants rated models appearing older in age significantly higher than younger models on the characteristics related to appearance and attractiveness. Advertisements with older models also had a significant positive relationship to participants' purchase intentions as compared with younger models. Participants who perceived more similarity to the models were found to have more positive beliefs about the models' appearance and attractiveness and the fashionability of their clothing. Perceived similarity also had a significant positive relationship to participants' purchase intentions. As a result of this study, findings suggest that marketers and retailers should consider the age of the models used in their promotional materials. Specifically, it is possible that female consumers either transitioning into, or currently in, the middle adulthood life stages may have a preference for fashion models more closely resembling their own age group.

On the effect of the local turbulence scales on the mixing rate of diffusion flames: assessment of two different combustion models
INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 10 2002. Jose Lopes
A mathematical model for the prediction of turbulent flow, the diffusion combustion process, heat transfer including thermal radiation, and pollutant formation inside combustion chambers is described. To validate the model, the results are compared herein against experimental data available in the open literature. The model comprises differential transport equations governing the above-mentioned phenomena, resulting from the mathematical and physical modelling, which are solved by the control-volume formulation technique. The results yielded by the two different turbulent-mixing physical models used for combustion, the simple chemical reacting system (SCRS) and the eddy break-up (EBU), are analysed so that the need to make recourse to local turbulent scales to evaluate the reactants' mixing rate can be assessed. Predictions are performed for a gaseous-fuelled combustor fired with two different burners that induce different aerodynamic conditions inside the combustion chamber. One of the burners has a geometry typical of those used in gaseous-fired boilers, with fuel firing in the centre surrounded by concentric oxidant firing, while the other introduces the air into the combustor through two different swirling concentric streams. Generally, the results exhibit good agreement with the experimental values. NO predictions are also performed by a prompt-NO formation model used as a post-processor together with a thermal-NO formation model, the results again being generally in good agreement with the experimental values. The predictions revealed that the mixing between the reactants occurred very close to the burner and almost instantaneously, that is, immediately after the fuel-containing eddies came into contact with the oxidant-containing eddies. As a result, away from the burner, the SCRS model, which assumes an infinitely fast mixing rate, proved to be as accurate as the EBU model for the present predictions. Closer to the burner, the EBU model, which makes the reactants' mixing rate a function of the local turbulent scales, yielded slightly slower mixing rates and fuel and oxidant concentrations slightly higher than those obtained with the SCRS model. As a consequence, the NO concentration predictions with the EBU combustion model are generally higher than those obtained with the SCRS model, owing to the higher concentrations of fuel and oxygen closer to the burner when the local turbulent scales are taken into account in the mixing process of the reactants. The SCRS, being faster and as accurate as the EBU model in predicting combustion properties, therefore appears to be more appropriate. However, should NO be a predicted variable, the EBU model becomes more appropriate, owing to its better oxygen concentration results, since it solves a transport equation for the oxidant concentration, which plays a dominant role in the prompt-NO formation rate. Copyright © 2002 John Wiley & Sons, Ltd.
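The eddy break-up closure discussed above ties the mean reaction rate to the large-eddy turnover frequency ε/k and to the locally deficient reactant. The sketch assumes a Magnussen/Hjertager-style rate expression; the model constant A, the stoichiometric ratio and the flow values are illustrative, not taken from the paper.

```python
# Eddy break-up (EBU) mixing-limited mean reaction rate, assumed
# Magnussen/Hjertager-style closure:
#   R = A * rho * (eps / k) * min(Y_fuel, Y_ox / s)
# where s is the stoichiometric oxidant-to-fuel mass ratio. By contrast,
# the SCRS treats mixing as infinitely fast, with no dependence on k or eps.
def ebu_rate(rho, k, eps, y_fuel, y_ox, s, A=4.0):
    """Mixing-limited mean reaction rate, kg/(m^3 s)."""
    return rho * A * (eps / k) * min(y_fuel, y_ox / s)

# Two hypothetical locations with different local turbulence scales:
print(ebu_rate(rho=1.1, k=5.0, eps=50.0,  y_fuel=0.05, y_ox=0.20, s=4.0))
print(ebu_rate(rho=1.1, k=1.0, eps=100.0, y_fuel=0.05, y_ox=0.20, s=4.0))
```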
Tracking of multiple target types with a single neural extended Kalman filter
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 5 2010. Kathleen A. Kramer
The neural extended Kalman filter is an adaptive state estimation routine that can be used in target-tracking systems to aid in tracking through maneuvers without prior knowledge of the targets' dynamics. Within the neural extended Kalman filter, a neural network is trained using a Kalman filter training paradigm that is driven by the same residual as the state estimator, and the difference between the a priori model used in the prediction steps of the estimator and the actual target dynamics is approximated. An important benefit of the technique is its versatility, because little if any a priori knowledge of the target dynamics is needed. This allows the technique to be used in a generic tracking system that will encounter various classes of targets. In this paper, the neural extended Kalman filter is applied simultaneously to three separate classes of targets, each with different maneuver capabilities. The results show that the approach is well suited for use within a tracking system with multiple possible or unknown target characteristics. © 2010 Wiley Periodicals, Inc.

Estimating causal effects from observational data with a model for multiple bias
INTERNATIONAL JOURNAL OF METHODS IN PSYCHIATRIC RESEARCH, Issue 2 2007. Michael Höfler
Conventional analyses of observational data may be biased due to confounding, sampling and measurement, and may yield interval estimates that are much too narrow because they do not take into account uncertainty about unknown bias parameters, such as misclassification probabilities. We used a simple, multiple bias adjustment method to estimate the causal effect of social anxiety disorder (SAD) on subsequent depression. A Monte Carlo sensitivity analysis was applied to data from the Early Developmental Stages of Psychopathology (EDSP) study, and bias due to confounding, sampling and measurement was modelled. With conventional logistic regression analysis, the risk for depression was elevated in the presence of SAD only in the older cohort (age 17–24 years at baseline assessment); odds ratio (OR) = 3.06, 95% confidence interval (CI) 1.64–5.70, adjusted for sex and age. The bias-adjusted estimate was 2.01, with interval limits of 0.61 and 9.71. Thus, given the data and the bias model used, there was considerably more uncertainty about the real effect, but the probability that SAD increases the risk for subsequent depression (OR > 1) was nonetheless 88.6%. Multiple bias modelling, if properly used, reveals the necessity for a better understanding of bias, suggesting a need to conduct larger and more adequate validation studies on the instruments used to diagnose mental disorders.

Scheduling streaming flows on the downlink shared channel of UMTS
INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 2 2007. Joy Kuri
In Universal Mobile Telecommunication Systems (UMTS), the Downlink Shared Channel (DSCH) may be used to provide streaming services. The traffic model for streaming services is different from the continuously backlogged model used in much of the literature: each connection specifies a required service rate over an interval of time. In this paper, we are interested in determining how k DSCH frames should be allocated among a set of I connections. We need a scheduler that is channel-aware, so that channels presently enjoying low fading losses can be exploited to achieve higher aggregate throughput. On the other hand, the scheduler is also required to be fair, so that each connection obtains a throughput as close as possible to what it requires. We introduce the notion of discrepancy to capture the inherent trade-off between aggregate throughput and fairness, and we show that the discrepancy criterion provides a flexible means of balancing efficiency, as measured by aggregate throughput, against fairness. Our problem, then, is to schedule mobiles so as to minimize the discrepancy over the control horizon. We provide a simple low-complexity heuristic called ITEM that is provably optimal in certain cases; in particular, we show that ITEM is optimal when applied in the UMTS context. We then compare the performance of ITEM with that of other algorithms and show that it performs better in terms of both fairness and aggregate throughput. Thus, ITEM provides benefits in both dimensions, fairness and efficiency, and is therefore a promising algorithm for scheduling streaming connections. Copyright © 2007 John Wiley & Sons, Ltd.
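A sketch of a channel-aware, fairness-oriented frame allocator in the spirit of the scheduling problem above: here "discrepancy" is modelled as the total shortfall between required and allocated service, and each frame goes to the connection with the largest rate-weighted shortfall. This greedy rule is an assumption for illustration; it is not the ITEM heuristic, whose definition the abstract does not give.

```python
import numpy as np

# Channel-aware greedy frame allocation over a control horizon of k frames.
rng = np.random.default_rng(4)
k, n_conn = 40, 4                                 # frames, connections
required = np.array([20.0, 15.0, 10.0, 5.0])      # required service (units)
served = np.zeros(n_conn)

for frame in range(k):
    rate = rng.uniform(0.5, 1.5, n_conn)          # per-frame channel quality
    shortfall = np.maximum(required - served, 0)  # unmet service so far
    j = int(np.argmax(rate * shortfall))          # exploit good channels,
    served[j] += rate[j]                          # but favour starved flows

discrepancy = np.abs(required - served).sum()     # assumed discrepancy measure
print("served:", np.round(served, 1), " discrepancy:", round(discrepancy, 1))
```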