Model Approach (model + approach)
Kinds of Model Approach: Selected Abstracts

MEASURING MONETARY POLICY IN THE UK: A FACTOR-AUGMENTED VECTOR AUTOREGRESSION MODEL APPROACH
THE MANCHESTER SCHOOL, Issue 2005
GIANLUCA LAGANÀ
This paper investigates the determinants of UK interest rates using a factor-augmented vector autoregression (FAVAR) model, similar to the one suggested by Bernanke, Boivin and Eliasz (Quarterly Journal of Economics, Vol. 120 (2005), No. 1, pp. 387-422). The method allows impulse response functions to be generated for all the variables in the data set and so is able to provide a more complete description of UK monetary policy than is possible using standard VARs. The results show that the addition of factors to a benchmark VAR generates a more reasonable characterization of the effects of unexpected increases in the interest rate and, in particular, gets rid of a 'price puzzle' response present in the benchmark VAR. The extra information generated by this method, however, also brings to light other identification issues, notably house price and stock market 'puzzles'. Importantly, the out-of-sample prediction performance of the factor-augmented VARs is also very good and strongly superior to that of the benchmark VAR and simple autoregressive models. [source]

A Constructive Graphical Model Approach for Knowledge-Based Systems: A Vehicle Monitoring Case Study
COMPUTATIONAL INTELLIGENCE, Issue 3 2003
Y. Xiang
Graphical models have been widely applied to uncertain reasoning in knowledge-based systems. For many of the problems tackled, a single graphical model is constructed before individual cases are presented, and the model is used to reason about each new case. In this work, we consider a class of problems whose solution requires inference over a very large number of models that are impractical to construct a priori. We conduct a case study in the domain of vehicle monitoring and then generalize the approach taken. We show that the previously held negative belief on the applicability of graphical models to such problems is unjustified. We propose a set of techniques based on domain decomposition, model separation, model approximation, model compilation, and re-analysis to meet the computational challenges imposed by the combinatorial explosion. Experimental results on vehicle monitoring demonstrated good performance in near real time. [source]

An Analysis of the Time to Adoption of Local Sales Taxes: A Duration Model Approach
PUBLIC BUDGETING AND FINANCE, Issue 1 2007
DAVID L. SJOQUIST
The timing of the decision of local governments to adopt a local sales tax is explored in a duration model with time-varying covariates. Our framework suggests a set of factors associated with the decision to adopt a local sales tax, and we find empirical support for these factors. We also consider whether the adoption by one jurisdiction depends on the adoption by neighboring jurisdictions, and find empirical support for interdependency of behavior. [source]
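The duration-model idea in the Sjoquist abstract above lends itself to a short illustration. Below is a minimal sketch using the Cox proportional hazards estimator from statsmodels on simulated adoption times; the covariates (`fiscal_pressure`, `neighbor_adopted`) and all parameter values are hypothetical, and for simplicity the covariates are time-fixed rather than time-varying as in the paper.

```python
# Sketch of a duration ("time to adoption") model in the spirit of the
# Sjoquist abstract. Data are simulated; names are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
fiscal_pressure = rng.normal(size=n)         # covariate: fiscal stress proxy
neighbor_adopted = rng.binomial(1, 0.4, n)   # covariate: neighbor adoption

# Simulate adoption times from an exponential hazard that rises with both
# covariates, then right-censor jurisdictions still unadopted at year 30.
hazard = 0.05 * np.exp(0.5 * fiscal_pressure + 0.8 * neighbor_adopted)
time = rng.exponential(1.0 / hazard)
observed = (time <= 30).astype(int)    # 1 = adoption observed, 0 = censored
time = np.minimum(time, 30)

X = np.column_stack([fiscal_pressure, neighbor_adopted])
model = sm.PHReg(time, X, status=observed)   # Cox proportional hazards
print(model.fit().summary())
```

A positive coefficient on `neighbor_adopted` would mirror the paper's finding of interdependent adoption behavior.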
Explaining Stock Market Correlation: A Gravity Model Approach
THE MANCHESTER SCHOOL, Issue S1 2002
Thomas J Flavin
A gravity model, frequently used to explain trade patterns, is used to explain stock market correlations. The main result of the trade literature is that geography matters for goods markets. Physical location and trading costs should be less of an issue in asset markets. However, we find that geographical variables still matter when examining equity market linkages. In particular, the number of overlapping opening hours and sharing a common border tend to increase cross-country stock market correlation. These results may stem from asymmetrical information and investor sentiment, lending some empirical support for these explanations of the international diversification puzzle. [source]

Testing For Government Intertemporal Solvency: A Smooth Transition Error Correction Model Approach
THE MANCHESTER SCHOOL, Issue 6 2001
Andrea Cipollini
Applied macroeconomists have tested for the government intertemporal solvency condition by either testing for linear stationarity in the total government deficit series or testing for linear cointegration between total government spending and total tax revenues. A number of authors have focused, in particular, on structural breaks in the government deficit process. In this paper, we use a smooth transition error correction model to test and estimate a shift in the adjustment toward a linear cointegration relationship between the government spending to output ratio and the total tax revenues to output ratio. Estimation results show that government authorities react only to large (in absolute value) changes in the government spending to output ratio. Residual diagnostic tests are provided, and they show that the model is not misspecified. [source]
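As a rough illustration of the smooth transition error correction idea in the Cipollini abstract, the sketch below fits, by nonlinear least squares, an ECM whose adjustment speed is scaled by a logistic transition function. The data, the transition variable, and all coefficients are simulated stand-ins, not the paper's specification.

```python
# Minimal smooth transition error correction sketch: the speed of adjustment
# toward equilibrium depends on a logistic transition in the lagged change
# of the spending ratio. Simulated data; illustrative only.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
T = 300
ect = rng.normal(size=T)     # lagged cointegration residual (stand-in)
dz = rng.normal(size=T)      # lagged change in spending/output ratio
G_true = 1.0 / (1.0 + np.exp(-4.0 * (np.abs(dz) - 1.0)))   # true transition
dy = -0.5 * G_true * ect + 0.1 * rng.normal(size=T)        # ECM equation

def resid(p):
    alpha, gamma, c = p
    G = 1.0 / (1.0 + np.exp(-gamma * (np.abs(dz) - c)))
    return dy - alpha * G * ect

fit = least_squares(resid, x0=[-0.1, 1.0, 0.5])
print("alpha, gamma, c =", fit.x)   # adjustment kicks in only for large |dz|
```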
Hydrodynamic Cell Model: General Formulation and Comparative Analysis of Different Approaches
THE CANADIAN JOURNAL OF CHEMICAL ENGINEERING, Issue 5 2007
Emiliy K. Zholkovskiy
This paper is concerned with the Cell Model method of addressing hydrodynamic flow through a system of solid particles. The starting point of the analysis is the general problem formulation intended for describing a pressure-driven flow through a diaphragm, which can be considered as a set of representative cells having arbitrary shape and containing any number of particles. Using the general problem formulation, the hydrodynamic field inside an individual representative cell is interrelated with the applied pressure difference and the external flow velocity. To this end, four relationships containing integrals over the outer boundary of a representative cell are derived in the paper. Assuming that the representative cell is a sphere containing a single particle in the centre, the derived general relationships are transformed into the outer cell boundary conditions employed in the literature by different authors. The number of outer boundary conditions obtained in this way exceeds the number required. Accordingly, by choosing different sets of the outer boundary conditions, different models are considered and compared with each other and with the results obtained by others for regular particle arrays. The common and different features of the hydrodynamic and electrodynamic versions of the Cell Model approaches are analyzed. Finally, it is discussed which version of the cell model gives the best approximation when describing pressure- and electrically driven flows through a diaphragm and sedimentation of particles. [source]

Experimental Investigation and Modeling Approach of the Phenylacetonitrile Alkylation Process in a Microreactor
CHEMICAL ENGINEERING & TECHNOLOGY (CET), Issue 6 2009
E. S. Borovinskaya
The application of microreaction technology has the potential to intensify chemical processes. It is therefore of great interest to investigate the operating efficiency of a multiphase process such as the alkylation of phenylacetonitrile in a microreactor and to compare the performance to a batch reactor. The undeniable advantages of continuous microreactor systems for this process were demonstrated. Furthermore, the influence of the organic to aqueous phase ratio in the microreactor was investigated. A model of the reaction course was formulated based on experimental data. This model was used in the analysis and modeling of the alkylation process in a microreactor and found to be adequate. The optimal microreactor performance conditions were determined using a numerical optimization technique (Harrington's desirability function) and confirmed by experiments. [source]

A Comparison of Modeling Approaches for Dispersion Homopolymerization of MMA and Styrene in Supercritical CO2
MACROMOLECULAR REACTION ENGINEERING, Issue 4 2008
Iraís A. Quintero-Ortega
A comparison of kinetic models for dispersion polymerization of MMA and styrene in supercritical CO2 is presented. The limiting case of solution polymerization, as a simplified case, was also addressed. Calculation of the partition of components between the continuous and dispersed phases was emphasized. Experimental data for the solution and dispersion polymerizations of styrene and MMA, using different types of stabilizers, were used to guide the study. Although all the models analyzed can be considered "adequate" in representing the behavior of the system, some of their strengths and drawbacks have been highlighted. [source]
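The Borovinskaya et al. abstract above mentions optimization via Harrington's desirability function; the sketch below applies the classical one-sided Harrington transform d = exp(-exp(-y')) to a made-up response surface. The response function, variable names, and scaling are invented for illustration.

```python
# Harrington-style desirability optimization sketch. The response surface
# is a hypothetical stand-in for conversion vs. operating conditions.
import numpy as np
from scipy.optimize import minimize

def conversion(x):
    # hypothetical response surface: conversion vs. (flow rate, phase ratio)
    flow, ratio = x
    return 0.9 - 0.3 * (flow - 0.5) ** 2 - 0.2 * (ratio - 1.2) ** 2

def desirability(x):
    yprime = 10.0 * (conversion(x) - 0.8)   # scale response to y'
    return np.exp(-np.exp(-yprime))         # one-sided Harrington transform

res = minimize(lambda x: -desirability(x), x0=[0.3, 1.0])
print("optimal (flow, ratio):", res.x, " d =", desirability(res.x))
```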
One particularly problematic example is that of wild ungulates, which are increasingly being established in regions outside their natural distribution range due to human hunting interests. Unfortunately, we know little of the effects these large herbivores may have on the host ecosystems. This study deals with a first comparative analysis of the habitat requirements of two ungulate species that may be facing competition for resources in the south of Europe: the native Iberian ibex (Capra pyrenaica) and the exotic aoudad (Ammotragus lervia). The aoudad is a North African caprid introduced in 1970 as a game species in south-eastern Spain. It has adapted well, and populations have been expanding freely since then. Ecological Niche Factor Analysis is used to describe the realized niche of both species where their distribution ranges merge. Both species occupy marginal areas of rugged terrain in the region. Marginality is higher for the Iberian ibex, which also presents a higher tolerance of secondary environmental gradients than the aoudad. Highly suitable areas for each species are secondarily suitable for the other. Reclassified and cross-tabulated habitat suitability maps showing the areas of potential spatial coexistence and differences in ecological traits between the two species are provided. The results obtained do not allow us to infer resource competition between these species. However, current aoudad expansion could result in it invading the favoured habitats of the ibex. Inadequate hunting policy and monitoring, and the increasing climatic resemblance of the study region to the aoudad's native range due to a strong desertification process, are facilitating a high rate of expansion. We strongly recommend eradicating or, at least, monitoring these exotic populations, and promoting active conservation practices, if the unique natural resources present in this European region are to be preserved. [source]

Habitat size and number in multi-habitat landscapes: a model approach based on species-area curves
ECOGRAPHY, Issue 1 2002
Even Tjørve
This paper discusses species diversity in simple multi-habitat environments. Its main purpose is to present simple mathematical and graphical models of how landscape patterns affect species numbers. The idea is to build models of species diversity in multi-habitat landscapes by combining species-area curves for different habitats. Predictions are made about how variables such as species richness and species overlap between habitats influence the proportion of the total landscape each habitat should constitute, and how many habitats it should be divided into, in order to sustain the maximal number of species. Habitat size and numbers are the only factors discussed here, not habitat spatial patterns. Among the predictions are: 1) where there are differences in species diversity between habitats, optimal landscape patterns contain larger proportions of species-rich habitats. 2) Species overlap between habitats shifts the optimum further towards larger proportions of species-rich habitat types. 3) Species overlap also shifts the optimum towards fewer habitat types. 4) Species diversity in landscapes with large species overlap is more resistant to changes in landscape (or reserve) size. This type of model approach can produce theories useful to nature and landscape management in general, and the design of nature reserves and national parks in particular. [source]
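A toy version of the Tjørve-style calculation: total richness of a two-habitat landscape as a function of how the area is split, built from power-law species-area curves with a crude correction for shared species. All parameter values, and the particular overlap correction, are illustrative assumptions.

```python
# Species-area sketch after the Tjørve abstract: richness of a two-habitat
# landscape vs. the area share given to the richer habitat.
import numpy as np

A = 1.0                       # total landscape area
z = 0.25                      # species-area exponent, S = c * A**z
c_rich, c_poor = 120.0, 60.0  # richness constants of the two habitats
overlap = 0.2                 # fraction of species shared between habitats

p = np.linspace(0.01, 0.99, 99)           # share of area in the rich habitat
S1 = c_rich * (p * A) ** z                # species in the rich habitat
S2 = c_poor * ((1 - p) * A) ** z          # species in the poor habitat
S_total = S1 + S2 - overlap * np.minimum(S1, S2)  # crude overlap correction

print("optimal share for the rich habitat:", p[np.argmax(S_total)])
```

Consistent with the abstract's predictions, raising `overlap` pushes the optimum toward a larger share for the species-rich habitat.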
Problematic heroin use incidence trends in Spain
ADDICTION, Issue 2 2009
Albert Sánchez-Niubò
Aims: To estimate the annual incidence of heroin use in Spain. Participants and design: Data on individuals' year of first heroin use (from 1971 to 2005), year of first heroin treatment between 1991 and 2005, and most frequent route of heroin administration when presenting to treatment were obtained from the Spanish Drug Observatory Register and used to calculate the delay between onset and treatment. By using a log-linear model approach it was possible to correct for missing observations (heroin users who presented for treatment before 1991 and those who had still not presented by the end of 2005) and to estimate heroin incidence over time. Findings: The estimated incidence of problematic heroin use in the population aged 15-44 peaked at 190 per 100 000 in 1980, after rising rapidly from less than 40 per 100 000 in 1971, and fell subsequently to about 8 per 100 000 in 2005. On average, incidence was five times higher in men. Injecting heroin incidence peaked and declined rapidly from 1980; as heroin smoking did not decline as rapidly, from 1985 onwards its estimated incidence has remained above that of heroin injecting. The delay between starting heroin use and entering treatment had a median of 3 years. Conclusions: We demonstrate the utility of a method to estimate heroin incidence from analysis of observed trends in presentations at specialist drug treatment facilities. The estimates suggest that the incidence of heroin use, especially injecting, has fallen since 1980 and is now lower than in the early 1970s. [source]

Water-extractability, free ion activity, and pH explain cadmium sorption and toxicity to Folsomia candida (Collembola) in seven soil-pH combinations
ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 8 2004
Cornelis A. M. van Gestel
Toxicity of cadmium to Folsomia candida was determined in soils at different pHs (3.5, 5.0, and 6.5). The Langmuir sorption constant (KL), based on pore-water or water-extractable concentrations, showed a pH-related increase of cadmium sorption that was most pronounced when using free Cd2+ ion activities ({Cd2+}s). Two-species Langmuir isotherms that used total cadmium concentration ([Cd]) or {Cd2+} and pH in the water-extractable fractions gave the best description of cadmium sorption on all soils together. Cadmium concentrations causing 50% reduction of growth and reproduction (median effective concentrations [EC50s]) differed by a factor of 4.5 to 20 when based on total soil concentrations, and increased with increasing pH. However, when based on water-extractable or pore-water [Cd] or {Cd2+}, EC50s decreased with increasing pH, but differences between soils were still a factor of 4.5 to 32. The EC50s differed by less than a factor of 2.2 when based on body [Cd] in the surviving animals. Two-species Langmuir isotherms were used to relate body [Cd] in survivors to {Cd2+}, corrected for pH in water-extractable or pore-water fractions. An excellent description of effects on growth and reproduction was found when related to the body concentrations predicted in this way; the difference in EC50s between soils was reduced to a factor <2. This demonstrates that F. candida is mainly exposed to cadmium through the soil solution, and suggests that principles of a biotic ligand model approach may be applicable for this soil organism. [source]
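A two-site ("two-species") Langmuir isotherm of the kind used in the van Gestel abstract can be fitted in a few lines with SciPy; here the sorption data are synthetic and the true parameters are chosen arbitrarily.

```python
# Fit of a two-site Langmuir isotherm to synthetic sorption data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir2(c, q1, k1, q2, k2):
    # sorbed amount as the sum of two Langmuir terms
    return q1 * k1 * c / (1 + k1 * c) + q2 * k2 * c / (1 + k2 * c)

c = np.linspace(0.01, 10, 40)              # solution concentration
q_obs = langmuir2(c, 5.0, 2.0, 2.0, 0.1)   # "true" curve
q_obs += np.random.default_rng(2).normal(0, 0.05, c.size)  # measurement noise

popt, _ = curve_fit(langmuir2, c, q_obs, p0=[1, 1, 1, 1])
print("q1, K1, q2, K2 =", popt)
```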
Adjusting for mortality effects in chronic toxicity testing: Mixture model approach
ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 1 2000
Shin Cheng David Wang
Chronic toxicity tests, such as the Ceriodaphnia dubia 7-d test, are typically analyzed using standard statistical methods such as analysis of variance or regression. Recent research has emphasized the use of Poisson regression or more generalized regression for the analysis of the fecundity data from these studies. A possible problem in using standard statistical techniques is that the toxicant may cause mortality as well as reduced fecundity. A mixture model that accounts for fecundity and mortality is proposed for the analysis of data arising from these studies. Inferences about key parameters in the model are discussed. A joint estimate of the inhibition concentration is proposed based on the model. Confidence interval estimation via the bootstrap method is discussed. An example is given for a study involving copper and mercury. [source]

Dynamic versus static models in cost-effectiveness analyses of anti-viral drug therapy to mitigate an influenza pandemic
HEALTH ECONOMICS, Issue 5 2010
Anna K. Lugnér
Conventional (static) models used in health economics implicitly assume that the probability of disease exposure is constant over time and unaffected by interventions. For transmissible infectious diseases this is not realistic, and another class of models is required: so-called dynamic models. This study aims to examine the differences between one dynamic and one static model, estimating the effects of therapeutic treatment with antiviral (AV) drugs during an influenza pandemic in the Netherlands. Specifically, we focus on the sensitivity of the cost-effectiveness ratios to model choice, to the assumed drug coverage, and to the value of several epidemiological factors. Therapeutic use of AV drugs is cost-effective compared with non-intervention, irrespective of which model approach is chosen. The findings further show that: (1) the cost-effectiveness ratio according to the static model is insensitive to the size of a pandemic, whereas the ratio according to the dynamic model increases with the size of a pandemic; (2) according to the dynamic model, the cost per infection and the life-years gained per treatment are not constant but depend on the proportion of cases that are treated; and (3) the age-specific clinical attack rates affect the sensitivity of the cost-effectiveness ratio to model choice. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Demand for traditional medicine in Taiwan: a mixed Gaussian-Poisson model approach
HEALTH ECONOMICS, Issue 3 2001
Steven T. Yen
Hurdle count models are used to examine the participation and consumption decisions in Chinese medicine use. Motivated by a household production model, a second censoring mechanism is introduced into existing single-hurdle models, and the resulting specification accommodates conscientious abstainers as well as economic non-consumers, and admits excessive zeros in the sample. In contrast to previous studies that found few predictors, empirical results based on a Taiwanese national sample suggest that Western medicine is a gross substitute for Chinese medicine, and both time price and money price play more important roles than income. Insurance, lifestyle and demographics also determine the use of Chinese medicine. Copyright © 2001 John Wiley & Sons, Ltd. [source]
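A bare-bones hurdle count model in the spirit of the Yen abstract: a logit governs participation and a zero-truncated Poisson governs positive consumption, fitted jointly by maximum likelihood. This is a single-hurdle sketch on simulated data, not the paper's mixed Gaussian-Poisson double-censoring specification.

```python
# Single-hurdle count model: logit participation + zero-truncated Poisson.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
use = rng.binomial(1, expit(0.5 + 1.0 * x))      # participation decision
lam_true = np.exp(0.2 + 0.4 * x)
y = np.where(use == 1, rng.poisson(lam_true) + 1, 0)  # crude positive draw

def negll(p):
    a0, a1, b0, b1 = p
    pr = expit(a0 + a1 * x)                 # P(positive use)
    lam = np.exp(b0 + b1 * x)
    ll0 = np.log(1 - pr[y == 0])            # contribution of zeros
    yp, lp = y[y > 0], lam[y > 0]
    # zero-truncated Poisson log density for the positives
    llp = (np.log(pr[y > 0]) + yp * np.log(lp) - lp
           - gammaln(yp + 1) - np.log(1 - np.exp(-lp)))
    return -(ll0.sum() + llp.sum())

fit = minimize(negll, x0=np.zeros(4), method="BFGS")
print("logit and count coefficients:", fit.x)
```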
Cover Picture: Hierarchically Organized Superstructure Emerging from the Exquisite Association of Inorganic Crystals, Organic Polymers, and Dyes: A Model Approach Towards Suprabiomineral Materials (Adv. Funct. Mater. 9/2005)
ADVANCED FUNCTIONAL MATERIALS, Issue 9 2005
Suprabiomineral materials possessing hierarchically organized superstructures are investigated by Imai and Oaki on p. 1407. Inorganic crystals, organic polymers, and functional dyes have assembled via a simple biomimetic route into a superstructure that contains six different tiers, from the macroscale to the nanoscale. The hierarchy originates from the strong interaction between crystals and polymers and the diffusion-controlled conditions. The versatile role of the polymer is found to be essential for the construction of a superstructure. This approach promises to generate novel types of functional materials with controllable structures and properties. We report a novel hierarchically organized superstructure emerging from an exquisite association of inorganic crystals, organic polymers, and dyes. The resultant K2SO4/poly(acrylic acid) composite includes five different tiers from the nanoscopic to the macroscopic. An additional new tier leading to functionality is formed by the incorporation of organic dyes that are organized in a nanospace. The emergent superstructure and properties are designed through changes in polymer concentration. The multiple roles of the polymer realize the generation of the architecture at each size scale. This model approach should be widely applicable to other systems, allowing for the preparation of innovative materials by an appropriate combination of crystals, polymers, and functional molecules. [source]
Alcohol intoxication effects on visual perception: An fMRI study
HUMAN BRAIN MAPPING, Issue 1 2004
Vince D. Calhoun
We examined the effects of two doses of alcohol (EtOH) on functional magnetic resonance imaging (fMRI) activation during a visual perception task. The Motor-Free Visual Perception Test-Revised (MVPT-R) provides measures of overall visual perceptual processing ability. It incorporates different cognitive elements including visual discrimination, spatial relationships, and mental rotation. We used the MVPT-R to study brain activation patterns in healthy controls (1) sober and (2) at two doses of alcohol intoxication with event-related fMRI. The fMRI data were analyzed using a general linear model approach based upon a model of the time course and a hemodynamic response estimate. Additionally, a correlation analysis was performed to examine dose-dependent amplitude changes. With regard to alcohol-free task-related brain activation, we replicate our previous finding in which SPM group analysis revealed robust activation in visual and visual association areas, frontal eye field (FEF)/dorsolateral prefrontal cortex (DLPFC), and the supplemental motor area (SMA). Consistent with a previous study of EtOH and visual stimulation, EtOH resulted in a dose-dependent decrease in activation amplitude over much of the visual perception network and in a decrease in the maximum contrast-to-noise ratio (in the lingual gyrus). Despite only modest behavioral changes (in the expected direction), significant dose-dependent activation increases were observed in insula, DLPFC, and precentral regions, whereas dose-dependent activation decreases were observed in anterior and posterior cingulate, precuneus, and middle frontal areas. Some areas (FEF/DLPFC/SMA) became more diffusely activated (i.e., increased in spatial extent) at the higher dose. Alcohol thus appears to have both global and local effects upon the neural correlates of the MVPT-R task, some of which are dose dependent. Hum. Brain Mapping 21:15-26, 2004. © 2003 Wiley-Liss, Inc. [source]

A reference model approach to performance monitoring of control loops with applications to irrigation channels
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 10 2005
P. Zhang
In this paper a new method for detection of oscillatory and sluggish controllers is developed. The method is aimed at control systems where rejection of measured load disturbances is the main control objective, and it is based on comparing the actual system output with the output of a reference model. A number of performance indicators are defined, taking the most important factors from a control perspective into consideration. Based on the performance indicators, the performance of the control loops is evaluated. The developed method has been successfully applied to real data from an irrigation channel. The method correctly detected the control loops which needed retuning, and it provided useful information about several aspects of the control performance, such as speed of response, oscillations and interactions between control loops. Copyright © 2005 John Wiley & Sons, Ltd. [source]
(A numerical sketch of this reference-model comparison appears after the next abstract.)

Multiplicative random regression model for heterogeneous variance adjustment in genetic evaluation for milk yield in Simmental
JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 3 2008
M.H. Lidauer
Summary: A multiplicative random regression (M-RRM) test-day (TD) model was used to analyse daily milk yields from all available parities of German and Austrian Simmental dairy cattle. The method to account for heterogeneous variance (HV) was based on the multiplicative mixed model approach of Meuwissen. The variance model for the heterogeneity parameters included a fixed region × year × month × parity effect and a random herd × test-month effect with a within-herd first-order autocorrelation between test-months. Acceleration of variance model solutions after each multiplicative model cycle enabled fast convergence of adjustment factors and reduced total computing time significantly. Maximum likelihood estimation of within-strata residual variances was enhanced by inclusion of approximated information on the loss in degrees of freedom due to estimation of location parameters. This improved heterogeneity estimates for very small herds. The multiplicative model was compared with a model that assumed homogeneous variance. Re-estimated genetic variances, based on Mendelian sampling deviations, were homogeneous for the M-RRM TD model but heterogeneous for the homogeneous random regression TD model. Accounting for HV had a large effect on cow ranking but a moderate effect on bull ranking. [source]
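As flagged under the Zhang et al. irrigation abstract above, here is a minimal numerical sketch of the reference-model comparison: a "measured" closed-loop step response is compared with the response of a first-order reference model and summarized in one normalized error indicator. The reference time constant, the simulated response, and the flagging threshold are all arbitrary choices, not the paper's indicators.

```python
# Reference-model performance check: compare measured output with the
# output of a simple first-order reference model.
import numpy as np

dt = 0.1
t = np.arange(0.0, 60.0, dt)
y_ref = 1.0 - np.exp(-t / 5.0)                  # first-order reference model

# "measured" closed-loop response: slower and slightly oscillatory
y_meas = 1.0 - np.exp(-t / 9.0) * np.cos(0.3 * t)

iae = np.sum(np.abs(y_meas - y_ref)) * dt       # integral of absolute error
indicator = iae / (np.sum(np.abs(y_ref)) * dt)  # normalized error indicator
print("indicator = %.3f ->" % indicator,
      "retune" if indicator > 0.2 else "acceptable")
```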
Age-related change in breeding performance in early life is associated with an increase in competence in the migratory barn swallow Hirundo rustica
JOURNAL OF ANIMAL ECOLOGY, Issue 5 2007
JAVIER BALBONTÍN
Summary
1. We investigated age-related changes in two reproductive traits (laying date and annual fecundity) in barn swallows (Hirundo rustica L.) using a mixed model approach to separate between-individual from within-individual changes in breeding performance with age.
2. We tested predictions of age-related improvements in competence (i.e. the constraint hypothesis) and age-related progressive disappearance of poor-quality breeders (i.e. the selection hypothesis) to explain the age-related increase in breeding performance in early life.
3. Reproductive success increased in early life, reaching a plateau at middle age (e.g. at 3 years of age) and decreasing at older ages (>4 years). Age-related changes in breeding success were due mainly to an effect of female age.
4. The ages of both female and male affected the timing of reproduction. Final linear mixed effect models (LME) for laying date included main and quadratic terms for female and male age, suggesting a deterioration in reproductive performance at older ages for both males and females.
5. We found evidence supporting the constraint hypothesis: increases in competence within individuals with ageing were the most probable cause of the observed increase in breeding performance with age in early life. Two mechanisms were implicated: (1) advance in male arrival date with age provided middle-aged males with better access to mates; yearling males arrived later at the breeding grounds and therefore had limited access to high-quality mates; (2) breeding pairs maintaining bonds for 2 consecutive years (experienced pairs) had higher fecundity than newly formed inexperienced breeding pairs.
6. There was no support for the selection hypothesis, because breeding performance was not correlated with life span.
7. We found a within-individual deterioration in breeding and migratory performance (arrival date) in the oldest age classes, consistent with senescence in these reproductive and migratory traits. [source]
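The mixed-model structure described in the swallow summary, laying date regressed on linear and quadratic age terms with a random effect per individual, can be sketched with MixedLM from statsmodels. The data frame below is simulated, and all column names and effect sizes are hypothetical.

```python
# Linear mixed-model sketch: quadratic age effect on laying date with a
# random intercept per female. Simulated data; names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_females, n_obs = 100, 4
age = rng.integers(1, 7, size=n_females * n_obs).astype(float)
female = np.repeat(np.arange(n_females), n_obs)
u = rng.normal(0, 2, n_females)[female]          # female random effect
laying = 30 - 3 * age + 0.4 * age**2 + u + rng.normal(0, 2, age.size)

df = pd.DataFrame({"laying": laying, "age": age, "female": female})
m = smf.mixedlm("laying ~ age + I(age**2)", df, groups=df["female"]).fit()
print(m.summary())
```

The random intercept is what lets the model separate within-individual ageing trajectories from between-individual quality differences, the distinction at the heart of the constraint-versus-selection test.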
Evaluating the power of monitoring plot designs for detecting long-term trends in the numbers of common guillemots
JOURNAL OF APPLIED ECOLOGY, Issue 3 2006
MICHELLE SIMS
Summary
1. In recent years concerns have been raised regarding the status of the common guillemot Uria aalge in the UK. Numbers have declined in several regions, highlighting the need for continued monitoring of this internationally important population. However, the extent to which the current monitoring scheme is capable of detecting declines, and options for improving efficiency, have received little attention.
2. We investigated the power of different monitoring design options for detecting long-term trends in abundance at a colony of guillemots. The ability to detect trends in abundance was reduced by the large temporal and spatial variability in colony attendance. Taking a linear mixed model approach, we obtained details on the sources and sizes of the variance components using count data collected from monitoring plots on the Isle of May, Scotland, and assessed how best to allocate sampling effort in the light of the count variability structure.
3. Our results indicated that trend detection will be improved by counting birds in more plots rather than by increasing the number of counts at existing plots.
4. The revisit pattern of counts at the monitoring plots during the seasonal counting period had little effect on trend detection power. However, given the practical issues associated with counting guillemots, alternative revisit patterns to the current approach are preferred.
5. For a fixed number of visits per plot, power is strongly influenced by the choice of revisit design if the day-to-day variation in colony attendance is increased.
6. Synthesis and applications. Aspects of the UK seabird monitoring scheme can be improved. Changes to the allocation of sampling effort and the plot-revisit pattern will improve both the statistical power to detect long-term trends and the efficiency of conducting the survey. We stress the importance of considering the structure and magnitude of the count variation in a power analysis, because judicious design decisions depend on the relative magnitude of these variance components. [source]
(A simulation sketch of this plots-versus-visits trade-off appears after the abstracts below.)

Evaluating complex public health interventions: theory, methods and scope of realist enquiry
JOURNAL OF EVALUATION IN CLINICAL PRACTICE, Issue 6 2007
James B. Connelly MD MSc FFPH
The standard models used in the study of complex public health interventions are inadequate. They adopt a simple empiricist theoretical foundation and attempt to graft onto an essentially open social system a contrived laboratory experimentation, typically in the form of a randomized controlled trial. By understanding the ontological and epistemological claims of critical realism, it is possible to transcend the methodological inadequacy of the standard model approach. Critical realism posits a substantive causal theory, an end to fact-value dualism, and a coherent and emancipatory model of social action; all of these features amount to a systematic and compelling account of public health practice and a coherent approach to the evaluation of complex public health interventions. [source]

A common model approach to macroeconomics: using panel data to reduce sampling error
JOURNAL OF FORECASTING, Issue 3 2005
William T. Gavin
Is there a common model inherent in macroeconomic data? Macroeconomic theory suggests that market economies of various nations should share many similar dynamic patterns; as a result, individual country empirical models, for a wide variety of countries, often include the same variables. Yet empirical studies often find important roles for idiosyncratic shocks in the differing macroeconomic performance of countries. We use forecasting criteria to examine the macrodynamic behaviour of 15 OECD countries in terms of a small set of familiar, widely used core economic variables, omitting country-specific shocks. We find this small set of variables and a simple VAR 'common model' strongly support the hypothesis that many industrialized nations have similar macroeconomic dynamics. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Structural adjustment and soil degradation in Tanzania: A CGE model approach with endogenous soil productivity
AGRICULTURAL ECONOMICS, Issue 3 2001
Henrik Wiig
Keywords: CGE model; soil degradation; economic growth; structural adjustment
In this paper, a model of the nitrogen cycle in the soil is incorporated in a computable general equilibrium (CGE) model of the Tanzanian economy, thus establishing a two-way link between the environment and the economy. For a given level of natural soil productivity, profit-maximising farmers choose input levels, and hence production volumes, which in turn influence soil productivity in the following years through the recycling of nitrogen from the residues of roots and stover and the degree of erosion. The model is used to simulate the effects of typical structural adjustment policies, such as a reduction in agro-chemical subsidies and a reduced implicit export tax rate. After 10 years, the result of a joint implementation is a 9% higher gross domestic product (GDP) level compared to the baseline scenario. The effect of soil degradation is found to represent a reduction in the GDP level of more than 5% over the same time period. [source]
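As noted under the Sims et al. guillemot abstract above, the plots-versus-visits question is naturally explored by Monte Carlo: simulate counts with plot-level and day-level variance components, fit a log-linear trend, and count how often it is detected. Every variance value and the rough t-threshold below are invented for illustration, not estimates from the Isle of May data.

```python
# Monte Carlo power sketch: is it better to add plots or add repeat counts?
import numpy as np

rng = np.random.default_rng(5)

def power(n_plots, n_visits, years=10, trend=-0.03, reps=300):
    hits = 0
    for _ in range(reps):
        yr = np.arange(years)
        plot_eff = rng.normal(0, 0.3, n_plots)[:, None, None]   # plot effects
        day_eff = rng.normal(0, 0.2, (n_plots, years, n_visits))  # day effects
        mu = np.exp(5 + trend * yr[None, :, None] + plot_eff + day_eff)
        y = np.log(rng.poisson(mu).mean(axis=(0, 2)) + 1)  # yearly index
        b, a = np.polyfit(yr, y, 1)                        # fitted trend
        se = np.sqrt(np.sum((y - (a + b * yr)) ** 2) / (years - 2)
                     / np.sum((yr - yr.mean()) ** 2))
        hits += abs(b / se) > 2.3          # rough 5% two-sided t-test
    return hits / reps

print("more plots :", power(n_plots=20, n_visits=2))
print("more visits:", power(n_plots=10, n_visits=4))
```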
Neural Network Modeling of Constrained Spatial Interaction Flows: Design, Estimation, and Performance Issues
JOURNAL OF REGIONAL SCIENCE, Issue 1 2003
Manfred M. Fischer
In this paper a novel modular product unit neural network architecture is presented to model singly constrained spatial interaction flows. The efficacy of the model approach is demonstrated for the origin-constrained case of spatial interaction using Austrian interregional telecommunication traffic data. The model requires a global search procedure for parameter estimation, such as the Alopex procedure. A benchmark comparison against the standard origin-constrained gravity model and the two-stage neural network approach suggested by Openshaw (1998) illustrates the superiority of the proposed model in terms of generalization performance as measured by ARV and SRMSE. [source]

Predicting the advent of ascites and other complications in primary biliary cirrhosis: a staged model approach
ALIMENTARY PHARMACOLOGY & THERAPEUTICS, Issue 5 2010
C.-W. CHAN
Aliment Pharmacol Ther, 31, 573-582. Summary Background: Current survival models for primary biliary cirrhosis have limited precision for medium- and long-term survival. Aim: To describe a prognostic model for the advent of complications in primary biliary cirrhosis as a first approach to a staged prognostic model. Methods: From an established database of 289 consecutive primary biliary cirrhosis patients referred to the Royal Free Hospital over 12 years (mean follow-up of 4.1 years), baseline characteristics at referral were evaluated by Cox proportional hazards regression modelling. Results: The following complications occurred de novo: 85 ascites/peripheral oedema, 40 oesophagogastric varices, 63 encephalopathy, 29 spontaneous bacterial peritonitis and/or septicaemia, 59 symptomatic urinary tract infections. Age, albumin, log10(bilirubin), presence of ascites at referral, variceal bleeding within 6 weeks before referral, and detection of oesophagogastric varices at or before referral were significant at multivariate analysis, with different combinations and coefficients for each complication. The model for predicting ascites and/or peripheral oedema best fitted the observed data (ROC = 0.7682, S.E. = 0.0385). Conclusions: The known prognostic factors in primary biliary cirrhosis also model the advent of complications. In view of the prognostic importance of ascites and its more robust statistical model, ascites and/or peripheral oedema could represent, following validation, the most suitable staged model in primary biliary cirrhosis to improve precision in survival modelling. [source]
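The Chan et al. abstract reports a Cox-based prognostic model assessed by ROC; as a simplified stand-in, the sketch below fits a logistic model for de novo ascites on age, albumin and log10(bilirubin) and reports an in-sample AUC. The simulated cohort and all coefficients are made up, and logistic regression replaces the paper's Cox modelling.

```python
# Logistic stand-in for a complication-prediction model, scored by AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n = 289
age = rng.normal(58, 10, n)
albumin = rng.normal(38, 5, n)
log_bili = rng.normal(1.2, 0.4, n)

# hypothetical true risk: older age, lower albumin, higher bilirubin
lin = -2 + 0.03 * (age - 58) - 0.08 * (albumin - 38) + 1.5 * (log_bili - 1.2)
ascites = rng.binomial(1, 1 / (1 + np.exp(-lin)))

X = np.column_stack([age, albumin, log_bili])
clf = LogisticRegression().fit(X, ascites)
print("AUC:", roc_auc_score(ascites, clf.predict_proba(X)[:, 1]))
```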
Design of WiFi/WiMAX dual-band E-shaped patch antennas through cavity model approach
MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 2 2010
Heng-Tung Hsu
The design of a dual-band, single-patch microstrip antenna covering 2.4 and 3.5 GHz for WiFi and WiMAX applications, based on an E-shaped patch, is presented. Although cavity model analysis is included in the design procedure, the slotted configuration is treated as the perturbed cavity to characterize the resonant frequencies of the corresponding modes. Additionally, the feed point position is determined through the field distribution resulting from the modal analysis. A new equivalent circuit model based on coupled-resonator theory is proposed for analysis purposes. The relationship between cavity model analysis and antenna resonances is further evidenced by the surface current distributions on the conductors. © 2009 Wiley Periodicals, Inc. Microwave Opt Technol Lett 52: 471-474, 2010; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.24954 [source]

A halo model of galaxy colours and clustering in the Sloan Digital Sky Survey
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 3 2009
Ramin A. Skibba
Successful halo-model descriptions of the luminosity dependence of clustering distinguish between the central galaxy in a halo and all the others (satellites). To include colours, we provide a prescription for how the colour-magnitude relation of centrals and satellites depends on halo mass. This follows from two assumptions: (i) the bimodality of the colour distribution at a fixed luminosity is independent of halo mass and (ii) the fraction of satellite galaxies which populate the red sequence increases with luminosity. We show that these two assumptions allow one to build a model of how galaxy clustering depends on colour without any additional free parameters beyond those required to model the luminosity dependence of galaxy clustering. We then show that the resulting model is in good agreement with the distribution and clustering of colours in the Sloan Digital Sky Survey, both by comparing the predicted correlation functions of red and blue galaxies with measurements and by comparing the predicted colour-mark correlation function with the measured one. Mark correlation functions are powerful tools for identifying and quantifying correlations between galaxy properties and their environments: our results indicate that the correlation between halo mass and environment is the primary driver for correlations between galaxy colours and the environment; additional correlations associated with halo 'assembly bias' are relatively small. Our approach shows explicitly how to construct mock catalogues which include both luminosities and colours, thus providing realistic training sets for, e.g., galaxy cluster-finding algorithms. Our prescription is the first step towards incorporating the entire spectral energy distribution into the halo model approach. [source]
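The central/satellite bookkeeping behind halo-model approaches like the one in the Skibba abstract can be written down compactly: mean occupation numbers versus halo mass plus, per the abstract's assumption (ii), a satellite red fraction that rises with luminosity. The functional forms and parameter values below are generic textbook choices, not the paper's calibration.

```python
# Halo occupation distribution (HOD) ingredients: mean central and
# satellite counts vs. halo mass, and an illustrative red fraction.
import numpy as np
from scipy.special import erf

logM = np.linspace(11, 15, 9)               # log10 halo mass [Msun/h]
logMmin, sigma = 12.0, 0.3                  # central occupation parameters
M0, M1, alpha = 10**12.2, 10**13.3, 1.0     # satellite occupation parameters

N_cen = 0.5 * (1 + erf((logM - logMmin) / sigma))              # mean centrals
N_sat = N_cen * np.clip((10**logM - M0) / M1, 0, None)**alpha  # mean satellites

def red_fraction_sat(lum):
    # assumption (ii) of the abstract: the satellite red fraction rises with
    # luminosity; the logistic form and scale here are illustrative choices
    return 1.0 / (1.0 + np.exp(-(lum - 1.0)))

print(np.round(N_cen, 3), np.round(N_sat, 2))
print(red_fraction_sat(np.array([0.5, 1.0, 2.0])))
```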
Assessing fear of falling: Can a short version of the Activities-specific Balance Confidence scale be useful?
MOVEMENT DISORDERS, Issue 12 2006
Chava Peretz PhD
We present the process of further validation of the 16-item Activities-specific Balance Confidence scale (ABC-16) and a short version (ABC-6) derived by us, to assess balance confidence and fear of falling (FOF). The ABC-16 was administered to three groups who were anticipated to have a range of balance confidence: 70 patients with higher-level gait disorders (HLGDs), 68 healthy controls, and 19 patients with Parkinson's disease (PD). Item reduction was based on identifying the items with the lowest scores (highest FOF) among the patients. Internal consistency and discriminative validity were assessed using Cronbach's alpha and logistic regression, respectively. The intraclass correlation (ICC) between the short and long versions was assessed using a mixed model approach, accounting for the difference between the scores of the two versions. Six items were found to reflect the most frightening conditions, especially in the patient groups, and to form the short version (ABC-6). Internal consistency of the ABC-16 and ABC-6 was high in all three groups: Cronbach's alpha was between 0.83 and 0.91 and between 0.81 and 0.90, respectively. Compared to the control group, the sensitivity of the ABC-16 was 96% for identification of patients with HLGDs (greatest FOF) and 58% for identification of PD patients (moderate FOF), based only on the ABC scores. Similar values were obtained for the short version, i.e., 91% for HLGDs and 53% for PD patients. ICCs between the short and long versions were 0.88 (HLGDs), 0.83 (PD), and 0.78 (controls). To conclude, the short version of the ABC has properties analogous to the parent questionnaire and is apparently useful in assessing FOF. © 2006 Movement Disorder Society [source]

An investigation of tests for linearity and the accuracy of likelihood based inference using random fields
THE ECONOMETRICS JOURNAL, Issue 2 2002
Christian M. Dahl
Summary: We analyze the random field regression model approach recently suggested by Hamilton (2001, Econometrica, 69, 537-73). We show through extensive simulation studies that although the random field approach is indeed very closely related to the non-parametric spline smoother, it seems to offer several advantages over the latter. First, tests for neglected nonlinearity based on Hamilton's random field approach seem to be more powerful than existing test statistics developed within the context of the multivariate spline smoother approach. Second, the convergence properties of the random field approach in limited samples appear to be significantly better than those of the multivariate spline smoother. Finally, when compared to the popular neural network approach, the random field approach also performs very well. These results provide strong support for the view of Harvey and Koopman (2000, Econometrics Journal, 3, 84-107) that model-based kernels or splines have a sounder statistical justification than those typically used in non-parametric work. [source]
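Hamilton's random field regression treats the unknown conditional mean as a realization of a random field, which is what makes it close kin to spline and kernel smoothers, as the Dahl abstract notes. The sketch below is a generic Gaussian-process regression on simulated data to illustrate that family; it is not Hamilton's exact parameterization, and the kernel length-scale and noise level are arbitrary.

```python
# Generic Gaussian-process (random-field flavoured) regression sketch.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0, 4, 60)
y = np.sin(2 * x) + 0.3 * rng.normal(size=x.size)  # nonlinear truth + noise

def k(a, b, ell=0.5):
    # squared-exponential covariance of the latent field
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

K = k(x, x) + 0.3**2 * np.eye(x.size)       # kernel matrix + noise variance
xs = np.linspace(0, 4, 200)
mean = k(xs, x) @ np.linalg.solve(K, y)     # posterior mean of the field
print(mean[:3])
```

A linearity test in this setting amounts to asking whether the field component improves on a purely linear mean, which is the comparison the abstract's simulations formalize.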