Primary Interest (primary + interest)



Selected Abstracts


Consistent Tests for Stochastic Dominance

ECONOMETRICA, Issue 1 2003
Garry F. Barrett
Methods are proposed for testing stochastic dominance of any pre-specified order, with primary interest in the distributions of income. We consider consistent Kolmogorov-Smirnov-type tests of the complete set of restrictions that relate to the various forms of stochastic dominance. For tests of stochastic dominance beyond first order, we propose and justify a variety of approaches to inference based on simulation and the bootstrap. We compare these approaches to one another, and to alternative approaches based on multiple comparisons, in the context of a Monte Carlo experiment and an empirical example. [source]
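The testing framework described above can be illustrated with a toy sketch: a one-sided Kolmogorov-Smirnov-type statistic for first-order dominance, with a naive pooled-sample bootstrap for the p-value. This is a simplified illustration of the general idea only, not the authors' exact test or resampling scheme; the function names are ours.

```python
import numpy as np

def sd1_ks_stat(x, y):
    """One-sided KS-type statistic for first-order stochastic dominance of
    F_x over F_y: a scaled sup_z (F_x(z) - F_y(z)). Values near zero are
    consistent with x dominating y."""
    grid = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    scale = np.sqrt(len(x) * len(y) / (len(x) + len(y)))
    return scale * np.max(Fx - Fy)

def bootstrap_pvalue(x, y, n_boot=499, seed=0):
    """Naive pooled-sample bootstrap approximation to the null distribution
    of the statistic (resampling both samples from the pooled data)."""
    rng = np.random.default_rng(seed)
    obs = sd1_ks_stat(x, y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_boot):
        xb = rng.choice(pooled, size=len(x), replace=True)
        yb = rng.choice(pooled, size=len(y), replace=True)
        if sd1_ks_stat(xb, yb) >= obs:
            count += 1
    return (count + 1) / (n_boot + 1)
```

A small statistic (and large p-value) is evidence consistent with dominance of the first sample over the second; the paper's contribution is making such procedures consistent against all alternatives, which this sketch does not attempt to reproduce.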


Ratio estimators in adaptive cluster sampling

ENVIRONMETRICS, Issue 6 2007
Arthur L. Dryver
Abstract In most surveys, data are collected on many items rather than just the one variable of primary interest. Making the most of the information collected is an issue of both practical and theoretical interest. Ratio estimators of the population mean or total are often more efficient than estimators that ignore the auxiliary information. Ratio estimation is straightforward under simple random sampling, but often not under more complicated sampling designs, such as adaptive cluster sampling. A serious concern with ratio estimators under many complicated designs is lack of independence, a necessary assumption. In this article, we propose two new ratio estimators under adaptive cluster sampling, one of which is unbiased for adaptive cluster sampling designs. We compare the efficiencies of the new estimators to existing unbiased estimators for adaptive cluster sampling, which do not utilize the auxiliary information, and to the conventional ratio estimator under simple random sampling without replacement. The results show that the proposed estimators can be considered a robust alternative to the conventional ratio estimator, especially when the correlation between the variable of interest and the auxiliary variable is not high enough for the conventional ratio estimator to perform satisfactorily. Copyright © 2007 John Wiley & Sons, Ltd. [source]
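As a point of reference for the discussion above, the conventional ratio estimator under simple random sampling is easy to state: estimate the ratio R = ybar/xbar from the sample and scale it by the known population total (or mean) of the auxiliary variable. The sketch below shows only this textbook estimator; the paper's adaptive-cluster-sampling estimators are more involved, and the function names here are ours.

```python
import numpy as np

def ratio_estimate_total(y, x, X_total):
    """Conventional ratio estimator of the population total of y under
    simple random sampling: T_hat = (sum(y) / sum(x)) * X_total, where
    X_total is the known population total of the auxiliary variable x."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    return y.sum() / x.sum() * X_total

def ratio_estimate_mean(y, x, X_mean):
    """Same estimator scaled by the known population mean of x."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    return y.sum() / x.sum() * X_mean
```

The efficiency gain over the expansion estimator arises when y and x are strongly positively correlated, which is exactly the condition the abstract flags as necessary for the conventional estimator to perform well.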


Quantitative analysis of the scientific literature on acetaminophen in medicine and biology: a 2003–2005 study

FUNDAMENTAL & CLINICAL PHARMACOLOGY, Issue 2 2009
Claude Robert
Abstract This study quantifies the utilization of acetaminophen in life sciences and clinical medicine using bibliometric indicators. A total of 1626 documents involving acetaminophen published by 74 countries during 2003–2005 in the Thomson Scientific Life Sciences and Clinical Medicine collections were identified and analyzed. The USA leads in the number of publications, followed by the UK and other industrialized countries, including France, Japan and Germany; the presence of countries such as China, India and Turkey among the top 15 is noteworthy. The European Union stands as a contributor comparable to the USA, both in the number of publications and in the profile of papers distributed among subcategories of the Life Sciences and Clinical Medicine disciplines. All documents were published in 539 different journals, the most prolific of which were related to pharmacology and/or pharmaceutics. All aspects of acetaminophen (chemistry, pharmacokinetics, metabolism, etc.) were studied, with primary interest in therapeutic use (42%) and adverse effects (28%), a large part of the latter focusing on acetaminophen hepatotoxicity. This quantitative overview provides insight into the interest of the scientific community in this analgesic and complements the various review documents that regularly appear in the scientific literature. [source]


A modelling strategy for the analysis of clinical trials with partly missing longitudinal data

INTERNATIONAL JOURNAL OF METHODS IN PSYCHIATRIC RESEARCH, Issue 3 2003
Ian R. White
Abstract Standard statistical analyses of randomized controlled trials with partially missing outcome data often exclude valuable information from individuals with incomplete follow-up. This may lead to biased estimates of the intervention effect and loss of precision. We consider a randomized trial with a repeatedly measured outcome, in which the value of the outcome on the final occasion is of primary interest. We propose a modelling strategy in which the model is successively extended to include baseline values of the outcome, then intermediate values of the outcome, and finally values of other outcome variables. Likelihood-based estimation of random effects models is used, allowing the incorporation of data from individuals with some missing outcomes. Each estimated intervention effect is free of non-response bias under a different missing-at-random assumption. These assumptions become more plausible as the more complex models are fitted, so we propose using the trend in estimated intervention effects to assess the nature of any non-response bias. The methods are applied to data from a trial comparing intensive case management with standard case management for severely psychotic patients. All models give similar estimates of the intervention effect and we conclude that non-response bias is likely to be small. Copyright © 2003 Whurr Publishers Ltd. [source]


Estimation of variance components due to imprinting effects with DFREML and VCE: results of a simulation study

JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 1 2002
A. ESSL
Treating gametes as homozygous diploid individuals, TIER and SÖLKNER (Theor. Appl. Genet. 85: 868–872, 1993) proposed a method that allows available computer programs based on a common animal model to be used to estimate variance components caused by imprinting effects. Despite some relevant model restrictions, this approach has already been used in several field data analyses via an adapted version of the widely used DFREML computer program, subsequently denoted DFREMLa. The main objectives of this study were to ascertain the properties of DFREMLa by computer simulation and to examine other alternative estimation approaches. The most important results may be summarized as follows: (1) Treating gametes as homozygous diploid individuals has the consequence that one-half of the actually realized gametic effect is entirely lost in variance component estimation. Thus, an additional adjustment of the phenotypic variance calculated by DFREMLa is necessary to obtain correct values of the estimated variance component ratios. (2) Adjusted DFREMLa estimates yielded correct results when animals were unselected and only maternal or only paternal imprinting (not both simultaneously) occurred. (3) When the model did not adequately account for the additive genetic component within a maternal lineage, significant upward biases in the cytoplasmic component were observed. (4) The use of a simple dam and sire model with appropriate relationship matrices can be recommended when only the difference between maternal and paternal imprinting effects is of primary interest and the covariance between maternal half-sibs is not substantially increased by common environmental effects. (5) An adequate estimation of variance components for all possible imprinting situations requires the use of an animal model augmented by both maternal and paternal gametic effects. Unfortunately, a computer program based on such a model does not yet exist.
Estimation of variance components for imprinting effects using DFREML and VCE: results of a simulation study. TIER and SÖLKNER (Theor. Appl. Genet. 85: 868–872, 1993) proposed a method for estimating imprinting-related variance components that can be carried out with an easily adapted computer program based on a common animal model, by treating gametes as homozygous diploid individuals. Although this approach has some practically relevant restrictions, it has already been used in several field data analyses. For this purpose, a correspondingly adapted version of the widely used DFREML program was employed, referred to below as DFREMLa. The aim of the present study was to examine the properties of DFREMLa under various imprinting situations and to assess the usefulness of other possible estimation approaches. (1) If gametes are treated as homozygous diploid individuals, then half of the actually effective gametic effect is completely lost when estimating variance components with DFREMLa. The results reported by DFREMLa must therefore be adjusted afterwards in order to obtain correct estimates for all those ratios of variance components that have the total phenotypic variance in the denominator. (2) The adjusted DFREMLa estimates gave correct results in all cases where the animals were unselected and either only maternal or only paternal imprinting (not both simultaneously) occurred. (3) All models in which the additive genetic component within a maternal lineage was not adequately accounted for led to systematic overestimation of the cytoplasmic variance component. (4) If interest lies only in the variance component that can be explained by differing strengths of maternal and paternal imprinting, the use of simple sire or dam models can also be recommended, provided that the covariance between maternal half-sibs is not inflated by a common environment. (5) Estimation of variance components adequate for all imprinting situations requires an animal model extended by both imprinting-related gametic effects. Unfortunately, no suitable computer program currently exists for this purpose. [source]


Functional and structural properties of stannin: Roles in cellular growth, selective toxicity, and mitochondrial responses to injury

JOURNAL OF CELLULAR BIOCHEMISTRY, Issue 2 2006
M.L. Billingsley
Abstract Stannin (Snn) was discovered using subtractive hybridization methodology designed to find gene products related to selective organotin toxicity and apoptosis. The cDNAs for Snn were first isolated from brain tissues sensitive to trimethyltin, and were subsequently used to localize, characterize, and identify genomic DNA and other gene products of Snn. Snn is a highly conserved, 88-amino-acid protein found primarily in vertebrates. There is minor divergence in the C-terminal sequence between amphibians and primates, but nearly complete conservation of the first 60 residues in all vertebrates sequenced to date. Snn is a membrane-bound protein and is localized, in part, to the mitochondria and other vesicular organelles, suggesting that both localization and conservation are significant for the overall function of the protein. The structure of Snn in a micellar environment and its architecture in lipid bilayers have been determined using a combination of solution and solid-state NMR, respectively. The Snn structure comprises a single transmembrane domain (residues 10–33), a 28-residue linker region (residues 34–60) that contains a conserved CXC metal-binding motif and a putative 14-3-3ζ binding region, and a cytoplasmic helix (residues 61–79), which is partially embedded in the membrane. Of primary interest is understanding how this highly conserved peptide, with an interesting structure and cellular localization, transmits both normal and potentially toxic signals within the cell. Evidence to date suggests that organotins such as trimethyltin interact with the CXC region of Snn, which is vicinal to the putative 14-3-3 binding site. In vitro transfection analyses and microarray experiments have suggested a possible role of Snn in several key signaling systems, including activation of the p38-ERK cascade, p53-dependent pathways, and 14-3-3ζ protein-mediated processes. TNF-α can induce Snn mRNA expression in endothelial cells in a PKC-dependent manner. Studies with Snn siRNA suggest that this protein may be involved in growth regulation, since inhibition of Snn expression alone leads to reduced endothelial cell growth and induction of COP-1, a negative regulator of p53 function. A key piece of the puzzle, however, is how and why such a highly conserved protein, localized to mitochondria, interacts with other regulatory proteins to alter growth and apoptosis. By knowing the structure, location, and possible signaling pathways involved, we propose that Snn constitutes an important sensor of mitochondrial damage and plays a key role in mediating cross-talk between mitochondrial and nuclear compartments in specific cell types. J. Cell. Biochem. 98: 243–250, 2006. © 2006 Wiley-Liss, Inc. [source]


Factors affecting the uptake of new medicines in secondary care , a literature review

JOURNAL OF CLINICAL PHARMACY & THERAPEUTICS, Issue 4 2008
D. Chauhan BPharm MRPharmS MSc
Summary Background and Objective: The rate of uptake of new medicines in the UK is slower than in many other OECD countries. The majority of new medicines are introduced initially in secondary care and prescribed by specialists. However, the reasons for the relatively low prescribing levels are poorly understood. This review explores the determinants of uptake of new medicines in secondary care. Methods: Nine electronic databases were searched covering the period 1992–2006. Once the searches had been run, records were downloaded and those which evaluated uptake of new medicines in secondary care were identified. UK studies were of primary interest, although research conducted in other countries was also reviewed if relevant. With the exception of 'think pieces', eligibility was not limited by study design. Studies published in languages other than English were excluded from the review. Determinants of uptake in secondary care were classified using Bonair and Persson's typology for determinants of the diffusion of innovation. Results: Almost 1400 records were screened for eligibility, and 29 studies were included in the review. Prescribing of new medicines in secondary care was found to be subject to a number of interacting influences. The support structures which exist within secondary care facilitate access to other colleagues and shape prescribing practices. Clinical trial investigators and physicians who sit on decision-making bodies such as Drug and Therapeutics Committees (DTCs) appear to have a special influence, owing to their proximity to the research and their understanding of the evidence base. Pharmaceutical representatives may also influence prescribing decisions through funding of meetings and academic detailing, but clinicians are wary of potential bias. Little evidence on the influence of patients upon prescribing decisions was identified. The impact of clinical guidelines has been variable. Some guidelines have significantly increased the uptake of new medicines, but others have had little discernible impact despite extensive dissemination. However, given the increasing influence of the National Institute for Health and Clinical Excellence, guidelines may become more important. The impact of financial prescribing incentives on secondary care prescribing is unclear. Although cost and budget may influence hospital prescribing of new drugs, they are of secondary importance to the safety and effectiveness profile of the medicine. If a drug has a novel mechanism of action, or belongs to a class with few alternatives, clinicians are more likely to consider it favourably as a prescribing option. Conclusions: Although price does not appear to be a primary factor in prescribing decision-making, secondary care has long required formal purchasing decisions through the DTC, which differentiates it from primary care. This, in addition to increasing pressures for cost-effectiveness within the NHS, means that cost will figure more prominently in clinicians' thinking. As a result, guidelines are more likely to be implemented through the strong professional networks that exist within secondary care, and although the influence of patients has not been addressed by the literature, they are likely to have an increasing input into the prescribing decision, given the importance of patient involvement in current UK policy. [source]


Facial Soft Tissue Depths in Craniofacial Identification (Part II): An Analytical Review of the Published Sub-Adult Data

JOURNAL OF FORENSIC SCIENCES, Issue 6 2008
Carl N. Stephan Ph.D.
Abstract: Prior research indicates that while statistically significant differences exist between subcategories of the adult soft tissue depth data, the magnitudes of difference are small and possess little practical meaning once measurement errors and variations between measurement methods are considered. These findings raise questions as to which variables may or may not hold meaning for the sub-adult data. Of primary interest is the effect of age, as these differences have the potential to surpass the magnitude of measurement error. Data from the five studies in the literature on sub-adults that describe values for single-integer age groups were pooled and differences across the ages examined. From 1 to 18 years, most soft tissue depth measurements increased by less than 3 mm. These results suggest that dividing the data for children into more than two age groups is unlikely to hold many advantages. Data were therefore split into two groups, with the division point corresponding to the mid-point of the observed trends and main data density (0–11 and 12–18 years; division point = 11.5 years). Published sub-adult data from seven further studies that reported broader age groups were pooled with the data above to produce the final tallied soft tissue depth tables. These tables hold the advantages of increased sample sizes (pogonion has greater than 1770 individuals for either age group) and increased levels of certainty (as random and opposing systematic errors specific to each independent study should average out when the data are combined). [source]


Estimation of the Dominating Frequency for Stationary and Nonstationary Fractional Autoregressive Models

JOURNAL OF TIME SERIES ANALYSIS, Issue 5 2000
Jan Beran
This paper was motivated by the investigation of certain physiological series for premature infants. The question was whether the series exhibit periodic fluctuations with a certain dominating period. The observed series are nonstationary and/or have long-range dependence. The assumed model is a Gaussian process Xt whose mth difference Yt = (1 − B)^m Xt is stationary with a spectral density f that may have a pole (or a zero) at the origin. The problem addressed in this paper is the estimation of the frequency λmax where f achieves its largest local maximum in the open interval (0, π). The process Xt is assumed to belong to a class of parametric models, characterized by a parameter vector θ, defined in Beran (1995). An estimator of λmax is proposed and its asymptotic distribution is derived, with θ estimated by maximum likelihood. In particular, m and a fractional differencing parameter that models long memory are estimated from the data. Model choice is also incorporated. Thus, within the proposed framework, a data-driven procedure is obtained that can be applied in situations where the primary interest is in estimating a dominating frequency. A simulation study illustrates the finite-sample properties of the method. In particular, for short series, estimation of λmax is difficult if the local maximum occurs close to the origin. The results are illustrated by two of the data examples that motivated this research. [source]
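A crude nonparametric stand-in for the estimand described above is the location of the largest periodogram ordinate away from the origin. The paper's actual estimator is parametric and accounts for long memory and differencing, which this sketch ignores; it is only meant to make the notion of a "dominating frequency" concrete.

```python
import numpy as np

def dominating_frequency(y):
    """Return the Fourier frequency in (0, pi] at which the periodogram
    of the (assumed stationary) series y is largest. A naive stand-in
    for a parametric dominating-frequency estimator."""
    y = np.asarray(y, float)
    n = len(y)
    y = y - y.mean()                                  # drop the zero frequency
    freqs = 2 * np.pi * np.arange(1, n // 2 + 1) / n  # Fourier frequencies
    pgram = np.abs(np.fft.rfft(y)[1:n // 2 + 1]) ** 2 / (2 * np.pi * n)
    return freqs[np.argmax(pgram)]
```

For a series dominated by a single sinusoid, this picks out the Fourier frequency nearest the true one; as the abstract notes, the hard cases are peaks close to the origin, where a spectral pole from long memory can mask the local maximum.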


Interest in participating in clinical research: A study of essential tremor patients

MOVEMENT DISORDERS, Issue 1 2007
Elan D. Louis MD
Abstract Enrolling essential tremor (ET) patients in clinical research can be challenging. Investigators can maximize recruitment by targeting patient subgroups with greater interest in participation. Nothing has been published on factors that are associated with higher levels of interest in participation. The objective of this study was to identify factors associated with higher levels of interest in participating in clinical research on ET. A total of 149 ET patients were questioned about level of interest in participating in future research. Two questions were used, although one was of primary interest. Interest was rated from 0 to 10 (maximal). Data were collected on demographic factors, family history, and tremor-related disability. Tremor severity was assessed. The mean level of interest was 8.0 ± 2.3. Level of interest was not related to age of tremor onset, tremor duration, tremor severity, extent of tremor-related disability, or use of tremor medication. Level of interest was related to family history of tremor (P < 0.05), concern that other family members might develop tremor (P < 0.05), >2 versus 0 live births in women (P < 0.05), the view that the tremor worsens with age (P < 0.05), and presence of head tremor (P = 0.05). A variety of factors were identified that were associated with greater interest in participating in clinical research. These observations should be assessed in additional patient samples. Investigators may use our observations to identify and target patients for clinical trials and other research. © 2006 Movement Disorder Society [source]


Power and sample size considerations in clinical trials with competing risk endpoints

PHARMACEUTICAL STATISTICS: THE JOURNAL OF APPLIED STATISTICS IN THE PHARMACEUTICAL INDUSTRY, Issue 3 2006
Ellen Maki
Abstract In clinical trials with a time-to-event endpoint, subjects are often at risk for events other than the one of interest. When the occurrence of one type of event precludes observation of any later events, or alters the probability of subsequent events, the situation is one of competing risks. During the planning stage of a clinical trial with competing risks, it is important to take all possible events into account. This paper gives expressions for the power and sample size for competing risks based on a flexible parametric Weibull model. Nonuniform accrual to the study is considered, and an allocation ratio other than one may be used. Results are also provided for the case where two or more of the competing risks are of primary interest. Copyright © 2006 John Wiley & Sons, Ltd. [source]
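To make the flavor of such planning calculations concrete, here is a simplified sketch (not the paper's formulas): with cause-specific Weibull hazards sharing a common shape, the probability of observing a cause-j event by a fixed follow-up time has a closed form, and a Schoenfeld-style required event count can be converted into a total sample size. The parametrization, 1:1 allocation, and helper names are our own assumptions.

```python
from math import exp, log, ceil
from statistics import NormalDist

def prob_cause_event(lam, gamma, tau, j):
    """P(observe a cause-j event by time tau) under cause-specific Weibull
    hazards h_j(t) = lam[j] * gamma * t**(gamma - 1) with shared shape gamma:
    equals (lam[j] / sum(lam)) * (1 - exp(-sum(lam) * tau**gamma))."""
    total = sum(lam)
    return lam[j] / total * (1 - exp(-total * tau ** gamma))

def sample_size(hr, lam_ctrl, gamma, tau, alpha=0.05, power=0.80):
    """Schoenfeld-style total sample size (1:1 allocation) to detect a
    cause-1 hazard ratio `hr`, dividing the required event count by the
    arm-averaged probability of observing a cause-1 event by time tau."""
    z = NormalDist().inv_cdf
    events = 4 * (z(1 - alpha / 2) + z(power)) ** 2 / log(hr) ** 2
    lam_trt = [lam_ctrl[0] * hr] + list(lam_ctrl[1:])
    p = 0.5 * (prob_cause_event(lam_ctrl, gamma, tau, 0)
               + prob_cause_event(lam_trt, gamma, tau, 0))
    return ceil(events / p)
```

The competing-risk rates enter through the event probability: a high rate of the competing event shrinks the chance of observing the event of interest and inflates the required sample size, which is the planning point the abstract emphasizes.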


Annotation: Economic evaluations of child and adolescent mental health interventions: a systematic review

THE JOURNAL OF CHILD PSYCHOLOGY AND PSYCHIATRY AND ALLIED DISCIPLINES, Issue 9 2005
Renée Romeo
Background: Recognition has grown over recent years of the need for economic information on the impacts of child and adolescent mental health problems and the cost-effectiveness of interventions. Methods: A range of electronic databases were examined using a predefined search strategy to identify economic studies which focused on services, pharmacological interventions and other treatments for children and adolescents with a diagnosed mental health problem or identified as at risk of mental illness. Published studies were included in the review if they assessed both costs and outcomes, with cost-effectiveness being the primary interest. Studies meeting the criteria for inclusion were assessed for quality. Results: There are still relatively few economic evaluations in this field. Behavioural disorders have received relatively greater attention in economic evaluations of child and adolescent mental health. These studies tentatively suggest child behavioural gains and parent satisfaction from parent and child training programmes, although the cost-effectiveness of the location of delivery for behavioural therapies is less clear. In general, the quality of the economic evaluations was limited by small sample sizes, constrained measurement of costs, narrow perspectives and over-simple statistical and econometric methods. Conclusion: Economic evaluations in the field of child and adolescent mental health interventions are few in number and generally poor in quality, although the number of studies being undertaken now appears to be rising. [source]


TESTICULAR TORSION: TIME IS THE ENEMY

ANZ JOURNAL OF SURGERY, Issue 6 2000
Patrick J. Dunne
Background: The acute scrotum is a diagnostic dilemma, and testicular torsion is of primary interest because of its consequences for the patient's fertility and the medico-legal issues it raises for the surgeon. The present study aimed to correlate the operative findings of patients with suspected testicular torsion with certain clinical variables and investigations, to see whether diagnosis and outcome could be improved. Methods: A total of 99 patients underwent scrotal exploration for suspected testicular torsion at the Royal Brisbane Hospital between 1990 and 1995. Colour Doppler ultrasound, white blood count and urine microscopy results were documented, along with the patient's age and the duration of testicular pain. Results: Fifty-six patients were found to have torsion, and the testicular loss rate was 23%. Patients who experienced testicular pain for longer than 12 h had a testicular loss rate of 67%. A negative urine microscopy was suggestive of testicular torsion, but was not diagnostic. The white blood count did not aid in the diagnosis. Colour Doppler ultrasound of the scrotum was used on nine occasions, with three false negative results and a sensitivity of only 57%. Conclusions: Time is the enemy when managing the acute scrotum. No investigation improves on clinical diagnosis enough to warrant any delay in definitive surgical intervention. [source]


Focus group composition: a comparison between natural and constructed groups

AUSTRALIAN AND NEW ZEALAND JOURNAL OF PUBLIC HEALTH, Issue 2 2001
Julie Leask
Objective: To provide insight into the effects of focus group composition. Method: In an early phase of an ongoing study of parental responses to messages about childhood immunisation, we conducted four focus groups: two with participants who had never met before (constructed groups) and two with participants who were part of a pre-established first-time mothers' group (natural groups). Results: Marked differences were noted in group dynamics, depth of interaction and diversity between groups. Discussions in constructed groups were animated and enthusiastic, expressed more divergent views and articulated greater complexities of the topic. Discussions in natural groups were generally flatter and less enthusiastic, displaying a higher level of apparent conformity to conventional wisdom. The need to protect other participants from potentially disturbing information about vaccination was expressed across groups, but acted to censor the natural groups, where participants knew more of each other's sensitivities. Implications: Insight into the factors contributing to such differences may enhance understanding of the contexts in which constructed groups are more appropriate. The processes of social censorship may be of primary interest to the researcher. However, where it is paramount to elicit a range of opinions about a potentially controversial topic, we suggest that natural groups in the delicate stage of norming be avoided. The peculiarities of each individual research circumstance are best explored in pilot studies. [source]


Modelling Students at Risk

AUSTRALIAN ECONOMIC PAPERS, Issue 2 2004
Diane M. Dancer
Using a sample of several hundred students, we model progression in a first-year econometrics course. Our primary interest is in determining the usefulness of these models in the identification of 'students at risk'. This interest highlights the need to distinguish between students who drop the course and those who complete it but ultimately fail. Such models allow identification and quantification of the factors that are most important in determining student progression, and thus make them a potentially useful aid in educational decision making. Our main findings are that Tertiary Entrance Rank (TER), mathematical aptitude, being female and attendance at tutorials are all good predictors of success, but among these factors only attendance is significant in discriminating between students who fail and those who discontinue. There are also differences across degree programs; in particular, students in Combined Law are very likely to pass but, if they are at risk, are much more likely to discontinue than to fail. [source]


Flexible Estimation of Differences in Treatment-Specific Recurrent Event Means in the Presence of a Terminating Event

BIOMETRICS, Issue 3 2009
Qing Pan
Summary In this article, we consider the setting where the event of interest can occur repeatedly for the same subject (i.e., a recurrent event; e.g., hospitalization) and may be stopped permanently by a terminating event (e.g., death). Among the different ways to model recurrent/terminal event data, the marginal mean (i.e., averaging over the survival distribution) is of primary interest from a public health or health economics perspective. Often, the difference between treatment-specific recurrent event means will not be constant over time, particularly when treatment-specific differences in survival exist. In such cases, it makes more sense to quantify treatment effect based on the cumulative difference in the recurrent event means, as opposed to the instantaneous difference in the rates. We propose a method that compares treatments by separately estimating the survival probabilities and recurrent event rates given survival, then integrating to get the mean number of events. The proposed method combines an additive model for the conditional recurrent event rate and a proportional hazards model for the terminating event hazard. The treatment effects on survival and on recurrent event rate among survivors are estimated in constructing our measure and explain the mechanism generating the difference under study. The example that motivates this research is the repeated occurrence of hospitalization among kidney transplant recipients, where the effect of expanded criteria donor (ECD) compared to non-ECD kidney transplantation on the mean number of hospitalizations is of interest. [source]


Semiparametric Models for Cumulative Incidence Functions

BIOMETRICS, Issue 1 2004
John Bryant
Summary. In analyses of time-to-failure data with competing risks, cumulative incidence functions may be used to estimate the time-dependent cumulative probability of failure due to specific causes. These functions are commonly estimated using nonparametric methods, but in cases where events due to the cause of primary interest are infrequent relative to other modes of failure, nonparametric methods may result in rather imprecise estimates for the corresponding subdistribution. In such cases, it may be possible to model the cause-specific hazard of primary interest parametrically, while accounting for the other modes of failure using nonparametric estimators. The cumulative incidence estimators so obtained are simple to compute and are considerably more efficient than the usual nonparametric estimator, particularly with regard to interpolation of cumulative incidence at early or intermediate time points within the range of data used to fit the function. More surprisingly, they are often nearly as efficient as fully parametric estimators. We illustrate the utility of this approach in the analysis of patients treated for early stage breast cancer. [source]
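The nonparametric baseline that the abstract compares against can be sketched directly: a cumulative incidence function built from the overall Kaplan-Meier survival and the cause-specific event hazard (the Aalen-Johansen estimator, written here for distinct event times). This is a minimal illustration of the nonparametric estimator only, not the paper's semiparametric proposal; the function name is ours.

```python
import numpy as np

def cumulative_incidence(times, causes, cause=1):
    """Nonparametric cumulative incidence for the given cause under
    competing risks. `causes` uses 0 for censoring and positive integers
    for failure causes; assumes distinct event times. Returns the sorted
    times and the CIF evaluated at each."""
    times = np.asarray(times, float)
    causes = np.asarray(causes)
    order = np.argsort(times)
    times, causes = times[order], causes[order]
    at_risk = len(times)
    surv = 1.0     # overall Kaplan-Meier survival just before current time
    cif = 0.0
    out_c = []
    for t, c in zip(times, causes):
        if c == cause:
            cif += surv * (1.0 / at_risk)   # S(t-) * cause-specific hazard
        if c != 0:
            surv *= 1.0 - 1.0 / at_risk     # update overall survival
        at_risk -= 1
        out_c.append(cif)
    return times, np.array(out_c)
```

When events of the cause of primary interest are rare, each CIF increment rests on few events, which is precisely the imprecision that motivates modeling that cause-specific hazard parametrically as the paper proposes.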


On the Meaning of Words and Dinosaur Bones: Lexical Knowledge Without a Lexicon

COGNITIVE SCIENCE - A MULTIDISCIPLINARY JOURNAL, Issue 4 2009
Jeffrey L. Elman
Abstract Although for many years a sharp distinction has been made in language research between rules and words (with primary interest in rules), this distinction is now blurred in many theories. If anything, the focus of attention has shifted in recent years in favor of words. Results from many different areas of language research suggest that the lexicon is representationally rich, that it is the source of much productive behavior, and that lexically specific information plays a critical and early role in the interpretation of grammatical structure. But how much information can or should be placed in the lexicon? This is the question I address here. I review a set of studies whose results indicate that event knowledge plays a significant role in early stages of sentence processing and structural analysis. This poses a conundrum for traditional views of the lexicon: either the lexicon must be expanded to include factors that do not plausibly seem to belong there, or else virtually all information about word meaning is removed, leaving the lexicon impoverished. I suggest a third alternative, which provides a way to account for lexical knowledge without a mental lexicon. [source]