Bibliometric Indicators

Distribution by Scientific Domains


Selected Abstracts


Bibliometric Methods: Pitfalls and Possibilities

BASIC AND CLINICAL PHARMACOLOGY & TOXICOLOGY, Issue 5 2005
Johan A. Wallin
Bibliometric indicators are strongly methodology-dependent, and for all of them some form of data normalization is indispensable. Bibliometric studies have many pitfalls: technical skill, critical sense, and precise knowledge of the examined scientific domain are required to carry out and interpret bibliometric investigations correctly. [source]
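The kind of normalization the abstract calls indispensable can be illustrated with a minimal sketch: field normalization divides each paper's citation count by the mean citation rate of its field, so that papers from low-citation and high-citation domains become comparable. All names and numbers below are hypothetical.

```python
def normalized_citation_scores(citations, fields, field_means):
    """Divide each paper's citation count by the mean citation rate
    of its field, making scores comparable across domains."""
    return [c / field_means[f] for c, f in zip(citations, fields)]

# Hypothetical papers: the biology paper has 10x the raw citations,
# but both perform exactly at their field's baseline.
citations = [30, 3]
fields = ["biology", "mathematics"]
field_means = {"biology": 30.0, "mathematics": 3.0}
print(normalized_citation_scores(citations, fields, field_means))  # [1.0, 1.0]
```

Without this step, comparing the two raw counts would suggest a tenfold performance gap that is entirely an artifact of field citation cultures.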


Quantitative analysis of the scientific literature on acetaminophen in medicine and biology: a 2003–2005 study

FUNDAMENTAL & CLINICAL PHARMACOLOGY, Issue 2 2009
Claude Robert
Abstract This study quantifies the utilization of acetaminophen in life sciences and clinical medicine using bibliometric indicators. A total of 1626 documents involving acetaminophen, published by 74 countries during 2003–2005 and indexed in the Thomson Scientific Life Sciences and Clinical Medicine collections, were identified and analyzed. The USA leads in the number of publications, followed by the UK and other industrialized countries, including France, Japan, and Germany; the presence of countries such as China, India, and Turkey among the top 15 is noteworthy. The European Union stands as a contributor comparable to the USA, both in the number of publications and in the profile of papers distributed among subcategories of Life Sciences and Clinical Medicine disciplines. All documents were published in 539 different journals, the most prolific of which were related to pharmacology and/or pharmaceutics. All aspects of acetaminophen (chemistry, pharmacokinetics, metabolism, etc.) were studied, with therapeutic use (42%) and adverse effects (28%) drawing the most interest; a large part of the latter publications focused on acetaminophen hepatotoxicity. This quantitative overview provides insight into the interest of the scientific community in this analgesic and complements the various review documents that regularly appear in the scientific literature. [source]


A cluster analysis of scholar and journal bibliometric indicators

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 10 2009
Massimo Franceschet
We investigate different approaches based on correlation analysis to reduce the complexity of a space of quantitative indicators for the assessment of research performance. The proposed methods group bibliometric indicators into clusters of highly intercorrelated indicators. Each cluster is then associated with a representative indicator. The set of all representatives corresponds to a base of orthogonal metrics capturing independent aspects of research performance and can be exploited to design a composite performance indicator. We apply the devised methodology to isolate orthogonal performance metrics for scholars and journals in the field of computer science and to design a global performance indicator. The methodology is general and can be exploited to design composite indicators that are based on a set of possibly overlapping criteria. [source]
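A minimal sketch of the clustering idea described above, using a greedy single-link grouping over pairwise Pearson correlations and picking as representative the indicator most correlated with the rest of its cluster; the paper's exact procedure may differ, and all indicator values here are hypothetical.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cluster_indicators(data, threshold=0.8):
    """Greedy single-link grouping: an indicator joins a cluster if it
    correlates (in absolute value) above `threshold` with any member."""
    clusters = []
    for name in data:
        for cl in clusters:
            if any(abs(pearson(data[name], data[m])) >= threshold for m in cl):
                cl.append(name)
                break
        else:
            clusters.append([name])
    return clusters

def representative(cluster, data):
    """Member most intercorrelated with the rest of its cluster."""
    return max(cluster, key=lambda n: sum(
        abs(pearson(data[n], data[m])) for m in cluster if m != n))

# Hypothetical indicator values for four scholars: the first three
# indicators move together; "awards" carries independent information.
data = {
    "papers":    [10, 20, 30, 40],
    "citations": [100, 210, 290, 400],
    "h_index":   [5, 9, 16, 20],
    "awards":    [3, 1, 4, 1],
}
print(cluster_indicators(data))  # [['papers', 'citations', 'h_index'], ['awards']]
```

The representatives of all clusters then form the "base of orthogonal metrics" the abstract refers to, each capturing a largely independent aspect of performance.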


Comparing bibliometric statistics obtained from the Web of Science and Scopus

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 7 2009
Éric Archambault
For more than 40 years, the Institute for Scientific Information (ISI, now part of Thomson Reuters) produced the only available bibliographic databases from which bibliometricians could compile large-scale bibliometric indicators. ISI's citation indexes, now regrouped under the Web of Science (WoS), were the major sources of bibliometric data until 2004, when Scopus was launched by the publisher Reed Elsevier. For those who perform bibliometric analyses and comparisons of countries or institutions, the existence of these two major databases raises the important question of the comparability and stability of statistics obtained from different data sources. This paper uses macrolevel bibliometric indicators to compare results obtained from the WoS and Scopus. It shows that the correlations between the measures obtained with both databases for the number of papers and the number of citations received by countries, as well as for their ranks, are extremely high (R2 > .99). There is also a very high correlation when countries' papers are broken down by field. The paper thus provides evidence that indicators of scientific production and citations at the country level are stable and largely independent of the database. [source]
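The kind of macrolevel comparison reported here can be sketched as computing the squared Pearson correlation (R2) between per-country counts from the two databases; the counts below are hypothetical and only illustrate how coverage can differ in absolute terms while the cross-country pattern stays stable.

```python
def r_squared(x, y):
    """Coefficient of determination, computed as the squared
    Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov ** 2 / (var_x * var_y)

# Hypothetical paper counts for five countries in two databases:
# Scopus indexes more documents overall, but the ranking and the
# relative sizes are nearly identical, so R2 is close to 1.
wos = [320_000, 95_000, 80_000, 70_000, 60_000]
scopus = [350_000, 110_000, 90_000, 78_000, 66_000]
print(round(r_squared(wos, scopus), 4))
```

A high R2 at this level is precisely what makes country-level indicators robust to the choice of database, even though the raw counts themselves differ.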


Are you an invited speaker? A bibliometric analysis of elite groups for scholarly events in bioinformatics

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 6 2009
Participating in scholarly events (e.g., conferences, workshops) as an elite-group member, such as an organizing committee chair or member, program committee chair or member, session chair, invited speaker, or award winner, is beneficial to a researcher's career development. The objective of this study is to investigate whether elite-group membership for scholarly events is representative of scholars' prominence, and which elite group is the most prestigious. We collected data about 15 global (excluding regional) bioinformatics scholarly events held in 2007. We sampled (via stratified random sampling) participants from elite groups in each event. Then, bibliometric indicators (total citations and h index) of seven elite groups and a non-elite group, consisting of authors who submitted at least one paper to an event but were not included in any elite group, were observed using the Scopus Citation Tracker. The Kruskal–Wallis test was performed to examine the differences among the eight groups, with multiple comparison tests (Dwass–Steel–Critchlow–Fligner) conducted as follow-up procedures. The experimental results reveal that scholars in an elite group perform better on bibliometric indicators than do others. Among the elite groups, the invited speaker group performs statistically significantly best, while the other elite-group types are not significantly distinguishable from one another. From this analysis, we confirm that elite-group membership in scholarly events, at least in the field of bioinformatics, can be utilized as an alternative marker of a scholar's prominence, with invited speaker being the most important prominence indicator among the elite groups. [source]
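One of the two bibliometric indicators used in this study, the h index, is simple to compute: an author has index h if h of their papers have at least h citations each. A minimal sketch with hypothetical citation counts:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    # Sort citation counts descending; at rank r, a count >= r means
    # at least r papers have r or more citations.
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for two scholars.
print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have >= 4 citations each
print(h_index([25, 2, 1, 0]))     # 2: one highly cited paper cannot raise h alone
```

The second example shows why the h index complements total citations: a single heavily cited paper inflates the total but barely moves h.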


The place of serials in referencing practices: Comparing natural sciences and engineering with social sciences and humanities

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 8 2006
Vincent Larivière
Journal articles constitute the core documents for the diffusion of knowledge in the natural sciences. It has been argued that the same is not true for the social sciences and humanities, where knowledge is more often disseminated in monographs that are not indexed in the journal-based databases used for bibliometric analysis. Previous studies have made only partial assessments of the role played by both serials and other types of literature, so the importance of journal literature in the various scientific fields has not been systematically characterized. The authors address this issue by providing a systematic measurement of the role played by journal literature in the building of knowledge in both the natural sciences and engineering and the social sciences and humanities. Using citation data from the CD-ROM versions of the Science Citation Index (SCI), Social Science Citation Index (SSCI), and Arts and Humanities Citation Index (AHCI) databases from 1981 to 2000 (Thomson ISI, Philadelphia, PA), the authors quantify the share of citations to both serials and other types of literature. Variations over time and between fields are also analyzed. The results show that journal literature is increasingly important in the natural and social sciences, but that its role in the humanities is stagnant and even tended to diminish slightly in the 1990s. Journal literature accounts for less than 50% of the citations in several disciplines of the social sciences and humanities; hence, special care should be taken when using bibliometric indicators that rely only on journal literature. [source]
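The core measurement here, the share of citations going to serials, reduces to a simple proportion over cited references classified by document type; a minimal sketch with hypothetical reference lists:

```python
def serial_share(cited_refs):
    """Fraction of cited references that point to journal articles."""
    serial = sum(1 for kind in cited_refs if kind == "journal")
    return serial / len(cited_refs)

# Hypothetical reference lists: one from a physics paper, one from a
# history paper, each with 50 classified cited references.
physics = ["journal"] * 45 + ["book"] * 3 + ["proceedings"] * 2
history = ["journal"] * 18 + ["book"] * 30 + ["other"] * 2
print(serial_share(physics))  # 0.9
print(serial_share(history))  # 0.36
```

A journal-only indicator would see 90% of the physics paper's intellectual base but only 36% of the history paper's, which is exactly the caution the abstract raises for the humanities.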


Analysis of publications and citations from a geophysics research institute

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 9 2001
Cliff Frohlich
We analyze all 1128 publications produced by scientists during their employment at the University of Texas Institute for Geophysics, a geophysical research laboratory founded in 1972 that currently employs 23 Ph.D.-level scientists. We assess research performance using bibliometric indicators such as publications per year, citations per paper, and cited half-life. To characterize the research style of individual scientists and to obtain insight into the origin of certain publication-counting discrepancies, we classified the 1128 publications into four categories that differed significantly with respect to statistics such as lifetime citation rate, fraction of papers never cited after 10 years, and cited half-life. The categories were: mainstream (prestige journal) publications (32.6 lifetime cit/pap, 2.4% never cited, 6.9-year half-life); archival (other refereed) publications (12.0 lifetime cit/pap, 21.5% never cited, 9.5-year half-life); articles published as proceedings of conferences (5.4 lifetime cit/pap, 26.6% never cited, 5.4-year half-life); and "other" publications such as news articles and book reviews (4.2 lifetime cit/pap, 57.1% never cited, 1.9-year half-life). Because determining cited half-lives closely resembles a well-studied problem in earthquake seismology that was familiar to us, we thoroughly evaluate five different methods for determining the cited half-life and discuss their robustness and limitations. Unfortunately, even when data are numerous, the various methods often obtain very different values for the half-life. Our preferred method determines the half-life from the ratio of citations appearing in back-to-back 5-year periods. We also evaluate the reliability of the citation count data used for these kinds of analysis and conclude that such data are often imprecise. All observations suggest that reported differences in cited half-lives must be quite large to be significant. [source]
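The preferred method, taking the ratio of citations in back-to-back 5-year periods, admits a closed-form half-life estimate if one assumes the citation rate decays exponentially (an assumption of this sketch, not necessarily of the paper): half-life = 5 ln 2 / ln(N1/N2), where N1 and N2 are the citation counts in the two consecutive windows. The counts below are hypothetical.

```python
import math

def cited_half_life(citations_first_5yr, citations_next_5yr):
    """Half-life from the ratio of citations in back-to-back 5-year
    windows, assuming the citation rate decays exponentially."""
    ratio = citations_first_5yr / citations_next_5yr
    if ratio <= 1:
        raise ValueError("citations are not decaying; half-life undefined")
    return 5 * math.log(2) / math.log(ratio)

# Hypothetical counts: 120 citations in years 1-5 and 60 in years 6-10
# means the rate halves exactly once over 5 years.
print(cited_half_life(120, 60))  # 5.0
```

The guard clause reflects one limitation the abstract alludes to: when citations grow or hold steady between the two windows, this estimator simply fails, which is one reason different methods can disagree sharply.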