Rationalizing the extremes: introducing the citation distribution index

The distribution of citations across the scientific literature is in many respects similar to the distribution of wealth in the Western world: a handful of articles receive most of the citations, while most articles receive few or no citations at all. The distribution of citations is thus highly skewed and poorly represented by its average (the so-called “Bill Gates effect,” in the wealth distribution analogy). In fact, when the average is computed…
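
To see why the average fails as a summary statistic here, consider a minimal sketch with a synthetic sample. The log-normal distribution and the parameters below are assumptions chosen for illustration, not values from the post:

```python
import numpy as np

# Illustrative only: a synthetic, highly skewed "citation count" sample.
# A log-normal is a common stand-in for skewed bibliometric distributions;
# the parameters are arbitrary, not taken from the post.
rng = np.random.default_rng(42)
citations = np.floor(rng.lognormal(mean=0.5, sigma=1.5, size=10_000)).astype(int)

print("mean citations:  ", citations.mean())
print("median citations:", np.median(citations))
print("share of articles at or below the mean:",
      (citations <= citations.mean()).mean())
```

In a sample like this, well over half of the articles sit below the mean: a few highly cited outliers pull the average far above the typical article, which is precisely the skew a citation distribution index is meant to capture.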



1findr: discovery for the world of research

As of last week, 1science is offering public access to its 1findr service. 1findr is a discovery and analytics platform for scholarly research, indexing an exceptionally broad range of peer-reviewed journals. But just how broad is its coverage, and how does 1findr compare to alternative systems? In this post, we’ll measure 1findr against the (also quite new) Dimensions platform from Digital Science. These two platforms represent new approaches to bibliographic data: 1findr is fed using…



How you count counts

When the NSF’s Science & Engineering Indicators 2018 report was released, considerable attention was paid to the number of papers published by China and the United States. “China declared world’s largest producer of scientific articles,” blared the headline in Nature. Scrutiny descended immediately on the indicators we had calculated, with some in the bibliometrics community warning of bias in the approach, distorting effects, and so on. In today’s post, we’ll explore a…
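
The excerpt does not spell out which methods were contested, but the classic fork in bibliometric counting, and a plausible reading of the title, is whole counting versus fractional counting of multi-country papers. Here is a minimal sketch of the difference, using invented papers rather than real data:

```python
from collections import defaultdict

# Hypothetical example of how the counting method changes country totals.
# Whole counting credits each contributing country with 1 per paper;
# fractional counting splits each paper's single unit of credit across
# its contributing countries. The papers below are invented.
papers = [
    {"US"},
    {"US", "CN"},
    {"CN"},
    {"CN", "US", "DE"},
]

whole = defaultdict(float)
fractional = defaultdict(float)
for countries in papers:
    for c in countries:
        whole[c] += 1
        fractional[c] += 1 / len(countries)

print("whole counts:     ", dict(whole))
print("fractional counts:", dict(fractional))
```

Note that whole counting sums to more than the number of papers (collaborations are credited multiple times), while fractional counting always sums to exactly the paper count, which is why the two methods can rank heavily collaborating countries differently.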



Positional analysis: from boring tables to sweet visuals

At Science-Metrix we are, of course, very focused on data (sweet, sweet data!). We are also very aware that pages and pages of bibliometric data and analysis can be overwhelming, and that more intuitive data presentation helps our clients better understand their study results, which in turn helps them act on the findings we return to them. One graphic presentation we find particularly helpful is the positional analysis chart. Positional analysis is a…
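
The excerpt stops before defining the chart, but a positional analysis typically places each entity on two indicator axes at once, so its “position” can be read at a glance. A hypothetical sketch follows; the choice of indicators, the reference lines at 1.0, and all of the data below are invented for illustration:

```python
import matplotlib.pyplot as plt

# Hypothetical positional-analysis chart: each bubble is an entity
# (e.g., a country or a field), positioned by two indicators, with
# bubble area scaled to output. All values are invented.
entities = ["A", "B", "C", "D"]
specialization = [0.8, 1.2, 1.5, 0.6]  # e.g., specialization index (world level = 1.0)
impact = [1.3, 0.9, 1.4, 0.7]          # e.g., relative citation impact (world level = 1.0)
output = [200, 800, 350, 500]          # e.g., publication counts

fig, ax = plt.subplots()
ax.scatter(specialization, impact, s=[o / 2 for o in output], alpha=0.5)
for name, x, y in zip(entities, specialization, impact):
    ax.annotate(name, (x, y))

# Dashed reference lines split the chart into quadrants, which is what
# makes a bubble's position interpretable at a glance.
ax.axhline(1.0, linestyle="--", linewidth=1)
ax.axvline(1.0, linestyle="--", linewidth=1)
ax.set_xlabel("Specialization index")
ax.set_ylabel("Citation impact")
ax.set_title("Positional analysis (illustrative data)")
plt.show()
```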



Impact assessment stories: decisions, decisions

It appears that the research & innovation policy community is not the only one struggling to demonstrate societal benefit. In recent months, the federal Liberal government unveiled two online initiatives to increase transparency by sharing information about government activities and outcomes. The difficulties these two platforms have encountered amply demonstrate how hard impact assessment is. They are the same challenges facing the science policy community, and this post explains how the shortcomings…



Mapping science: a guide to our Twitter series

Over the course of 2018, we’ll be publishing a series of maps via the Science-Metrix Twitter feed to visually illustrate some dynamics of the global science ecosystem. This blog post is the anchor for that series, explaining why we think these maps are important and what exactly they represent.

Monoliths we aren’t

When the data in the Science & Engineering Indicators 2018 showed China passing the United States in publication output for 2016, there was…



Budget 2018: The Evidence Budget

In our post last week on the 2018–19 Canadian federal budget, we looked at how the new spending on fundamental research addresses the calls for support from the Naylor report. But there were many more science stories in the budget as well. Beyond the dollar figures, there are important—if tacit—signals in the budget document about another key item from the science file in Canada: using evidence to build policy. Today’s post attempts to decipher those…



Contribution analysis: how did the program make a difference – or did it?

Contribution analysis is an evaluation method developed by John Mayne in the early 2000s to enable evaluators to produce rigorous impact analyses for programs that cannot be evaluated using an experimental or quasi-experimental design. While the Science-Metrix Evaluation team has been conducting contribution analyses informally for a while now, the workshop delivered by Thomas Delahais (of Quadrant Conseil) at the SQEP annual conference in October 2017 inspired us to use this…



Budget 2018: the fundamental question of research funding

Science has been quite prominent on the Canadian political radar in recent years, and even became a regular talking point during the last federal election in 2015. During that campaign, the current Liberal government made four headline promises, and with the release yesterday of the 2018–19 federal budget, one of the key puzzle pieces fell into place: increased funding for fundamental research. In today’s post, we’ll assess the budget’s meaning for science in Canada. Follow…



Maximizing the use of evaluation findings

In a 2006 survey of 1,140 American Evaluation Association members, 68% reported that their evaluation results were not used. This suggests a serious need for evaluation results to make it off the bookshelf and into the hands of intended audiences. This week’s blog post looks at how we, as evaluators, can help maximize the use of the evaluations we produce. Noted evaluation scholar Michael Quinn Patton suggests that evaluators not only design evaluations with careful consideration…
