I am a fan of altmetrics, at least in concept. But I am starting to get very concerned with both the tools used to measure them and what the “numbers” are expected to indicate. We would expect that a high “number” in an Altmetric.com “donut” would be indicative, in some way, of the relative importance or “impact” of the article. One would hope it at least points to how well read the article is, whether readers like the science, and the potential for the article to, for example, move understanding forward or feed its data into further usage. I am not sure this is true…at least for some of the articles I am involved with.
Let’s take, for example, the recent Zika virus article that Sean Ekins led. The F1000 site gives us some stats on Views and Downloads, and the Metrics tab shows the Altmetric stats. I would assume that at least some of the 48 DOWNLOADers read the article, and some of the VIEWers are likely to have read it and maybe printed it. As for the Altmetric stats, the 33 tweets are likely people pointing to the article; based on the way I use Twitter myself, I am going to suggest that tweets are a poor indicator of the number of readers of the article. There is a definition on the Altmetric site regarding how Twitter stats are compiled.
If we use the Altmetric Bookmarklet we can navigate to the page and see a score.
The score of “41” is essentially the sum of the bloggers, tweets, Facebook posts, etc. summarized below (1 + 1 + 1 + 33 + 1 + 3 + 1, with one of those presumably for being on Altmetric.com???).
When I asked F1000Research via Twitter why they don’t show the “number”, I appreciated their answer, and I AGREE with their sentiment.
Yesterday I received an email about our Journal of Cheminformatics article “Ambiguity of non-systematic chemical identifiers within and between small-molecule databases”, part of which is shown below.
On the actual Journal of Cheminformatics page it says there have been 1444 accesses (not 2216 as cited in the email).
The Altmetric score, meanwhile, is 8. So somewhere between 1,400 and 2,200 accesses (and it is safe to assume some proportion actually read it!), yet a low Altmetric score of 8. Compare that with the Altmetric score of >40 for the Zika virus paper, which had far fewer accesses, and whose altmetrics largely don’t indicate reads of the article, since they are tweets, many of them from the authors out to the world.
Using PlumX I am extremely disappointed by what it reflects about the JChemInf article! Only 10 HTML Views versus the 1,400–2,200 accesses reported above, and only 7 readers and 1 save! UGH. But 13 tweets are noted, so I would expect an Altmetric.com score of at least 13 or 14, instead of the 8 marked on the article?
I also tried to sign into ImpactStory to check stats but got an “Uh oh, looks like we’ve got a system error…feel free to let us know, and we’ll fix it.” message, so I will report back on that.
Altmetrics should be maturing now to the point where metrics for reads, accesses, and downloads feed into some overall measure. I think that reads/accesses/downloads should carry more weight than a tweet in terms of the impact of an article. At the very least, someone who read an article, whether they agree with it or not, is MORE aware of its content than someone who simply shared a link to an article that then didn’t get read. The platforms themselves are so out of sync in their various numbers that we have to wonder how things got so badly broken. I would imagine that stats gathered in some way through CrossRef or ORCID will ultimately help this to mature, but until then, treat them all with a level of suspicion. I believe that altmetrics will be an important part of helping to define impact for an article. But there is still a long way to go, I’m afraid….
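The weighting idea suggested above could be sketched in a few lines. The weights below are entirely hypothetical illustrations of the argument (a download or access counting for more than a tweet); they are NOT the actual Altmetric.com algorithm, and the event categories are just the ones discussed in this post.

```python
# Hypothetical weighted "impact" score in which reads/accesses/downloads
# carry more weight than shares. The weights are illustrative only --
# this is NOT how Altmetric.com (or any real provider) computes scores.

WEIGHTS = {
    "accesses": 1.0,    # someone at least opened the article
    "downloads": 3.0,   # someone obtained the full text
    "tweets": 0.25,     # a share, which may never be read
    "blog_posts": 2.0,  # usually implies the blogger read the article
}

def weighted_score(counts):
    """Combine per-category event counts into one score using WEIGHTS."""
    return sum(WEIGHTS.get(category, 0.0) * n for category, n in counts.items())

# Stats from the two articles discussed above (only the numbers cited
# in this post: 48 downloads, 33 tweets, 1 blog post for the Zika paper;
# 1444 accesses, 13 tweets for the JChemInf paper).
zika = {"downloads": 48, "tweets": 33, "blog_posts": 1}
jcheminf = {"accesses": 1444, "tweets": 13}

print(weighted_score(zika))      # 144 + 8.25 + 2 = 154.25
print(weighted_score(jcheminf))  # 1444 + 3.25 = 1447.25
```

Under this (made-up) weighting, the access-heavy JChemInf article comes out well ahead of the tweet-heavy Zika article, the reverse of their raw Altmetric scores of 8 and 41.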
This article brings up important weaknesses in altmetrics. Any system which tries to reduce something as complex as the "impact" of an article, publication, or scholar to just a number or two is going to lose information. It is like taking the dot product of two vectors and getting a scalar. Vectors carry information about both magnitude and direction. Scalars are just scalars.
For example: someone writes and publishes an article that many agree is garbage. Ten articles are written in response, each citing it. Is this article that 10 people refuted better science than an article cited only once?
Another example: which is better, one single-authored paper or ten 100-author papers? Being the sole author of at least some papers says something about a researcher: that they have the know-how to take an idea from concept to publication and can handle all aspects of the craft we call science. On the other hand, being part of collaborations proves one is a "team player." Which is better? (I'd argue one wants to have some proof of both.)
In any of these examples, a single-score metric will lose much of the texture of the situation forever.
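The reviewer's dot-product analogy can be made concrete with a minimal sketch: two articles with very different metric "vectors" can collapse to exactly the same scalar. The categories and numbers below are hypothetical and chosen only to echo the refuted-article example above.

```python
# Illustrating the reviewer's point: collapsing a metrics vector to a
# single scalar (a dot product) discards information. The weights and
# numbers here are purely hypothetical.

def scalar_score(vector, weights=(1, 1, 1)):
    """Dot product of a (citations, tweets, downloads) vector with weights."""
    return sum(v * w for v, w in zip(vector, weights))

# Article A: 10 citations (suppose all ten are refutations!), nothing else
article_a = (10, 0, 0)
# Article B: 1 citation, 3 tweets, 6 downloads
article_b = (1, 3, 6)

print(scalar_score(article_a))  # 10
print(scalar_score(article_b))  # 10 -- same scalar, very different stories
```

Both articles score 10, yet the underlying vectors describe entirely different situations, which is precisely the information the scalar throws away.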
In short I agree completely with this.
This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.