The main bottleneck to the adoption of the science 2.0 paradigm is the lack of recognition. Scientists' careers are determined by papers, articles, and citations, and there is no recognition for releasing data, code, or laboratory notebooks. Moreover, effective reputation management tools could play a key role in finding the right micro-expertise to involve in large-scale collaborative efforts.
As GrrlScientist puts it:
If there is no way to ensure that scientists get credit for their ideas and intellectual contributions, then they will not contribute to the Open Science movement. Traditionally, the way that credit has been assigned to scientists has been through publication of their data in peer-reviewed journals and by citing their colleagues’ work in their papers.
Michael Nielsen recognizes this as well in his book, Reinventing Discovery.
In our study, we are looking for actual implementations of reputation management for scientists. So far we have come across:
– Peer Evaluation, a service that helps scientists share their data and papers, and thereby measure their reputation
– Altmetrics, an approach that maps the reputation of scientists by monitoring how their papers are used on services such as CiteULike, Mendeley, and Zotero
This is closely related to, and overlaps with, alternative approaches to peer review, such as F1000.
However, these services remain highly experimental, and there is little data on how they are used. Do you have any evidence of the uptake and impact of alternative ways to evaluate scientists?