Science 2.0 study

Updates on progress and discussion of results from the study “Science 2.0: implications for European policies on research and innovation”


Science 2.0 is not just a passing fad: crowdsourcing the evidence

We’re approaching the final stage of our study. So far, we have opened up our bibliography on our Mendeley group here; our notes, through this very blog; our model for open science; and our draft policy recommendations for the EU. And we’ve benefited from your comments and insight.

Now, we need your help to improve the evidence about the importance of Science 2.0, if we want policy-makers to take it seriously.

We are therefore sharing here the final presentation that we delivered to the European Commission, DG RTD.

Help us improve it by gathering more data and evidence showing that Science 2.0 is important and disruptive, and that it is already happening. In particular, we ask you to share evidence and data on the take-up of Science 2.0: how many scientists are adopting it? With what benefits?

We ask everyone interested in Science 2.0 to share the evidence at hand by adding comments to the presentation. If you prefer, just leave a comment on this post.


The pervasive metaphor of the genome

“Genome” used to be a technical word used by scientists. It has become a metaphor, a paradigm of our time.

The Human Genome Project was an unprecedented effort to carefully and patiently map the information of the Human Genome.

Rather than modeling and developing theories about the genome, the project adopted an inductive approach, collecting all the available information and processing it. It’s THE classic big data project.

What is interesting is that Genome has now become a metaphor, extending the approach to research efforts in very different fields.

For instance, the Music Genome Project described all the possible features of published music, and led to the creation of the Pandora music service. As Wikipedia describes it:

A given song is represented by a vector (a list of attributes) containing approximately 400 “genes” (analogous to trait-determining genes for organisms in the field of genetics). Each gene corresponds to a characteristic of the music, for example, gender of lead vocalist, level of distortion on the electric guitar, type of background vocals, etc. Rock and pop songs have 150 genes, rap songs have 350, and jazz songs have approximately 400.
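
To make the representation concrete, here is a minimal sketch in Python of a song as a gene vector and of how two songs can be compared. This is our own illustration, not Pandora’s actual code: the gene names, the values and the use of cosine similarity are all assumptions.

    import numpy as np

    # Each song is a vector of "genes": scores for musical traits.
    # Trait names and values here are invented for illustration.
    genes = ["vocal_gender", "guitar_distortion", "background_vocals", "tempo"]
    song_a = np.array([1.0, 4.5, 2.0, 3.0])
    song_b = np.array([1.0, 4.0, 2.5, 3.5])

    def similarity(u, v):
        """Cosine similarity between two gene vectors."""
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    # A Pandora-style recommender would suggest the catalogue songs
    # whose gene vectors are closest to one the listener likes.
    print(similarity(song_a, song_b))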

I recently came across the Startup Genome Project, which analyzes a large number of variables to understand “the secret of success” of startups.

What these projects share is an inductive method: little theory, but a focus on gathering lots of data and seeing what patterns emerge.

This is the new scientific method that Chris Anderson referred to in Wired. It’s extending from genetics to many other fields.
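
As a toy illustration of this data-first method, the sketch below feeds raw feature vectors to an unsupervised algorithm and lets the groupings emerge, with no prior theory about what the groups should be. It uses scikit-learn on random data; all the numbers are placeholders.

    import numpy as np
    from sklearn.cluster import KMeans

    # No model of the domain: just many observations, each a feature vector.
    rng = np.random.default_rng(0)
    observations = rng.normal(size=(1000, 400))  # e.g. 1000 songs x 400 genes

    # Let structure emerge from the data itself.
    labels = KMeans(n_clusters=5, n_init=10).fit_predict(observations)
    print(np.bincount(labels))  # sizes of the emergent groups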

The emerging institutional setting of Science 2.0

There are lots of interesting stories about Science 2.0. But in the context of this study we are uncovering a far richer and more substantial infrastructure, which we consider the emergent, self-organising institutional setting of Science 2.0.

First, there is a self-regulation effort for open access. While many funding agencies are paying more attention to open access, the great surge in open access behaviour by research institutions is mainly due to self-regulation.

Secondly, there is an emerging meso-level infrastructure for coordinating this bottom-up effort. The market for “crowdsourcing” and “open innovation” solutions is exploding: companies such as Innocentive, ChallengePost and many others offer solutions for reaching out to a mass of potential innovators. Not only do they offer the technological platform; more importantly, they offer the process design and the database of people. Recently, open source efforts such as PyBossa have become available (see the sketch below). Other nonprofit projects include SciFund Challenge, which helps citizens find interesting challenges to participate in.
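
To give a feel for how low the technical barrier has become, here is a hedged sketch of posting a crowdsourcing task through PyBossa’s Python client, pbclient. The endpoint, API key, project details and image URL are placeholders, and the exact call names may differ between versions.

    import pbclient

    # Point the client at a PyBossa server (URL and key are placeholders).
    pbclient.set('endpoint', 'http://crowdcrafting.org')
    pbclient.set('api_key', 'YOUR-API-KEY')

    # Create a project and add a task for the crowd to work on.
    app = pbclient.create_app('Galaxy sorting', 'galaxysort',
                              'Classify galaxy images by shape')
    pbclient.create_task(app.id, info={'image_url': 'http://example.org/galaxy1.jpg'})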

Interoperability standards are also becoming available, for example in the field of annotation, to facilitate data sharing and collaboration beyond the interoperability of bibliographies (which can now be considered a fait accompli).
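
For instance, a shared annotation expressed along the lines of the Open Annotation model might look like the sketch below. It is hand-written for illustration: the property names are approximations of the model’s vocabulary and the URLs are placeholders, not the output of any particular tool.

    import json

    # A minimal annotation in the spirit of the Open Annotation model:
    # a body (the comment) linked to a target (the annotated resource).
    annotation = {
        "@type": "oa:Annotation",
        "hasBody": {
            "@type": "cnt:ContentAsText",
            "chars": "This figure seems inconsistent with the 2006 dataset.",
        },
        "hasTarget": "http://example.org/paper.pdf#page=4",
        "annotatedBy": "http://example.org/people/researcher-42",
    }
    print(json.dumps(annotation, indent=2))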

Culture is also changing, with increasing rewards for scientists who share. Alternative metrics are being developed to measure reputation, such as Altmetrics and Peer Evaluation, as described in a previous post.

So probably what we need is not just old-style top-down policies and regulation on Science 2.0, but also a softer mix of tools, methodologies and people.

Open Access is not a luxury, it is a must-have for EU

Have a look at, and comment on, Commissioner Neelie Kroes’s speech opening the PEER2012 conference.

The EC is working on extending data sharing requirements for EU-funded projects, and on Recommendations for Member States on improving the access to, management and preservation of scientific results.

New ways to evaluate scientists

The main bottleneck to the adoption of the Science 2.0 paradigm is the lack of recognition. Scientists’ careers are determined by papers, articles and citations; there is no recognition for releasing data, code or laboratory notebooks. Moreover, effective reputation management tools can play a key role in finding the right micro-expertise to involve in large-scale collaborative efforts.

As GrrlScientist puts it:

If there is no way to ensure that scientists get credit for their ideas and intellectual contributions, then they will not contribute to the Open Science movement. Traditionally, the way that credit has been assigned to scientists has been through publication of their data in peer-reviewed journals and by citing their colleagues’ work in their papers.

Michael Nielsen recognizes this as well in his book.

In our paper, we point out the possibility of creating new ways of managing reputation, such as the open source example of IBM.

In our study, we are looking for actual implementations of reputation management for scientists. So far we have come across:

PeerEvaluation, a service which helps scientists share their data and papers and thereby measure their reputation

Altmetrics, a service which maps the reputation of scientists by monitoring how people use their papers on CiteULike, Mendeley and Zotero (a toy sketch of this kind of aggregation follows below)
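
The sketch below shows, in deliberately toy form, the kind of aggregation such services perform. The sources, counts, weights and formula are all invented; real services gather counts through their own APIs and use their own scoring methods.

    # Toy altmetric score: a weighted sum of usage counts gathered
    # from different platforms. All numbers are invented.
    counts = {"mendeley_readers": 120, "citeulike_posts": 15, "zotero_saves": 40}
    weights = {"mendeley_readers": 1.0, "citeulike_posts": 2.0, "zotero_saves": 1.5}

    score = sum(weights[source] * n for source, n in counts.items())
    print(f"toy altmetric score: {score}")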

This is closely related to, and overlaps with, alternative ways of doing peer review, such as F1000.

However, these services remain highly experimental and there is little data about how they are used. Do you have any evidence of uptake and impact of alternative ways to evaluate scientists?

Open text, data and code

Just came across Ten Brighter Ideas (HT Jon Udell).

The tool behind it allows you to see the assumptions and calculations behind the recommendations for environment-friendly behavior.

It’s a case of:

– open text, where you are able to explore the rationale behind each statement

– open data, where you can see the data behind the calculation

– open code, as you can directly act on the code and modify the calculations

On top of that, it has a great design.
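
In the same spirit, each recommendation boils down to an open calculation whose assumptions are explicit and editable. Here is a sketch of what such a calculation might look like, with all numbers invented for illustration:

    # Open calculation: every assumption is an explicit, editable variable.
    bulbs_replaced = 10          # assumption: bulbs swapped for CFL/LED
    watts_saved_per_bulb = 45    # assumption: 60 W incandescent -> 15 W
    hours_per_day = 4            # assumption: average daily use
    kg_co2_per_kwh = 0.5         # assumption: grid carbon intensity

    kwh_per_year = bulbs_replaced * watts_saved_per_bulb * hours_per_day * 365 / 1000
    print(f"saves {kwh_per_year:.0f} kWh and {kwh_per_year * kg_co2_per_kwh:.0f} kg CO2 a year")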

Our reference list is on Mendeley


We have created a group on Mendeley to share the references we’re collecting during the desk research. See, join and add papers to our group.

Is there a Yammer for scientists?

Coordination costs are the main barrier to increased collaboration in science.

What are the best, most innovative software tools for enabling collaboration between scientists?

For example, is there a Yammer for scientists?

Scientific evidence that gets better the more scientists use it

Just as with web services and collaborative public services, data sharing allows for post-scarcity quality gains: the more people use the data, the better it gets.

Each researcher’s data will get better the more other researchers use it.

The analysis will get better as well.

As David (2011) puts it:

data-sets are not subject to being “over-grazed”, but instead are likely to be enriched and rendered more accurate the more that researchers are allowed to comb through them.

Growth of collaboration in science: are the social sciences catching up with the natural sciences?

Several studies show that article co-authorship (one of the indicators of collaboration) is much more common in the natural sciences than in the humanities. Nevertheless, the social sciences are seeing growth in co-authorship, though mainly in quantitative studies (Moody, 2004).

The work of Larivière et al. (2006), based on an analysis of Canadian scientific articles from 1980 to 2002, shows that almost all articles in the natural sciences and engineering (NSE) are jointly authored, compared with two thirds in the social sciences and about 10% in the humanities.

Finally, the study by Franceschet and Costantini (2010) on a sample of Italian articles suggests that collaboration correlates positively with the number of citations. However, hyper-authored articles (e.g. in physics) receive significantly fewer citations, which may be due, according to the authors, to the fact that they become obsolete much faster than theoretical articles.

Why is collaboration in the humanities so uncommon? Can we explain it by the weight of the journal article, which is much greater in the NSE than in the humanities, where single-authored books prevail?
Does it have an impact on the transfer of ideas and on theoretical consolidation?
And finally, should we induce collaboration in science?
