Science 2.0 study

Updates on progress and discussion of results from the study “Science 2.0: implications for European policies on research and innovation”

Science 2.0 is not just a passing fad: crowdsourcing the evidence

We’re approaching the final stage of our study. So far, we have opened up our bibliography through our Mendeley group here; our notes through this very blog; our model for open science; and our draft policy recommendations for the EU. And we’ve benefited from your comments and insight.

Now, we need your help to strengthen the evidence on the importance of Science 2.0, if we want policy-makers to take it seriously.

Therefore, we are sharing here the final presentation that we delivered to the European Commission, DG RTD.

Help us improve it by gathering more data and evidence showing that Science 2.0 is important and disruptive, and that it’s happening already. In particular, we ask you to share evidence and data on the take-up of Science 2.0: how many scientists are adopting it? With what benefits?

We ask everyone interested in Science 2.0 to share the evidence at hand by adding comments to the presentation. If you prefer, just leave a comment on this post.

Shape with us the future research priorities of the EU – comment on the draft policy recommendations

Science 2.0 presents new opportunities and challenges for the European Research Area. It’s not a matter of embracing or rejecting the shift, but rather of understanding it and designing appropriate policy measures that grasp the opportunities and overcome the challenges.

As we approach the final phase of our study, we have drafted an initial version of policy recommendations for the European Commission that we would like to discuss with researchers, Science 2.0 evangelists, publishers, representatives of funding bodies, librarians and other interested parties. We have therefore published the recommendations as a commentable document, open for your suggestions, comments and additions.

Our recommendations are clustered around four challenges:

  • RESEARCHERS’ REPUTATION AND EVALUATION – the supremacy of the impact factor

Scientists still follow the old ‘publish or perish’ rule, frequently passing over opportunities to engage in activities that do not ultimately result in a peer-reviewed article. The career process is not conducive to sharing data and code, or to collaborating at an early stage of the scientific process.

  • EU RESEARCH FUNDING – rigid funding instruments

Current research funding is mainly roadmap-based and not conducive to open and serendipitous research activities, which are confined to limited areas such as the ERC and FET-Open. The evaluation system focuses narrowly on articles and patents as research outcomes.

  • SKILLS – lack of data and scientific literacy

There’s a need for more and better data-literate scientists across all disciplines, as well as for greater awareness and scientific literacy among citizens.

  • STANDARDS AND INFRASTRUCTURES – immature infrastructure and lack of standards

Without common standards for data management, open access and open data policies cannot be scaled up. A stronger physical and institutional infrastructure for the growing amount of scientific data and publications is needed to create a favourable environment for the development of Science 2.0.

CLICK HERE TO SEE AND COMMENT ON POLICY RECOMMENDATIONS

Note: These policy recommendations are based on desk research (see the references in our Mendeley group), interviews with stakeholders, and case studies (the results of our research will be published together with the final version of the policy recommendations). They also build upon existing recommendations such as the LiquidPub project’s final recommendations and the Surfboard for Riding the Wave report by Knowledge Exchange (Graaf & Waaijers, 2011).

This study takes a broader view of the full research cycle, beyond open access to scientific publications, which is a well-analysed theme for which clear policy recommendations already exist and have been embraced by policy-makers. These recommendations should therefore be considered additional to the existing Open Access debate.

The pervasive metaphor of the genome

“Genome” used to be a technical word used by scientists. It has become a metaphor, a paradigm of our time.

The Human Genome Project was an unprecedented effort to carefully and patiently map the information of the Human Genome.

Rather than modelling and developing theories about the genome, the project adopted an inductive approach: collecting all available information and processing it. It’s THE classic big data project.

What is interesting is that the genome has now become a metaphor, extending this approach to research efforts in very different fields.

For instance, the Music Genome Project described all the possible features of published music, and led to the creation of the Pandora music service. As Wikipedia describes it:

A given song is represented by a vector (a list of attributes) containing approximately 400 “genes” (analogous to trait-determining genes for organisms in the field of genetics). Each gene corresponds to a characteristic of the music, for example, gender of lead vocalist, level of distortion on the electric guitar, type of background vocals, etc. Rock and pop songs have 150 genes, rap songs have 350, and jazz songs have approximately 400.

I recently came across the Startup Genome Project, which analyses a large number of variables to understand “the secret of success” of startups.

What these projects share is an inductive method: little theory, but a focus on gathering lots of data and seeing what patterns emerge.
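
As a toy illustration of this feature-vector, pattern-first approach, here is a minimal Python sketch. The gene names and values are made-up placeholders, not actual Music Genome attributes; the point is only that similarity emerges from the data, with no theory about the songs.

```python
from math import sqrt

# Toy "genome" vectors: each song is described by numeric genes.
# These attributes are hypothetical, not real Music Genome features.
songs = {
    "song_a": {"distortion": 0.9, "tempo": 0.7, "vocal_harmony": 0.2},
    "song_b": {"distortion": 0.8, "tempo": 0.6, "vocal_harmony": 0.3},
    "song_c": {"distortion": 0.1, "tempo": 0.4, "vocal_harmony": 0.9},
}

def cosine_similarity(a, b):
    """Compare two gene vectors, with no model of what a 'song' is."""
    genes = sorted(set(a) | set(b))
    va = [a.get(g, 0.0) for g in genes]
    vb = [b.get(g, 0.0) for g in genes]
    dot = sum(x * y for x, y in zip(va, vb))
    norm = sqrt(sum(x * x for x in va)) * sqrt(sum(y * y for y in vb))
    return dot / norm if norm else 0.0

# The pattern emerges from the data alone: song_a and song_b cluster.
for pair in [("song_a", "song_b"), ("song_a", "song_c")]:
    print(pair, round(cosine_similarity(songs[pair[0]], songs[pair[1]]), 2))
```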

This is the new scientific method that Chris Anderson referred to in Wired. It’s extending from genetics to many other fields.

The emerging institutional setting of Science 2.0

There are lots of interesting stories about Science 2.0. But in the context of this study we are uncovering a far richer and more substantial infrastructure, which we consider the emergent, self-organising institutional setting of Science 2.0.

First, there is a self-regulation effort around open access. While many funding agencies are paying more attention to open access, the great surge in open access behaviour by research institutions is mainly due to self-regulation.

Secondly, there is an emerging meso-level infrastructure for coordinating this bottom-up effort. The market for “crowdsourcing” and “open innovation” solutions is exploding: companies such as Innocentive, ChallengePost and many others offer solutions for reaching out to a mass of potential innovators. Not only do they offer the technological platform; most importantly, they offer the process design and the database of people. Recently, open-source efforts such as Pybossa have become available. Other non-profit projects include SciFund Challenge, which helps citizens find interesting challenges to participate in.
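
As a rough sketch of what this meso-level coordination looks like in practice, the snippet below posts micro-tasks to a crowdsourcing platform over a generic REST API. The endpoint, fields and key are hypothetical placeholders, not the documented API of any of the platforms named above.

```python
import requests

# Hypothetical endpoint and API key: placeholders for illustration only,
# not the real API of Innocentive, ChallengePost or Pybossa.
API_URL = "https://crowd.example.org/api/tasks"
API_KEY = "YOUR_API_KEY"

# A batch of micro-tasks a lab might push out to volunteers.
tasks = [
    {"image_url": f"https://data.example.org/specimens/{i}.jpg",
     "question": "Does this specimen show cell division?"}
    for i in range(3)
]

for task in tasks:
    resp = requests.post(API_URL, json=task, params={"api_key": API_KEY})
    resp.raise_for_status()
    print("created task", resp.json().get("id"))
```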

Interoperability standards are becoming available, for example in the field of annotation, to facilitate data sharing and collaboration beyond the interoperability of bibliographies (which can now be considered a fait accompli).
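
To make this concrete, here is a minimal sketch of what an interoperable annotation record could look like, loosely inspired by the Open Annotation model. The field names and URIs are simplified assumptions, not the normative vocabulary.

```python
import json

# A simplified, hypothetical annotation: a body (the comment) linked to
# a target (the annotated resource). Real implementations use richer
# JSON-LD contexts defined by the annotation standards.
annotation = {
    "id": "urn:annotation:example-1",
    "creator": "https://orcid.example.org/0000-0000-0000-0000",
    "body": {
        "type": "Comment",
        "text": "The sample size here seems too small to support the claim.",
    },
    "target": {
        "source": "https://journal.example.org/article/42",
        "selector": {"type": "TextQuote", "exact": "n = 12 participants"},
    },
}

print(json.dumps(annotation, indent=2))
```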

Culture is also changing, with increasing rewards for scientists who share. Alternative metrics, such as AltMetrics and PeerProduction, are being developed to measure reputation, as described in a previous post.

So probably what we need is not just old-style top-down policies and regulation on Science 2.0, but also a softer mix of tools, methodologies and people.

Open Access is not a luxury, it is a must-have for the EU

Have a look at, and comment on, Commissioner Neelie Kroes’ speech opening the PEER2012 conference.

The EC is working on including data sharing as a requirement for EU-funded projects, and on Recommendations for Member States on improving access to, and the management and preservation of, scientific results.

Openness in the Research Cycle

We’re looking for a model that enables us to describe the changes in the research process brought about by Science 2.0. First, we proposed a division between open science, citizen science and data-intensive science.

Now, we have focused on the research cycle, trying to capture different applications at different stages of the research process. The inner circle in the diagram below represents the stages of the research process, from conceptualisation to the publication of a peer-reviewed article.

In the Science 2.0 model, openness and the principles of sharing and collaboration are (or can be) present at every stage of the research process, whereas in the traditional model the only result that is shared is the peer-reviewed article (often behind a paywall).

At the conceptualisation stage, open discussion of ideas (blogs, fora) and knowledge sharing (open annotation, open bibliographies) are important. Next comes the data-gathering stage, where data and research praxis can be shared in real time (open data, open lab notebooks) and data can be gathered in collaboration with citizens. Depositing data so that it can be analysed further requires eInfrastructures.

In many instances the data can also be analysed with the help of volunteers (citizen science) and through open collaboration (collaborative analysis), and the analysis can be facilitated by sharing open software.

The outcome of the analysis can be published as an article or a book chapter (which can be updated in an instant – liquid publications), but also as a statement accompanied by metadata and linked to other statements (nanopublications). The article can be published in an open access journal or deposited in an institutional repository, allowing wider accessibility, and the data can be published and linked to the article.

Finally, publications are subject to review by the academic community, to establish the importance of the findings and to filter the growing volume of scientific literature according to relevance and significance for the field. Publications can be opened to post-peer review, where the community openly discusses the importance of the discovery. Other reputation systems, distinct from peer review, can also be used to measure scientific excellence and author/publication impact (e.g. altmetrics).
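
To make the nanopublication idea more tangible, here is a minimal sketch of a single machine-readable statement with provenance metadata, using the rdflib Python library. The URIs and property names are hypothetical placeholders, and real nanopublications use several named graphs (assertion, provenance, publication info) rather than this simplified single graph.

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("https://example.org/")          # hypothetical vocabulary
PROV = Namespace("http://www.w3.org/ns/prov#")  # W3C provenance terms

g = Graph()
g.bind("ex", EX)
g.bind("prov", PROV)

# The assertion: one machine-readable scientific statement.
statement = URIRef("https://example.org/statement/1")
g.add((statement, RDF.type, EX.Assertion))
g.add((EX.GeneX, EX.isAssociatedWith, EX.DiseaseY))

# Provenance metadata linking the statement to its author and source.
g.add((statement, PROV.wasAttributedTo,
       URIRef("https://orcid.example.org/0000-0000-0000-0000")))
g.add((statement, PROV.wasDerivedFrom,
       URIRef("https://journal.example.org/article/42")))

print(g.serialize(format="turtle"))
```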

What’s missing in our diagram? What should be added or changed to better capture the Science 2.0 phenomenon?

New ways to evaluate scientists

The main bottleneck to the adoption of the Science 2.0 paradigm is the lack of recognition. Scientists’ careers are determined by papers, articles and citations; there is no recognition for releasing data, code or laboratory notebooks. Moreover, effective reputation management tools can play a key role in finding the right micro-expertise to involve in large-scale collaborative efforts.

As GrrlScientist puts it:

If there is no way to ensure that scientists get credit for their ideas and intellectual contributions, then they will not contribute to the Open Science movement. Traditionally, the way that credit has been assigned to scientists has been through publication of their data in peer-reviewed journals and by citing their colleagues’ work in their papers.

Michael Nielsen recognizes this as well in his book.

In our paper, we point to the possibility of creating new ways of managing reputation, such as the Open Source example of IBM.

In our study, we’re trying to look for actual implementations of reputation management for scientists. So far we’ve come across:

PeerEvaluation, a service which helps scientists share their data and papers and thereby measure their reputation

Altmetrics, a service which maps the reputation of scientists by monitoring how people use their papers on CiteULike, Mendeley and Zotero

This is closely related to, and overlaps with, alternative ways of doing peer review, such as F1000.
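
To get a feel for what such article-level metrics look like, here is a minimal sketch querying Altmetric’s public v1 DOI lookup. The example DOI is a made-up placeholder, and the response fields we read below are assumptions that may differ from what the service actually returns.

```python
import requests

# Look up article-level metrics for a DOI via Altmetric's public v1 API.
doi = "10.1234/example.2012.001"  # hypothetical placeholder DOI
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")

if resp.status_code == 200:
    data = resp.json()
    # Field names are assumptions about the response shape.
    print("title:", data.get("title"))
    print("attention score:", data.get("score"))
    print("readers:", data.get("readers", {}))
else:
    print("no metrics found for", doi)
```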

However, these services remain highly experimental and there is little data about how they are used. Do you have any evidence of uptake and impact of alternative ways to evaluate scientists?

Open text, data and code

Just came across Ten Brighter Ideas (HT Jon Udell)

The tool behind it allows you to see the assumptions and calculations behind the recommendations for environment-friendly behavior.

It’s a case of:

– open text, where you are able to explore the rationale behind each statement

– open data, where you can see the data behind the calculation

– open code, as you can directly act on the code and modify the calculations

On top of that, it has a great design.
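
In the same spirit, here is a minimal sketch of what “open code” behind such a recommendation might look like: every assumption is an explicit, editable variable rather than a hidden constant, so readers can inspect and modify the calculation. The figures are made-up placeholders, not data from Ten Brighter Ideas.

```python
# An "open calculation": all assumptions are visible and editable.
# Every figure below is a made-up placeholder, not real data.
KETTLE_BOILS_PER_DAY = 4          # assumption: household usage
LITRES_PER_BOIL = 1.5             # assumption: average overfill
LITRES_ACTUALLY_NEEDED = 0.5      # assumption: one mug plus margin
KWH_PER_LITRE_BOILED = 0.11       # assumption: energy to boil a litre
KG_CO2_PER_KWH = 0.5              # assumption: grid carbon intensity

# Savings from boiling only the water you need.
wasted_litres_per_year = (
    (LITRES_PER_BOIL - LITRES_ACTUALLY_NEEDED) * KETTLE_BOILS_PER_DAY * 365
)
saved_kwh = wasted_litres_per_year * KWH_PER_LITRE_BOILED
saved_co2_kg = saved_kwh * KG_CO2_PER_KWH

print(f"Energy saved per year: {saved_kwh:.0f} kWh")
print(f"CO2 avoided per year:  {saved_co2_kg:.0f} kg")
```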

Our reference list is on Mendeley

We have created a group on Mendeley to share the references we’re collecting during the desk research. See, join and add papers to our group.

Is there a Yammer for scientists?

Coordination costs are the main barrier to increased collaboration in science.

What are the best, most innovative software tools for enabling collaboration between scientists?

For example, is there a Yammer for scientists?
