Science 2.0 study

Updates on progress and discussion of results from the study "Science 2.0: implications for European policies on research and innovation"


The pervasive metaphor of the genome

“Genome” used to be a technical term employed by scientists. It has become a metaphor, a paradigm of our time.

The Human Genome Project was an unprecedented effort to carefully and patiently map the information contained in the human genome.

Rather than modelling and developing theories about the genome, the project adopted an inductive approach: collecting all the available information and processing it. It’s THE classic big data project.

What is interesting is that the genome has now become a metaphor, extending this approach to research efforts in very different fields.

For instance, the Music Genome Project described all the possible features of published music, and led to the creation of the Pandora music service. As Wikipedia describes it,

A given song is represented by a vector (a list of attributes) containing approximately 400 “genes” (analogous to trait-determining genes for organisms in the field of genetics). Each gene corresponds to a characteristic of the music, for example, gender of lead vocalist, level of distortion on the electric guitar, type of background vocals, etc. Rock and pop songs have 150 genes, rap songs have 350, and jazz songs have approximately 400.
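
To make the mechanics concrete, here is a minimal sketch of the idea: represent each song as a vector of attribute scores and recommend the nearest neighbour. The gene names and scores below are invented for illustration and are not Pandora’s actual data.

```python
import math

# Each song is a vector of hypothetical "gene" scores in [0, 1], in this order:
# [distorted_guitar, female_lead_vocal, syncopation, acoustic_texture].
# Names and values are invented; Pandora's real model uses hundreds of
# expert-annotated attributes per song.
songs = {
    "song_a": [0.9, 0.0, 0.3, 0.1],
    "song_b": [0.8, 0.1, 0.4, 0.2],
    "song_c": [0.1, 1.0, 0.2, 0.9],
}

def distance(u, v):
    """Euclidean distance between two gene vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def most_similar(seed, catalogue):
    """Recommend the song whose gene vector lies closest to the seed's."""
    return min(
        (name for name in catalogue if name != seed),
        key=lambda name: distance(catalogue[seed], catalogue[name]),
    )

print(most_similar("song_a", songs))  # -> song_b
```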

I recently came across the Startup Genome Project, which analyzes a large number of variables to understand “the secret of success” of startups.

What these projects share is an inductive method: little theory, but a focus on gathering lots of data and seeing what patterns emerge.

This is the new scientific method that Chris Anderson referred to in Wired. It’s extending from genetics to many other fields.

The emerging institutional setting of Science 2.0

There are lots of interesting stories about Science 2.0. But in the context of this study we are uncovering a far richer and more substantial infrastructure, which we consider the emergent self-organising institutional setting of Science 2.0.

First, there is a self-regulation effort around open access. While many funding agencies are paying more attention to open access, the great surge in open access behaviour by research institutions is mainly due to self-regulation.

Secondly, there is an emerging meso-level infrastructure for coordinating this bottom-up effort. The market for “crowdsourcing” and “open innovation” solutions is exploding: companies such as InnoCentive, ChallengePost and many others offer solutions for reaching out to a mass of potential innovators. Not only do they offer the technological platform: most importantly, they offer the process design and the database of people. Recently, open-source efforts such as PyBossa have become available. Other nonprofit projects include SciFund Challenge, which helps citizens find interesting challenges to participate in.
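
As a rough illustration of how such platforms are driven programmatically, here is a sketch of pushing a task to a PyBossa-style server over its JSON REST API. The server URL, project id and task fields are assumptions made for the example, not a guaranteed match to any live deployment.

```python
import requests

# Hypothetical server and credentials; PyBossa exposes a JSON REST API,
# but the host, project id and task schema below are assumptions.
SERVER = "https://crowdcrafting.example.org"
API_KEY = "your-api-key"

task = {
    "project_id": 42,       # the crowdsourcing project to feed
    "info": {               # free-form payload shown to volunteers
        "image_url": "https://example.org/specimen_001.jpg",
        "question": "Is a bird visible in this photo?",
    },
}

resp = requests.post(
    f"{SERVER}/api/task",
    params={"api_key": API_KEY},
    json=task,
    timeout=10,
)
resp.raise_for_status()
print("Created task:", resp.json().get("id"))
```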

Interoperability standards are also becoming available, for example in the field of annotation, to facilitate data sharing and collaboration beyond the interoperability of bibliographies (which can now be considered a fait accompli).
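
For annotation, the W3C Web Annotation data model (the successor to the Open Annotation drafts) is one such standard. A minimal annotation, built here as a Python dict with placeholder identifiers and URLs, looks roughly like this:

```python
import json

# A minimal annotation in the style of the W3C Web Annotation data model.
# The ids and URLs are placeholders; only the @context URL is the real one.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "http://example.org/annotations/1",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "value": "This figure contradicts the result reported in section 3.",
        "format": "text/plain",
    },
    "target": "http://example.org/papers/123#figure-2",
}

print(json.dumps(annotation, indent=2))
```

Because the body and target are plain, typed JSON, any compliant tool can consume annotations produced by any other, which is exactly the interoperability point.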

Culture is also changing, with increasing rewards for scientists who share. Alternative metrics are being developed to measure reputation, such as AltMetrics and PeerProduction, as described in a previous post.

So probably what we need is not just old-style top-down policies and regulation on Science 2.0, but also a softer mix of tools, methodologies and people.

New ways to evaluate scientists

The main bottleneck to the adoption of the Science 2.0 paradigm is the lack of recognition. The career of a scientist is determined by papers, articles and citations; there is no recognition for releasing data, code or laboratory notebooks. Moreover, effective reputation management tools can play a key role in finding the right micro-expertise to involve in large-scale collaborative efforts.

As GrrlScientist puts it:

If there is no way to ensure that scientists get credit for their ideas and intellectual contributions, then they will not contribute to the Open Science movement. Traditionally, the way that credit has been assigned to scientists has been through publication of their data in peer-reviewed journals and by citing their colleagues’ work in their papers.

Michael Nielsen recognizes this as well in his book.

In our paper, we point to the possibility of creating new ways of managing reputation, such as the open source example of IBM.

In our study, we’re trying to find actual implementations of reputation management for scientists. So far we’ve come across:

PeerEvaluation, a service which helps scientists share their data and papers and thereby measure their reputation

Altmetrics, a service which maps the reputation of scientists by monitoring how people use their papers on CiteULike, Mendeley and Zotero (a toy sketch of such a metric follows below)
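
Here is the promised sketch of the altmetrics idea: combine usage signals from several platforms into one score. The platforms, counts and weights are all invented for illustration; real services define their own signals and weightings.

```python
# Hypothetical per-platform weights; real altmetrics providers choose their own.
WEIGHTS = {"mendeley_readers": 1.0, "citeulike_bookmarks": 0.5, "tweets": 0.25}

def altmetric_score(signals):
    """Weighted sum of per-platform usage counts for one paper."""
    return sum(WEIGHTS.get(source, 0.0) * count for source, count in signals.items())

paper_signals = {"mendeley_readers": 120, "citeulike_bookmarks": 40, "tweets": 300}
print(altmetric_score(paper_signals))  # 120*1.0 + 40*0.5 + 300*0.25 = 215.0
```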

This is closely related to, and overlaps with, alternative ways of doing peer review, such as F1000.

However, these services remain highly experimental and there is little data about how they are used. Do you have any evidence of uptake and impact of alternative ways to evaluate scientists?

Open text, data and code

Just came across Ten Brighter Ideas (HT Jon Udell)

The tool behind it lets you see the assumptions and calculations behind its recommendations for environmentally friendly behavior.

It’s a case of:

– open text, where you are able to explore the rationale behind each statement

– open data, where you can see the data behind the calculation

– open code, as you can directly act on the code and modify the calculations (a toy example of such a transparent calculation follows below)
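
As a toy illustration of the pattern, here is what a calculation looks like when its assumptions, data and code are all exposed for readers to inspect and change. All figures are placeholders, not the tool’s actual numbers.

```python
# Open data: the assumptions, stated explicitly so readers can challenge them.
# Every number here is an illustrative placeholder.
ASSUMPTIONS = {
    "kwh_saved_per_bulb_per_year": 50,   # swapping one incandescent for an LED
    "kg_co2_per_kwh": 0.4,               # grid carbon intensity
    "bulbs_per_household": 10,
}

# Open code: the calculation itself, free to inspect and modify.
def co2_saved_per_household(a=ASSUMPTIONS):
    return (a["kwh_saved_per_bulb_per_year"]
            * a["kg_co2_per_kwh"]
            * a["bulbs_per_household"])

# Open text: the statement links its claim directly to the computation.
print(f"Estimated saving: {co2_saved_per_household():.0f} kg CO2 per household per year")
```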

On top of that, it has a great design.

Is there a Yammer for scientists?

Coordination costs are the main barrier to increased collaboration in science.

What are the best, most innovative software tools for enabling collaboration between scientists?

For example, is there a Yammer for scientists?

Scientific evidence that gets better the more scientists use it

Just as with web services and collaborative public services, data sharing allows for post-scarcity quality gains: the more people use the data, the better it gets.

Each researcher’s data will get better the more other researchers use it.

The analysis will get better as well.

As David (2011) puts it:

data-sets are not subject to being “over-grazed”, but instead are likely to be enriched and rendered more accurate the more that researchers are allowed to comb through them.

Visual debate about the pros and cons of Science 2.0

We’ve created a visual discussion about the positive and negative impacts of science 2.0.

Please add your arguments!

Neelie Kroes’ speech on open science

You can comment on it here


To make progress in science, we need to be open and share.

The British scientist Isaac Newton famously once said, along with many other luminaries over the years, “If I have seen further, it is by standing on the shoulders of giants”. That may seem rather modest for the man who is credited with so many lasting ideas. But indeed he was right: because he couldn’t have reached the astonishing results he did without accessing and learning from the work of others. Without the raw data, the technical innovations and the findings of people like Brahe, Copernicus, and Kepler. And of course of Galileo, once himself a member of this very Academy.

Does Science 2.0 make for better science?

This is one of the key questions we need to address.

Does openness at an early stage actually improve the quality of the outputs? Should researchers therefore spend more time blogging and less time doing research? Does opening up pay off in terms of the quality of outputs?

Maybe there are specific contextual conditions under which it pays off. What are these conditions? Basic research? Natural sciences? Young researchers?

Is there evidence showing this? Are there robust studies demonstrating that more open and collaborative scientists are more productive/insightful?
