26 December 2014

The science of asking

Holidays are a time for opening things (like presents)! Today, I want to share my latest experiences in open science, involving the recent paper I co-authored (Byrnes et al., 2014) about science crowdfunding and #SciFund. I was very pleased to see it land on the front page of PLOS ONE when it was released.


While I’ve been a supporter of open access, I have never quite gotten on board with what some have called “open notebook” science: posting the data as you go. For me, there are too many distractions and dead ends in an ongoing project. I would much rather wait until I have a complete story, all ready to be tied up in a bow in a published paper.

The #SciFund project, however, was much different. I got involved shortly after round one closed; I think it was the morning after it ended. Someone (for the life of me, I can’t find who) posted an initial analysis of the round one projects: how much money they’d raised, how many donors they had, the length of the project descriptions, and so on. I took that, gathered even more data, and shared it as a spreadsheet on Google Docs. Someone (Jarrett Byrnes, I think) then took that data and archived it on Figshare.

I did the same after round two. And again after round three. And after round four. I stayed up quite late a couple of times so that I could collect the social media data (tweets and Facebook likes) from the Rockethub website as soon as the projects closed out. And I archived all that data, again, on Figshare.
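
Incidentally, if you wanted to automate that kind of deadline-night capture rather than doing it by hand, a small snapshot script would do. The sketch below is purely illustrative: the URL list, the regular expressions, and the CSV layout are all hypothetical placeholders, not what we actually used (our numbers went into a shared Google Docs spreadsheet before landing on Figshare).

# Hypothetical sketch: snapshot crowdfunding project stats to a timestamped CSV.
# Every URL and pattern here is a placeholder; the real page structure of any
# crowdfunding site is not assumed.
import csv
import datetime
import re

import requests

PROJECT_URLS = [
    # "https://www.rockethub.com/projects/example-project",  # placeholder
]

def fetch_counts(url):
    """Fetch a project page and pull out a couple of counts.

    The regular expressions below are illustrative placeholders; real
    patterns would depend on the page's actual HTML.
    """
    html = requests.get(url, timeout=30).text
    raised = re.search(r"\$([\d,]+)\s+raised", html)
    funders = re.search(r"([\d,]+)\s+funders", html)
    return {
        "raised": raised.group(1).replace(",", "") if raised else "",
        "funders": funders.group(1).replace(",", "") if funders else "",
    }

def snapshot(path="scifund_round_snapshot.csv"):
    """Append one timestamped row per project; repeated runs build a time series."""
    timestamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for url in PROJECT_URLS:
            counts = fetch_counts(url)
            writer.writerow([timestamp, url, counts["raised"], counts["funders"]])

if __name__ == "__main__":
    snapshot()

Run on a schedule in the hours around a round’s close, repeated snapshots like this would capture the final tallies before the pages change, which is exactly what the late nights were for.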

So this time, all the data was public from the start.

Then, Jarrett and I blogged on the #SciFund blog about what we were seeing in the data. For instance, here’s one by me comparing the three rounds, and here’s one where Jarrett admits that the model he developed to explain success in Round 1 didn’t explain success in Rounds 2 and 3:

(M)y first response was to freak out a little bit on the inside. I mean, have we been wrong this entire time? Have I been talking out of my scientific butt? Or did I get something deeply and fundamentally wrong.

The published paper reminded me of how long in the making this thing was. It was submitted in the middle of June 2012, and was published in December 2014. This is the second time this year I’ve had a paper take well over two years to make it into print. Unlike the first case, which was due mostly to delays on the editorial side, this delay came from both the journal and the authors. First, as in the Southwestern Naturalist situation, the PLOS ONE editor initially handling the article went AWOL on us, and we had to find a new editor. Second, we authors were not always prompt in making our revisions. Coordinating four authors can be tricky, and I can testify that we all worked on this thing. There are no gift authorships here!

Once we had submitted the manuscript to PLOS ONE for review, we had a reasonably complete draft, so we also posted it as a pre-print on the PeerJ pre-print server.

What did I learn from this experience with doing the analysis out in the open?

Journal pre-publication peer review still matters

Plenty of people had chances to comment on our work, particularly after it was deposited in the PeerJ pre-print server. We did get comments on the pre-print, but the journal reviewers’ comments were more comprehensive.

Maybe we just got lucky with our reviewer. But others have also expressed the opinion that people are most likely to act as peer reviewers when they are asked to do it for a journal.

Publishing a peer-reviewed journal article still matters

By the time the PLOS ONE paper came out, I’d spent several years blogging about #SciFund here at NeuroDojo and on the #SciFund blog, and talking a lot about it on Twitter and other social media. The pre-print is very similar to the final published PLOS ONE paper. I worried nobody would pay any attention to the PLOS ONE paper, because there was not a lot in it that we had not already talked about. Or so I thought.

Boy, was I wrong. The altmetrics for this article rose quickly, and are now tied with those for my article about post-publication peer review from much earlier this year.

The word of mouth was helped by Jai organizing a press release through the University of California, Santa Barbara. (I tried to interest my university in putting out a press release. Silence from them.) That helped generate a reasonable number of pointers to the article on Twitter.

We also tried making a video abstract. It has a couple of hundred views now, which is not horrible.


And we did a panel discussion the week after the paper was released, too.


Following the panel discussion, the #SciFund paper rated an article on the Nature website.

But even among my regular followers, people who I thought might have looked at the pre-print, it was the published paper that drew comments. The pre-print didn’t get the traction that the final published paper did.

An excellent example of this is that we had one detailed comment picking apart Figure 8 in the PLOS ONE paper. Someone could have made this comment at the pre-print stage; indeed, that is one of the usual arguments for making pre-prints available. But the PLOS ONE comments feature isn’t used that often, so my reaction to the criticism was kind of summed up thus:


We’re still trying to figure out how to respond formally. Should we try to issue an erratum to the figure? Just post a corrected figure in the comments? But here is a new version of the figure:

[Corrected version of Figure 8]

Being open and sharing data is a good thing to do. But my experience with this paper suggests to me that the “screw journals” approach is not ready for prime time yet. And this was a project that, in theory, should have been well suited to the “just blog it all as you go” method of sharing science. #SciFund was born and raised online. It exists because social media exists. I would have expected this work to have reached its audience well before the final PLOS ONE paper came out. But all the blogging, tweeting, and pre-prints did not equal the impact of the actual journal article.

I am pleased this article is finally out. But we still have the round four #SciFund data to analyze, and we are starting to eye round five, so I don’t think this will be my last crowdfunding paper.

Reference

Byrnes JEK, Ranganathan J, Walker BLE, Faulkes Z. 2014. To crowdfund research, scientists must build an audience for their work. PLOS ONE 9(12): e110329. http://dx.doi.org/10.1371/journal.pone.0110329

External links

Crowdfunding 101 (Press release)
Crowdfunding works for science
Secret to crowdfunding success: build a fanbase
Do pre-prints count for anything?

Hat tip to Amanda Palmer, fellow traveler in crowdfunding, whose book title inspired this post’s title.

2 comments:

Unknown said...

This totally meshes with my experience. We have two preprints on bioRxiv, and although someone appears to be downloading the PDFs, we haven't received comments or feedback on either. Both received thorough reviews at traditional journals. I guess there isn't a lot of incentive for someone to do a "real" review on a preprint. It would be interesting to compare the number of comments on the average bioRxiv article to, say, the average PLOS article (I am guessing very close to zero for both, while I would suspect the peer reviews would be much more substantial).

Unknown said...

I am biased, as I am a co-author with Zen on this paper, but this is a great analysis of the intellectual journey that our paper has taken.

Why were the comments that we got in the peer review process and on the journal website more comprehensive than the comments we had received earlier through the open notebook process? After all, as Zen mentioned, we spent years collecting and analyzing our data pretty much all out in the open. Why didn't we get a ton of really incisive feedback over the years? (I should say that we certainly got some useful feedback along the way.)

I think that there is a big reason why and I would guess that this reason is applicable fairly broadly, when it comes to open notebook science. For close to 100% of scientists, science crowdfunding is still not on the radar screen. Consequently, a manuscript or analysis about science crowdfunding isn't something that is going to get a lot of focussed attention, unless you "force" someone (called a reviewer) to pay attention.

I think the situation would have been very different if we had been working on a topic that is a current hot spot of activity in a scientific field. Then I am guessing we would have had a whole community of scientists picking apart our analysis from day one, with the community being the set of scientists with a professional interest in the given field.

So, I guess that open notebook science would generate a lot of useful back and forth if you are in a very populated part of current scientific interest, but not so much if you are working on weird things, as Zen and I like to do. Jai