Posts Tagged ‘peer review’

I still don’t like it

9 Feb 2014

I got a book in the mail this week, a book I hadn’t ordered and would never have ordered. The publisher sent me a complimentary copy, as I’d reviewed the book proposal last year. (It’s the one where the author refused to allow me to have an electronic copy.)

Actually, I soundly trashed the proposal in my review. In the nicest possible way, of course. For example, I said:

And then there are things that are just plain wrong. For example, “We then express our confidence in the H0 with a p-value, which might crudely be considered the probability that the H0 is true.” That is not a crude interpretation of the p-value; that is just wrong.
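The quoted claim really is just wrong: a p-value is computed *assuming* H0 is true, so it cannot also be the probability that H0 is true. A quick simulation makes the distinction concrete (the 50/50 mix of true and false nulls and the effect size are assumptions chosen for illustration, not anything from the book):

```python
from statistics import NormalDist
import random

random.seed(1)
std_normal = NormalDist()

# Simulate many one-sample z-tests. Half the time H0 is true (no effect);
# half the time there is a modest real effect. All numbers are illustrative.
n_tests = 200_000
effect_z = 2.5  # expected z statistic when the effect is real

results = []
for _ in range(n_tests):
    h0_true = random.random() < 0.5
    z = random.gauss(0.0 if h0_true else effect_z, 1.0)
    p = 2 * (1 - std_normal.cdf(abs(z)))  # two-sided p-value
    results.append((h0_true, p))

# Among tests whose p-value lands just under 0.05, how often was H0 true?
window = [h0 for h0, p in results if 0.04 < p < 0.05]
frac_h0 = sum(window) / len(window)
print(f"P(H0 true | 0.04 < p < 0.05) ≈ {frac_h0:.2f}")
```

With these settings, among tests with p just under 0.05, the null is true far more often than 5% of the time; and the answer depends entirely on the assumed mix of true and false nulls and on the effect size, which is exactly why a p-value can't be read, even "crudely," as the probability that H0 is true.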

It seems like if a reviewer says, “This particular book should not be adopted,” the publisher can interpret that to also mean, “and whatever you do, don’t send me a copy.”


Complaints about the NIH grant review process

2 Oct 2013

Earlier this week, I met with a collaborator to discuss what to do with our NIH grant proposal, whose “A1” (the revised version; you don’t get a third try) was “unscored”: its “preliminary score” was in the lower half, so it wasn’t discussed by the review panel and couldn’t be funded.

NIH proposals are typically reviewed by three people and given preliminary scores on five aspects (significance, approach, investigators, environment, innovation) and overall, and the top proposals based on those scores are discussed and scored by the larger panel.

One of the reviewers gave our proposal an 8 for “approach” (on a scale of 1-9, with 1 being good and 9 being terrible) with the following review comments:

4. Approach:
Strengths

  • Well described details for mining of [data] and genotyping of [subjects].

Weaknesses

  • There is no power analysis for Aim 2. Without knowing which and how many [phenotypes] will be evaluated it is not possible to estimate the statistical power.

Valid comments, but is that really all the reviewer had to say? What about Aims 1 and 3, or the other aspects of Aim 2? That is totally fucking inadequate.
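The reviewer’s narrow point, at least, is easy to illustrate: the more phenotypes you test, the stiffer the multiple-testing correction and the lower the power, so power can’t be pinned down without knowing how many tests there will be. A minimal sketch (the effect size, sample size, and Bonferroni correction are all assumptions for illustration, not anything from our proposal):

```python
from statistics import NormalDist
import math

std_normal = NormalDist()

def power_two_sided(effect, n_per_group, alpha):
    """Normal-approximation power of a two-sample z-test with
    standardized effect size `effect` and equal group sizes."""
    ncp = effect * math.sqrt(n_per_group / 2)  # noncentrality of the z statistic
    z_crit = std_normal.inv_cdf(1 - alpha / 2)
    # probability that |Z + ncp| exceeds the critical value
    return 1 - std_normal.cdf(z_crit - ncp) + std_normal.cdf(-z_crit - ncp)

# Power for one phenotype vs. a Bonferroni-corrected threshold for m phenotypes
for m in (1, 10, 100):
    print(m, round(power_two_sided(0.5, 50, 0.05 / m), 2))
```

Power drops steeply as the number of phenotypes grows, which is the (valid) core of the reviewer’s complaint.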

Looking at this review again, I was reminded of how much I despise many aspects of the NIH review process. So it’s led me, finally, to write down some of the things that annoy me.

Code review

25 Sep 2013

There was an interesting news item in Nature on code review. It describes a project by some folks at Mozilla to review the code (well, really just 200-line snippets) from 6 selected papers in computational biology.

There are very brief quotes from Titus Brown and Roger Peng. I expect that the author of the item, Erika Check Hayden, spoke to Titus and Roger at length but could include only short bits from each, and so what they say probably doesn’t fully (or even much) characterize their views of the issue.

Titus is quoted as follows, in reference to another scientist who retracted five papers due to an error in his code:

“That’s the kind of thing that should freak any scientist out…. We don’t have good processes in place to detect that kind of thing in software.”

Roger is quoted at the end, as follows:

“One worry I have is that, with reviews like this, scientists will be even more discouraged from publishing their code…. We need to get more code out there, not improve how it looks.”

I agree with both of them, but my initial reaction, from the beginning of the piece, was closer to Roger’s: we often have a heck of a time getting any code out of people; if we’re too hard on people about the quality of their code, they may become even less willing to share.

On the one hand, we want people to produce good code:

  • that works
  • that’s readable
  • that’s reusable

And it would be great if, for every bit of code, there was a second programmer who studied it, verified that it was doing the right thing, and offered suggestions for improvement.

But, on the other hand, it seems really unlikely that journals have the resources to do this. And I worry that a study showing that much scientific software is crap will make people even less willing to share.

I would like to see the code associated with scientific articles made readily available, during the review process and beyond. But I don’t think we (as a scientific community) want to enforce rigorous code review prior to publication.

Later, on twitter, Titus took issue with the “not improve how it looks” part of what Roger said:

“.@kwbroman @simplystats @rdpeng Please read http://en.wikipedia.org/wiki/Code_review you are deeply, significantly, and completely wrong about code review.”

Characterizing code review as strictly cosmetic was an unfortunate, gross simplification. (And how code looks is important.)

I don’t have enough time this morning to really clean up my thoughts on this issue, and I want to get this out and move on to reading that dissertation that I have to get through by tomorrow. So, let me summarize.

Summary

We want scientific code to be well written: does what it’s intended to do, readable, reusable.

We want scientific code to be available. (Otherwise we can’t verify that it does what it’s intended to do, or reuse it.)

If we’re too hard on people for writing substandard code, we’ll discourage the availability. It’s an important trade-off.

Knuth: Journal referees should assist authors

8 Apr 2013

When serving as referee for a journal, who are you working for?

  • The editor: Will the paper add to the journal’s prestige?
  • The reader: Is it worth reading?
  • The author: How can it be improved?

I’d long thought that the referee’s duty was to the journal editors and then to the readers.

But Donald Knuth’s comments on refereeing persuaded me that I should focus primarily on helping the author to improve the manuscript.

See pages 31-35 (as numbered; actually 33-37 in the pdf) in his notes on mathematical writing. And here’s the missing page on “Hints for referees”.

Even a terrible manuscript can be published, if the author is sufficiently persistent. Your primary job as referee should be to help the author to make it as good as it can be.

Almost immediately after I first read Donald Knuth’s comments (back in 2002), I received one of the worst manuscripts I’ve ever read. It was one of those cases where I really wish the authors were anonymous, because I can’t forget who was responsible for it.

It was hard for me to say, “You have no idea what you’re doing” in a constructive way. (“You should abandon this manuscript” is not constructive, but it could be good advice. The scientific literature could use a bit more self-censorship.)

And I’ve learned to use the “Comments to the editor” as my opportunity to vent. (I would pity the poor editor on the other end, but she/he sent the thing to me!) I’d give an example of my venting, but I think I’ll leave that to another time.

What a waste of paper

7 Mar 2013

A university press asked me to review a book manuscript, and the author “has asked that we not use electronic copies.” So they’re going to send me a hard copy.

My response: “If you only give me a paper copy, I’m going to just scan it and toss the paper. That seems like a waste of time (and paper and postage).”

I should probably have said, “Then forget it.”

Their response included, “What you do with it when it arrives I am just going to take a hear-no-evil approach to.”

Web-enabled publishing environment

23 Oct 2012

Karl Rohe has an interesting commentary in Amstat News this month, on how our current publishing system is obstructing research progress, and what a better future might look like.

What, no coffee?

28 Sep 2012

I was at a CIDR Access Committee meeting in DC a few weeks ago. We review proposals for genotyping or sequencing by the Center for Inherited Disease Research, a service funded by several of the NIH institutes.

We had to buy our own coffee.

It’s silly to complain. There was a coffee shop across the hall from the meeting room, and the coffee there was suitable.

But isn’t it silly to pay airfare for a dozen people for a 3 hr meeting and then chintz on the snacks? Apparently it’s a new government rule. (I’d thought the rule was maybe instituted following the GSA’s Las Vegas party, but it predates that.)

Without coffee, grant reviewing does not seem to go as well.

Positive comments on peer review

27 Apr 2012

We all complain about peer review, particularly when our best work is rejected by every journal from Nature Genetics down to that journal that will publish anything, so that it finally appears in a volume to honor some guy that only he will read.

However, sometimes an anonymous reviewer will identify an important flaw in a paper that you can fix before it’s published, thus saving you from potential public embarrassment.

That happened to me again today. I finally got the reviews back for a paper, eight weeks after it was submitted. I had become a bit impatient, but one of the reviewers identified a hole in our theory section, which we can now fix before publication (I think we figured it out this afternoon), thus avoiding public embarrassment, except for the fact that I’m currently pointing it out publicly.

Complaints about the peer review process are not unlike a common complaint about statisticians: that we are a barrier to scientists publishing what they know to be true. That is sometimes the case, but at other times, both reviewers and statisticians can help you to avoid embarrassing yourself.

I just refused an Elsevier review

10 Feb 2012

This afternoon I refused a request from the American Journal of Human Genetics to review a paper, though the abstract was extremely interesting. AJHG is published by Elsevier, and I’d signed the declaration at http://thecostofknowledge.com to not publish or review for Elsevier journals. If only AJHG were still with the University of Chicago Press…

Michael Eisen recently wrote:

The boycott isn’t perfect. I wish they hadn’t focused exclusively on Elsevier – they are hardly the only bad actors in the field. And it’s crucial that the focus be on papers. Nobody views turning down invitations to review to be a big sacrifice – and publishers will just find someone else. Same thing with editors. But papers are their lifeblood.

I agree with him. It’s easy to turn down a review. (I do so several times a week.)

And so I was feeling quite unsure about turning down the review, but also unsure about breaking the pledge regarding Elsevier. Nevertheless, I came down on the side of the pledge, and responded to AJHG with the following:

It sounds like an interesting paper, but…

I recently signed a public declaration to not publish or review for Elsevier journals (http://thecostofknowledge.com). I noticed at the time that Am J Hum Genet was published by Elsevier (if only it were still at U Chicago Press), which could be a problem.

I’m having second thoughts (especially in that refusing a review for this reason seems too easy…I say no to review requests almost every day), but for now I’m sticking to my promise.

I’m not sure whether it was the right decision.

I think the most important thing for me to do is to work to get Genetics to become open-access, or at least encourage discussion along those lines at the Genetics Society of America.

Paying for scholarly publications

2 Feb 2012

I have a couple of papers that I should be writing, but recent discussion (the whole PIPA/SOPA thing [see Michael Eisen’s OpEd in the New York Times]; the Elsevier boycott) has turned my thoughts to publishing generally.

So I’ll take some time out (way too much time out) to comment on the value and costs of publishing and peer review, how to pay for it, PubMedCentral, etc.