Author Response: Does it help?

Does the author response make any difference?

Well, we won’t be able to give a definitive answer to this, but in our drive to run an evidence-based programme committee, we downloaded the review data at different time points from our conference management software, to see how well the data could answer this question thus far.

We downloaded the review data directly before the beginning of the author response period (March 12) and again recently, after reviewers had had sufficient time to read and react to author responses (March 23).

[Screenshot: summary table of author response statistics for long and short submissions]

First, some basic statistics.  For both long and short submissions, over 80% of submissions filed an author response, and about 20% filed a direct-to-AC communication.

Does author response change the scores?  Setting aside the probably significant confounding variables, and attributing changes in score only to author response, the answer is yes: in about 15-20% of cases, scores changed.  Did they change for the better? On average, yes, it does help to respond; we see a roughly 2:1 ratio of positive to negative changes in score (also, though probably not significant, if authors don’t respond, there’s a stronger tendency toward a negative score change).
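For concreteness, the before/after comparison can be sketched roughly like this. This is a minimal illustration with made-up scores and a hypothetical data layout (submission id mapped to mean reviewer score), not the actual START export format or our actual analysis code:

```python
# Hedged sketch: count how many submissions moved up, down, or stayed
# flat between two review-score snapshots.
def score_change_stats(before, after):
    """before/after: dicts mapping submission id -> mean reviewer score."""
    up = down = unchanged = 0
    for sid, old in before.items():
        delta = after[sid] - old
        if delta > 0:
            up += 1
        elif delta < 0:
            down += 1
        else:
            unchanged += 1
    return up, down, unchanged

# Toy data: 10 submissions; three change, with a 2:1 positive:negative ratio.
before = {i: 3.0 for i in range(10)}
after = dict(before)
after[0] += 1 / 3
after[1] += 1 / 3
after[2] -= 1 / 3
print(score_change_stats(before, after))  # (2, 1, 7)
```

The toy numbers above are chosen only to mirror the ~2:1 ratio described in the post.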

I also wanted to know whether score change trends differ significantly with different initial scores. Definitely yes.  Faceting the results by quartile, there is a significant positive trend for submissions with a high initial score, and a weaker negative trend for submissions with a low initial score.  For the borderline cases (2nd and 3rd quartiles), the boost that author response yields is still there, but weaker.  We leave it to you to draw conclusions as to the cause (we know, we are crunching only numbers, despite ACL being an NLP/CL conference; alas!).
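The quartile faceting can be sketched in a similar spirit. Again, this is an assumed reconstruction with invented scores; the stdlib `statistics.quantiles` here simply stands in for however the actual cut points were computed:

```python
# Hedged sketch: bucket per-paper score deltas by the quartile of the
# paper's *initial* score, within one paper format.
import statistics

def quartile(score, all_scores):
    """Assign a 1-4 quartile rank relative to all scores in this format."""
    q1, q2, q3 = statistics.quantiles(all_scores, n=4)
    if score <= q1:
        return 1
    if score <= q2:
        return 2
    if score <= q3:
        return 3
    return 4

def facet_deltas(before, after):
    """Group per-paper score changes (after - before) by initial quartile."""
    scores = list(before.values())
    facets = {1: [], 2: [], 3: [], 4: []}
    for sid, old in before.items():
        facets[quartile(old, scores)].append(after[sid] - old)
    return facets
```

Averaging each facet's list of deltas would then reveal whether the trend differs between, say, the top and bottom quartiles.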

Direct-to-Area-Chair communication (a new feature we introduced this year) didn’t seem to have much effect; the pattern follows that of the author responses fairly closely.  We’ll decide whether this helped the process or not, pending your opinions.

The fine print: the statistics are not exact.  Some papers had multiple score changes that averaged out to zero, so they are present in the total but not in the upward/downward counts.  Many of the score changes are probably due to consolidation between peer reviews and discussions initiated by the area chairs.  The average score change is a bit over 0.33 (as we have three reviewers, this is the minimal nonzero change to the mean), but not enough to change most papers’ quartile rank.  We used quartiles specific to each paper format (long and short), as they are different.  The “(subset) w/ + direct-to-AC” column is a subset of the middle column, as all submissions that filed a direct-to-AC text response also filed an author response.
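A tiny worked example of why 0.33 is the minimal nonzero change: with three reviewers, a single reviewer moving by one point (the smallest step on the scale) shifts the mean by exactly one third. The scores below are illustrative, not real review data:

```python
# With three reviewers, one reviewer changing by 1 point moves the
# mean score by 1/3 (about 0.33).
def mean(xs):
    return sum(xs) / len(xs)

before = [3, 4, 4]   # three reviewer scores (made up)
after = [4, 4, 4]    # one reviewer raises their score by one point
print(round(mean(after) - mean(before), 4))  # 0.3333
```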

As always, we welcome your insights and your quest for deeper ones.  If you have specific questions or stats that you’d like to see, let us know by commenting or by raising your voice on Twitter or Facebook on these ACL 2017 posts.

17 thoughts on “Author Response: Does it help?”

  1. I assume the quartiles were determined by the initial scores. But then, if the author response primarily raised the scores of already highly rated submissions, what effect would it have on ultimate acceptance? Would it mean more papers are accepted, or just that the numerical bar would go up, and (roughly) the same papers would get into the program?


    1. Well, considering that ‘already high’ scores in this case reach as far down as 2.9 for long papers and 2.6 for short papers (looking at the top 50%), a ‘high’ score is nowhere near acceptance yet.


  2. Thanks! One brief comment: Author responses and direct communication to ACs may affect the final decision without changing scores. Often reviewers will say they’re willing to increase or lower their scores, without doing so, but ACs take this into account.


    1. I agree. In my personal experience, reviewer scores change mostly when reviewers read one another’s reviews / engage in discussion with each other. It seems less often the case that the reviews change in reaction to the author response (perhaps because reviewers have already made up their minds by this late point in the process).

      It would be great to have a controlled trial to test for the effect of the response explicitly, but I can’t see this happening. Instead, having an option in START for reviewers to flag whether the response made them feel better/the same/worse about the paper after reading it might be a reasonable surrogate.


    2. Hi Hal, this is a great link to your past post, and I really have to thank you for calling attention to it again. When we are at a bit more of a lull, I’d love to revisit this post and try to get stats on what you’ve described in it. Both Regina and I have first-hand experience of saving certain individual papers from reviewers who we believed were in the wrong, and I think our ACs had similar experiences.

      We’ll be writing up a post about this in a while (when we’ve gotten some other issues ironed out), while it’s still fresh in our heads.

      And yes, totally agree with Trevor and Miles on their comments below too. If you have suggestions on what we could do while we crunch the data on this, please share them.


  3. Trevor has it. Also this discussion is from the perspective of acceptance. Do author responses make scores more uniform or help catch rogue reviewers?


  4. While the sub-score changes are interesting, I think the overall score is more relevant. How does the overall score change with the author-response?


  5. Softconf only shows that “At this time, there are no action items available for this submission.” How can I get the acceptance/rejection result?


    1. Hi CX Li, all:

      If you still haven’t received any notification, please let us know. We have sent out everything and have also tried to re-forward any bounced notifications…

