Dear all:

We are delighted to have Xavier and Kevin, co-chairs of the recent EMNLP 2016, write a post detailing key difficulties they faced in this past year’s conference organization. If you have a burning issue with how we (or ACL in general) put together the scientific program, please contact us so we can consider a guest post from you. Now, on to our regularly scheduled program…
– Regina and Min

Thanks to Regina and Min-Yen for letting us participate in this great blog. We’re the program chairs for EMNLP 2016. Regina and Min-Yen asked us to comment about the issue of “Reject before review”, which will hopefully be helpful for authors.

At EMNLP 2016 we had to reject 46 papers without review, out of 1,185 initial submissions, which represents nearly 4%. And let us stress that we had to, and we are sorry for it, because we are aware of the hard work behind each submission. We are somewhat used to rejecting papers with reviews, based on their content, but we were not expecting to have to reject so many papers before review, based on their form.

There are three main categories of violations of the instructions: non-anonymous submissions, over-length submissions, and violations of the official style files. The first category is rare, easy to validate, and often correlated with formats that are completely different from the official ones. The other two categories are much more frequent than we expected. For over-length papers, we observed a continuum of extra content: one or two lines are probably caused by a last-minute-edit mistake; for one extra paragraph, the authors perhaps thought we would be flexible; for a full extra column, page, or figure, the authors did not take the instructions seriously. Formatting issues are the most problematic, because they include cases of intentionally hacking the styles to fit more content, and in general the program committee does not have designated experts who can judge whether the layout and formatting of a document comply with the instructions. Without technical guidance, it is very difficult to distinguish between good and bad cases. If submissions with clear violations are kept, even mild ones, the reviewers will inevitably be distracted by them.

Why were we strict about rejecting papers that violated the formatting instructions? Because we wanted to be fair to the authors who have followed the instructions. How would you feel as an author if you’ve spent precious time trimming your submission and sacrificing useful results and explanations due to space, and then discover later that your paper is being ranked against other papers that did not adhere to the same rules? That would not feel like a fair comparison, right?

Over the last few years, the number of submissions to conferences has increased, while the reviewing periods have remained very tight. The role of program chairs is to ensure a high-quality and smooth review process, and this means we had to focus on the 96% of papers that were correctly formatted. It would have been too time-consuming to keep the remaining 4%; that would have placed an extra burden on the reviewers and area chairs. And this is why we had to exercise our right to reject papers before review.

If you are an author, please read the submission instructions carefully, and in particular always use the official style files for that conference (the ACL style files change in subtle ways for every conference). Instruct your younger co-authors about the importance of this. Even if in the past the program committees were more permissive about certain hacks, the volume of submissions today leaves no room for distractions.

And if certain instructions cause you a problem, you should contact the program chairs well before the submission deadline. Notably, at EMNLP 2016 we did not provide a template for MS Word, but we got many submissions that used the ACL 2013 MS Word template, which had significant differences that affect the amount of content. Whether or not to distribute templates for MS Word, and perhaps other word processors, is a matter of having enough volunteers in the organizing team who can deal with the extra work.

On another front, let us comment on a common pattern that directly challenges the double-blind nature of our review process. Note we do not want to start a debate about double-blind reviewing itself. These days, many people post their submission on arXiv after the submission deadline. Some authors even start posting on Facebook and Twitter about the importance of their newest findings. In most cases there are clear mentions that the paper is under review at a conference. The multiple-submission policy of our conferences, including ACL 2017, says that “Authors must state in the online submission form the name of the workshop or preprint server and title of the non-archival version”, which does not strictly rule out posting it after the submission. And many people do, which by design triggers notifications and email alerts throughout the network, precisely during the weeks when the paper reaches the reviewers and therefore the anonymity of the paper is especially relevant. The current policy says that “Reviewers are free to do what they like with this information”. At this point, we should perhaps recommend to reviewers that, if they really want to have an unbiased opinion of the papers they review, they should not check social media and should unsubscribe from the daily arXiv email updates, just for a few weeks. Alternatively, authors could keep their submitted papers anonymous at all levels, just for a few weeks. Which one makes more sense?

To summarize: “Reject before review” is something nobody wants. It is extra work for program chairs, and it is a disappointing result to the authors. Just adhere to the instructions. Don’t take your chances. Everyone will be happier.

Xavier Carreras and Kevin Duh
EMNLP 2016 Program Chairs

I'm a research scientist interested in Natural Language Processing and Machine Learning. I work at Xerox Research Centre Europe (France).


  1. Program chairs and other conference organizers deserve much gratitude for the work they do for the community, and they of course have the right to reject papers that do not conform to the formatting instructions if that makes their workload a little lighter. However, I think policing for formatting hacks, or even automatically rejecting papers that go 1 or 2 lines over the page limit, is a bit over the top. We want novel, relevant, exciting, thorough, thought-provoking, creative, reproducible work to be presented at ACL, with appropriate credit given to prior work and using an agreed format. All of these criteria are important, and hardly any paper meets all of them. A zero-tolerance policy w.r.t. formatting instructions unfairly singles out one criterion and hurts efforts to encourage more creative work and submissions from neighboring fields.


    1. Hi Willem,

      Thanks for sharing your comment!

      I agree with you that it is important to judge a paper by its contents (with the criteria you mentioned: novelty, relevance, etc.) and not by its form. That is the reviewer’s role.

      The PC’s role is to ensure that the review process is fair and the acceptance/rejection decision is reasonable. Comparing papers of different lengths will likely increase the variance of the review scores, because reviewers have different thresholds on what they find acceptable. We have a very competitive conference, and slight variations may mean the difference between acceptance and rejection. That is why it is important from the PC’s perspective to make sure that all papers going to reviewers follow the same agreed-on format.

      You’re right in that a zero-tolerance policy might prematurely reject submissions with good content. But I am hopeful that these papers will be resubmitted to the next conference, following the instructions. There are many ways to write an excellent paper within the page limit. In the long run, we will have a smoother review process if all submissions abide by the instructions.

      Finally, I should note that PC’s may allow for a little bit of slack in practice. But as an author, one should just assume a zero-tolerance policy and adhere to the instructions. The reason we wrote this blog post is to make sure this point is loud and clear for the future.


  2. Hi Xavier and all,
    thanks for your work as a chair in many conferences, I can only imagine how tedious it can be sometimes.

    I can understand the strict format policy, but in these days when most of our conferences allow an additional page for the camera-ready, and often 2 pages of supplementary material, what’s the need of rejecting potentially interesting papers if they exceed the page limit by a bit? Especially since a lot of papers (coming most of the time from well-established research teams, company or academic) don’t care at all about double-blind reviewing?
    The fact that a somewhat OK paper that has been discussed and advertised on Twitter via an arXiv submission can get through, while a challenging paper that would cross the limit by a small margin will not, is at best questionable.
    Not to mention the other fact that a paper previously published at a workshop would be rejected from xCL even though a) the work was extended and b) this rejection contradicts ACL’s very own policy on double submission. Yet pre-publications on arXiv are allowed, and the burden of disconnecting from social media is put on the reviewers’ shoulders. I think something needs to be fixed, or at least discussed vividly, at the next ACL board meeting.

    Djamé Seddah


    1. Hi Djamé,

      Regarding your question about rejecting potentially interesting papers that exceed the page limit by a bit — it’s basically about the fairness of the review process. Please see my reply to Willem for the rationale in more detail.

      But you make a good point about double-blind reviewing, arXiv, and social media. These impact author anonymity in the review process. I think this is a distinct issue from the issue of adhering to the page limit and style files, but it also affects the fairness of the review process.

      I totally agree with you that this needs to be discussed more. Do we still value double-blind reviewing as much as before? Or do we value more the fast and wide dissemination of our research via pre-publishing and social media? Is there a way to make both work, and play well with other research communities too? I think the current ACL policy need not be the final version, and collectively, through community discussion, we could perhaps make it even better.


      1. Hi Kevin,
        thanks for your answer. I’m still not convinced that those issues (paper form vs. respect for author anonymity) are distinct: according to your standards, if someone writes his few-line conclusion on the n+1 page, he’s rejected. If he forgets to remove his name from the paper, he’s out. But if he publishes a longer version of his paper with his name on arXiv, gets feedback from the crowd (including many potential reviewers and area chairs), and then publishes a somewhat denser/shorter version at ACL, he’ll get in, even though everyone knows who the author is and that the submitted paper is the less clear version.
        I’m sorry, but this is much more important than a few extra lines in a paper (which will anyway get one more page for the final version, so what’s the point); this harms the principle of double-blind peer review and makes publishing look like a popularity contest at the end of the day.
        The trouble with fast and sometimes hectic dissemination via social media is that the notion of what matters, of what is likely to have an impact on the field, fades compared to the buzz this or that paper causes. Seriously, look at the recent thread discussion on Twitter regarding the lack of reproducibility of neural MT models, or the misleading assessment in some papers regarding their neural whatever parsers “without feature engineering” but with a whole unmentioned Java class for OOV treatment; the list goes on. I think the field needs more time to digest this whole wave of work, and it definitely doesn’t need to comply with the agenda imposed by some companies’ PR departments.



  3. Xavier and Kevin, thank you for being public about some of the sausage-making aspects of running a conference. It’s good for people who haven’t done it to get a feeling for some of the things that make it pretty difficult sometimes — in this case, returning colleagues’ papers unreviewed.

    I’d like to add something to your comment that

    > Why were we strict about rejecting papers that violated the formatting instructions? Because we wanted to be fair to the authors who have followed the instructions.

    Certainly being fair to the other authors is super-important. Something else that’s important: being fair to the reviewers.

    When a reviewer gets a paper that in some obvious way is outside of the requirements of the meeting, whatever those may be, it puts them in a bad position, in a number of ways. The most practical one is that it takes up extra reviewing time, and it does so in an environment in which most of the people I know feel over-burdened with reviewing duties, not under-utilized. When you get a paper that’s in some way non-conforming to the conference standards, you basically have two choices: (a) reject the paper out of hand, or (b) take the time to write an email to the conference chairs spelling out exactly what the nature of the problem is. In addition, there’s the time that you put into deciding between (a) and (b), and before you decide between (a) and (b), there’s the issue of deciding whether to follow up on your question about how to handle the situation, versus just ignoring it and leaving it to the publication chair to figure out.

    There are lots of things that could be added to this:

    – The position that the conference chairs get put in when reviewers come to them with this kind of question
    – The position that the publication chair gets put in if the reviewers don’t mention it to anyone
    – The issue of fairness to the other authors that you pointed out in your post
    – The basic fact that, as I mentioned, pretty much everyone is just plain over-taxed with reviewing these days, what with an increase in the number of meetings and an increase in the number of journals

    So, I guess that one could argue with your position on returning non-conforming papers unreviewed–clearly, the other commenters disagree with your decision, and certainly it’s the case that we want to be getting exciting, creative submissions, and ultimately, we’d like to see more work in the field, not less, even if that costs us more reviewing. But, there are also arguments that support your position, and I count quite a few of them.


  4. I generally agree with all the points; they are hard, but some limits have to exist.

    What I don’t understand, however, is the concept of pre-publishing on arXiv:
    – I can publish there whatever I want, without a review;
    – people should cite it because it is then publicly known;
    – blind reviewing does not make sense anymore => a priori rejecting any non-anonymised submissions does not make sense anymore;
    – why (try to) publish the arXiv stuff at a conference at all?


    1. I don’t have a clear opinion about whether having papers available in arXiv during the reviewing period should be permitted or not. Certainly there are good arguments on both sides. But there is something where I do have a clear opinion: I don’t agree with the statement that, if arXiv is allowed, then “blind reviewing does not make sense anymore”.

      I have seen this argument pop up several times, but I think it fails to consider one of the advantages of blind review. Blind review is not only about preventing positive bias when you see a paper from an elite university, it’s also about the opposite: preventing negative bias when you see a paper from someone totally unknown. Being a PhD student from a small group in a little known university, the first time I submitted a paper to an ACL conference I felt quite reassured by knowing that the reviewers wouldn’t know who I was.

      In other words, under an arXiv-permissive policy, authors still have the *right* to be reviewed blindly, even if it’s no longer an obligation, because they can make their identity known indirectly via arXiv+Twitter and the like. I think that right is important. So the dilemma is not a matter of “either we totally forbid dissemination of papers before acceptance in order to have pure blind review (by the way, 100% pure blind review doesn’t exist anyway, because one often has a hint of who the authors may be, especially for well-known authors) or we throw the baby out with the bathwater and dispense with blind review altogether”. I think blind review should be preserved at least as a right for the author (as it is now), and the question is whether it should also be an obligation or not.


  5. Great post, thank you!

    One note regarding rejects without review for the ACL 2017 chairs:

    In my view, it would be really nice to notify the authors of such submissions as soon as possible, rather than at the very end of the normal review process.

    I have recently had the pleasure of eagerly awaiting the reviews for two months, only to discover that my paper didn’t even enter the review process because it had a few extra lines. Thinking back, the worst thing about that experience wasn’t the rejection, it was the time lost waiting, which could’ve been better spent improving the paper and submitting it elsewhere. 🙂

    Thanks again.



  6. Hi Zeljko,

    I hope you’re doing well.

    Obviously, the sooner the better! It’s again a matter of human resources within the committee: there is no designated team that carefully checks all papers for formatting issues in the few days after the deadline. If someone in the community thinks this should be done, that person should volunteer to organize the effort.

    Papers go top down from program chairs to area chairs to reviewers, and then reviews go bottom up to program chairs. My guess is that the violation was found by the reviewers, who probably did their reviews very close to their review deadline. This is why it took so much time.

    Really, the most efficient process is for authors to check their paper at submission time.


