Dear area chairs:
[TL;DR: We describe how the first part of the reviewer workflow will work (especially with respect to the Toronto system), along with other fine points.]
Thank you so much for accepting the charge of serving as an area chair! We both feel that area chairs are the key personnel in ensuring the scientific quality, rigour and insight of the technical program. At this point, if you log into http://www.softconf.com/acl2017/ with your START V2 login, you should see that you have track chair privileges (Min is coordinating the softconf configuration for the event, so if you see any problems with your account's configuration, please notify him). Also, as we have written before, we would like to make all correspondence open to the public (where not confidential). We encourage all of you to comment on the policies we are considering, either here on the blog or on Facebook, to support our endeavour to enhance transparency.
Also, to clarify: area chairs are allowed to submit to the conference, and to their own area. It would be unfair otherwise, as many of you have students or junior colleagues whom you are mentoring who may be submitting to the area. As at previous ACL events, any conflicts of interest (COIs) for a submission in which one or more area chairs are involved must be declared, so that another (non-COI) chair within the area can be assigned to manage the submission. In the case that all area chairs have a COI, the paper will be routed to a different area.
Now, onto the heart of this post: the reviewing process. The dates mentioned here are also on the AC calendar that we posted in the previous post: https://calendar.google.com/calendar/ical/arq0ig9b7dvhvpnv1n93bbluv0%40group.calendar.google.com/public/basic.ics
First, we plan to use the reviewer lists from NAACL 2016 (co-chaired by Ani Nenkova and Owen Rambow) and ACL 2016 (co-chaired by Noah Smith and Katrin Erk) as a starting point. Your job will be to add reviewers to this list, especially from your particular communities and from communities not covered by the other chairs in your area.
Once this initial list is complete by the proposed deadline of 5 January (yes, we know that falls in the middle of the holiday season), Regina and Min will send out the first batch of invitations centrally through the START system to all proposed reviewers [you do not have to issue the invitations yourselves — the invitation email will be sent out in batch by us through START's interface]. As reviewers accept and decline, you will need to revise the reviewer list and suggest new reviewers, so that we can comfortably cover the workload for all expected submissions. We hope to recruit enough reviewers that each one needs to review at most four papers — as many have already pointed out, this will be difficult given that the review period has been substantially reduced to two weeks.
To give you an idea of how many submissions to expect in your area, we've pulled statistics from previous ACLs, given in the table at the end of this post. Note that the set of areas differs slightly from year to year, and that because we have a joint deadline, we expect slightly fewer submissions overall (projected below as 90% of past submissions — for both long and short papers).
Reviewers who accept the invitation will be asked to use the Toronto paper matching system (described at http://papermatching.cs.toronto.edu/, but currently down due to significant hardware failures; update: now working, but at a new address: http://torontopapermatching.org/webapp/profileBrowser/login/) to build a registered reviewer profile of their expertise by uploading PDF versions of their past publications. The Toronto system supports both bulk upload (by providing a webpage from which the PDF files can be retrieved) and individual uploads of PDFs. Once created, a profile can be edited to exclude past publications in which the reviewer no longer has an interest. This registered reviewer profile persists beyond the scope of ACL 2017, and can be adopted by other ACL events or other conferences (for example, it is already integrated into the Microsoft Conference Management System) to enhance paper-reviewer matching.
We encourage all of you to try to create your own profile, if you do not have one already. Familiarising yourself with the system may help you troubleshoot the process for reviewers in your area who are having difficulty creating profiles. It takes just a few minutes for people who already have some form of a webpage listing their publications.
We will still be calling for reviewer-initiated bids on papers to assist you in making the paper-reviewer assignments (9 to 12 February, just after the paper submission deadline). Assuming that it is available for use by then, the Toronto system will generate a normalized score (between 0 and 1) for each prospective paper-reviewer pairing. We are working with softconf (Rich Gerber) to integrate the output of the Toronto system for area chair use. Most likely, area chairs will see the calculated paper-reviewer matching scores, in descending order, as part of the information for each paper. Unfortunately, due to the tight schedule for integrating the system, we do not plan to make the personalised matching scores available to reviewers during the bidding process.
Let us be clear: the Toronto system, when used, only supplies assignment recommendations, to provide another source of evidence to assist area chairs in assigning papers. The final assignment of a paper to a reviewer rests in your hands.
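To make the role of these scores concrete, here is a toy sketch of profile-based reviewer ranking. This is purely illustrative: TPMS's actual model is more sophisticated than the bag-of-words cosine similarity used below, and the reviewer names and texts are invented, but the normalized 0-to-1 score plays the same role as the one area chairs would see.

```python
# Toy sketch of ranking candidate reviewers for one paper by a
# normalized matching score in [0, 1]. NOT the real TPMS algorithm;
# a simple bag-of-words cosine similarity stands in for it here.
from collections import Counter
from math import sqrt


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count profiles, in [0, 1]."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def rank_reviewers(paper_text: str, profiles: dict) -> list:
    """Return (reviewer, score) pairs in descending score order,
    roughly what an area chair would see for one paper."""
    paper = Counter(paper_text.lower().split())
    scored = [(name, cosine(paper, Counter(text.lower().split())))
              for name, text in profiles.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


# Hypothetical reviewer profiles (in reality, built from uploaded PDFs):
profiles = {
    "reviewer_parsing": "dependency parsing treebank syntax grammar",
    "reviewer_mt": "machine translation decoder alignment bleu translation",
}
ranking = rank_reviewers("neural machine translation with attention", profiles)
# ranking[0] is the MT reviewer, with the highest matching score
```

The key point the sketch illustrates: the system only produces a ranked list of suggestions per paper; the area chair still makes the final assignment by hand.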
The review period is two weeks (13 to 27 February). In a conference of this magnitude, it is impossible to guarantee that all reviews will be in on time. You will have to work closely with late reviewers so that they complete their reviews as soon as possible, allowing the author response period to start on time. In the intervening one-week period (6 to 13 March), you will need to identify papers that merit discussion among the reviewers and among yourselves. We encourage you to come up with a preliminary classification of papers into sure reject, sure accept and discussion needed (perhaps the bulk of papers) even before authors respond to the reviews. The three-day author response period begins afterwards (13 to 15 March). Note that authors have a text box to communicate directly with you as area chairs, in case they feel their work has been misinterpreted by the reviewers. Please do address such comments where a meta-review is needed, being aware of the sensitivity required to craft an appropriate response.
You will have approximately one week afterwards to generate your final rankings and accept/reject recommendations, along with presentation format (poster or oral). Note that we are not going to make meta-reviews a required duty for all papers, as we expect dialogue among the ACs (and with us) will help to resolve borderline cases. We will then generate the final programme for the conference in consultation with you, to be disseminated on 30 March.
Please note that we are still recruiting area chairs, as we are inviting a second round to replace colleagues who have declined the invitation. Once the set of area chairs is finalised, we will publish the final statistics on the open call for area chairs, inclusive of the nominations. We know some members of the community are interested in how our open call fared.
Appendix: Submission Statistics (with approximate projections)
(culled from ACL Q3 reports and the ACL Wiki)
Please note that the statistics for the upcoming submissions are approximate and that we have not finished recruiting area chairs — the list simply reflects the state of recruiting at the moment. We have consolidated certain areas to reflect our opinion that broader areas lessen the difficulty of area selection; more about these changes below. As always, we welcome your comments, especially critical and constructive ones.
For the table below, there are two important points in our estimates. For the projected number of submissions, we used the maximum of the 2016 and 2014 submissions and then applied a 0.9 multiplier (assuming that the joint deadline cuts down on the total number of submissions from the previous conference in which the deadlines were staggered). For the projected number of reviewers, we hope to recruit enough reviewers to give each reviewer a load of 3-4 papers (hence the multiplier for each reviewer was 3.5).
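The two estimates above can be sketched as a short calculation. One caveat: the rounding directions (down for submissions, up for reviewers) are our inference from the published numbers in the table, not stated explicitly in the post.

```python
# Sketch of the projection arithmetic described above.
# Assumptions: submissions round down, reviewer counts round up,
# and each paper receives three reviews.
from math import ceil, floor


def projected_submissions(subs_2016: int, subs_2014: int) -> int:
    """Projected 2017 submissions: 90% of the larger of the two
    previous years' counts (joint deadline cuts submissions)."""
    return floor(0.9 * max(subs_2016, subs_2014))


def projected_reviewers(submissions: int) -> int:
    """Reviewers needed for a load of 3-4 papers each: three reviews
    per paper divided by the 3.5 papers-per-reviewer multiplier."""
    return ceil(3 * submissions / 3.5)


# Checking against the Machine Translation row of the table:
mt_subs = projected_submissions(94, 148)   # 90% of 148 -> 133
mt_revs = projected_reviewers(mt_subs)     # 3 * 133 / 3.5 -> 114
```

Plugging in the Discourse Pragmatics row (60 and 42 historical submissions) gives 54 projected submissions and 47 reviewers, matching the table.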
(Last updated: 4 Jan)
ACL 2017 Areas | Projected # of reviewers | Current load per chair | Current # Area Chairs | 2017 Projected Submissions | Historical 2016 Submissions (in terms of 2017 areas) | Historical 2014 Submissions (in terms of 2017 areas) |
Biomedical [1] | 9 | 5.0 | 2 | 10 | | |
Cognitive Modelling and Psycholinguistics | 19 | 21.0 | 2 | 22 | 23 | 25 |
Dialogue Interactive Systems | 22 | 8.3 | 3 | 25 | 28 | 18 |
Discourse Pragmatics | 47 | 18.0 | 3 | 54 | 60 | 42 |
IE, QA, Text Mining and Applications [2] | 229 | 29.7 | 9 | 267 | 272 | 297 [4] |
Machine Learning | 53 | 12.2 | 5 | 61 | 68 | 54 |
Machine Translation | 114 | 26.6 | 5 | 133 | 94 | 148 |
Multidisciplinary and Others | 47 | 27.0 | 2 | 54 | 61 | |
Multilinguality | 23 | 13.0 | 2 | 26 | 34 | 43 |
Phonology, Morphology, and Word Segmentation | 25 | 14.5 | 2 | 29 | 33 | 28 |
Resources and Evaluation | 50 | 29.0 | 2 | 58 | 65 | 59 |
Semantics | 139 | 27.0 | 6 | 162 | 180 | 139 |
Sentiment Analysis and Opinion Mining | 81 | 31.3 | 3 | 94 | 105 | 58 [4] |
Social Media | 54 | 20.7 | 3 | 62 | 69 | |
Speech | 15 | 8.5 | 2 | 17 | 7 | 19 |
Summarization and Generation [3] | 60 | 23.0 | 3 | 69 | 77 | 50 |
Tagging Chunking Syntax and Parsing | 64 | 18.5 | 4 | 74 | 81 | 83 |
Vision, Robotics, and Grounding | 16 | 9.0 | 2 | 18 | 20 | |
Area Footnotes:
1 – New for this year.
2 – Combines the previous IE, QA, IR, NLP Applications and Document Analysis areas.
3 – Combines the previous Summarization and Generation areas.
4 – Approximate; areas don't map 1-to-1.
You might want to send an email to encourage ACs to subscribe to this blog, using the button at the upper right of this page.
However, I think they will only be notified by email of new *posts*. To get email notification of new *comments* such as this one, one must post a comment and check off “notify me of new comments via email.”
Regina tells me that each reviewer will be assigned to a single area, as is traditional for ACL.
TPMS (the Toronto system) will then be run once per area to suggest an assignment of reviewers within that area.
So TPMS will not be run in its usual mode for the conference as a whole.
Yes, that’s right. We are using the TPMS as a suggestion system, which will output a normalized matching score that the area chairs can use as input to the (manual) assignment of papers to reviewers. There’ll still be the human touch for each and every paper assignment. Hope that helps!
Thank you for this transparency effort, I think this is an important step and I hope it will be maintained and developed in the future.
A question about the multidisciplinary area: it is listed here, but not in the CFP on the ACL website… Does that mean it is the MISC category for the conference?
Which criteria should we apply (or are usually applied) to decide whether an article really belongs there?
Happy new year!
Great question, Karën! The Multidisciplinary area is new (indeed a renaming of "Others", to be more in line with our vision of cross- and multi-disciplinary work). Since you are one of the two area chairs for this area, we welcome your input in defining what it means. We specifically chose you (along with Michael) to helm this area because of the cross-disciplinary nature of some of your work — we hope you'll use this edge both to rally participation by prospective authors and to recruit reviewers!
We’ll be issuing a new CFP in line with the new areas soon!
Thanks.
As for the reviewers, where is the preliminary list (if I want to recruit, I have to know how many people and if they already are listed or not)?
Great question! We will disseminate it to you as a separate email attachment. The list consolidates over 1,200 members of the community who reviewed for either ACL or NAACL 2016.
I will pull your suggestions from your individual "Manage List" listings in your track-based committee scratchpads later today, before sending the attachment to you.
Also, the Toronto paper matching system is still down and time flies… it will be difficult for us to test it before the reviewers do. Any news on that front?
Unfortunately, no. We will let everyone know when the system is stable. We will have to fall back to standard ACL paper bidding if TPMS does not come online with sufficient lead time.
Nice blog! It might be useful to extract the important dates from the text and highlight them for area chairs — either at the top of the blog post or in an email.
Great idea. We’ve added it to the outgoing mail batch!