Thank you for becoming a reviewer for NIPS! Your help is a very valuable public service: the technical content of the program is largely determined through the efforts and comments of the reviewers. Below is a set of instructions (both editorial and mechanical) that will help you perform your reviews efficiently and effectively. We will also send a separate email with the detailed reviewing schedule.
All reviews must be entered electronically into the ConfMaster System for NIPS*2007. If you have trouble accessing this site, please contact the NIPS Workflow Manager. Reviewers may visit the site multiple times and revise their reviews as often as necessary before the reviewing deadline. When you were invited to become a reviewer, ConfMaster sent you an automated email with your username and password. Feel free to change these. If you have forgotten your username and password, you can visit ConfMaster and select "Password forgotten?" to have your password reset and mailed to your email address of record.
During the review period, you will probably get many emails sent from ConfMaster (e.g., those telling you your paper assignment). Please make sure emails from ConfMaster are not snagged by your spam filter!
By viewing the papers, you agree that the NIPS review process is confidential. Specifically, you agree not to use ideas and results from submitted papers in your work, research or grant proposals, unless and until that material appears in other publicly available formats, such as a technical report or as a published work; nor to distribute submitted papers nor the ideas in the submitted papers to anyone unless approved by the Program Chair.
The first step in the review process is for the reviewers to enter conflicts of interest with other authors. Due to the use of double-blind reviewing (see below), you will not get to see the authors for each paper, and therefore may not be able to determine (at least based on the title & abstract) the papers with which you have a conflict. Therefore, we have implemented a system that allows you to mark all authors with whom you have a conflict. To avoid having each reviewer go over a list of several thousand authors, each reviewer will see all and only authors who submitted a paper with keywords that overlap the reviewer's keywords. Thus, for example, if you marked "Bayesian RL" as one of your keywords, you will see any author who submitted a paper with "Bayesian RL" as a keyword. Make sure to go over this list carefully and mark any conflicts.
You should mark a conflict with: anyone who is or ever was your student or mentor, and any current or recent colleague or close collaborator. In general, if in doubt, it is probably better to mark a conflict, in order to avoid the appearance of impropriety. Your own username should be automatically marked as a conflict, but sometimes the same person may have more than one account, in which case you should definitely mark your other accounts as conflicts as well. You do not need to mark all authors with whom you do not have a conflict: if you do not mark a conflict with an author, then we will assume by default that you do not have one.
The next step in the review process is for reviewers to bid on papers. The goal of the bidding process is to give the program and area chairs information about your reviewing preferences. Given the scope of the conference, getting accurate information about your reviewing expertise is critical to getting a good assignment, and a good assignment of papers to reviewers is the most important factor in obtaining high-quality reviews. Therefore, please make sure to enter your preferences into the system!
There are two factors that determine your preferences for papers. The first is based on the keywords that you should already have used to describe yourself within ConfMaster. If you have not entered this list, please do so right away. This list can be modified by clicking on "Edit user data." These keywords are used in two ways: they are used to determine the set of papers that you will get to bid on in the bidding stage; they also give a certain priority for assigning you papers whose keywords match yours.
The second factor is your own bids on specific papers. To bid on papers, please select "Apply for papers." You'll see a list of keywords on top; you should make sure to check "Use your stored keywords instead", as these are the papers that are likely to be assigned to you, and so it's important that you express your preferences for those papers. Now, click on the "Search" button. You'll see a list of papers to bid on. If you do not bid on a paper, by default, we will use a keyword-similarity score to determine your preference for that paper.
You can go through and look at individual paper details by clicking on the hourglass on the right-hand side of the table. These details include the title, keywords, and abstract of the paper. During the bidding phase, you will not be able to see the actual paper; the authors, of course, are never revealed (see Double blind reviewing, below). The possible values of the bids are ++ (very interested), + (interested), 0 (neutral), - (uninterested), and lightning bolt (have a conflict). Select the bids you want, and then click the "Submit" button at the bottom of the page to save your bids. By default, you'll see 20 papers per page; remember to go to the next page to bid on more. The actual assignment of papers to reviewers is performed by the program chairs and area chairs based on these as well as other factors and constraints. So, please don't expect to get just the papers you bid on. However, the more "strong" bids you provide, the more likely you are to get one of them rather than a collection of random papers.
Note that you will have marked author conflicts only for authors of papers that use your stored keywords. If you choose to bid on other papers, be sure to look more carefully for conflicts in that set (see below). To avoid revealing author information, papers whose authors you have a conflict with will still appear in the bidding stage, but will not be assigned to you. If you find additional papers with which you may have a conflict (e.g., a submission very similar to one of yours), or submissions whose authors you recognize and have a conflict with, please mark them as conflicts as well.
If you want to globally apply a bid value to all papers, click on "all" in the "view 20 | 50 | all per page" menu. Then, in the header of the table, click on the bid you wish to apply to all of your papers. Remember to click the "Submit" button at the bottom of the page to save your bids.
You will get an announcement from ConfMaster when the bidding web page opens to accept bids; this will occur no later than Monday, June 11. Bidding will close by Friday, June 15.
This year, we continue to use double blind reviewing. Of course, the authors do not know the identity of the reviewers. (Needless to say, this also holds for authors who are on the program committee.) In addition, the reviewers do not know the identity of the authors. (The area chairs do know the author identities, to avoid accidental conflicts of interest and to determine novelty.)
We instituted double blind reviewing to help the reviewers and the authors. NIPS trusts in the judgment of its reviewers: the reviews are the dominant factor in the accept/reject decision. We realize that reviewers are human and may have unconscious positive or negative biases. To help them make a clearer decision on a paper, we conceal the authorship of the paper.
Of course, double blind reviewing is not perfect---by searching the Internet, a reviewer may discover (or think he/she may have discovered) the identity of an author. We encourage you not to attempt to discover the identities of the authors. However, if you have good reason to suspect that this work has been published in the past, you may search the Internet; but we ask that you first read the paper completely before doing any such search. Also, based on the experience of other double-blinded conferences, we caution reviewers that the assumed authors may not be correct---multiple independent invention is common, and different groups build on each others' work.
If you believe that you have discovered the identity of the author, we ask that you explain how and why in the "Confidential comments to PC members" in your review (see below). This will help the (non-blind) program committee determine the novelty of the work. This will also help us determine how effective double blind reviewing is.
You may have noticed some authors have submitted archive files (tar.gz) that contain multiple files. Submission of additional material is allowed under the new NIPS rules: authors can submit up to 10 MB of material, which can contain proofs, audio, images, video, or even data or source code.
Your responsibility as a reviewer is to read and judge the main paper, which is the only thing published in the proceedings. It is purely optional for you to read or view the supplementary material. However, keeping in mind the space constraints of a NIPS paper, you may want to consider looking at the supplementary material before complaining that the authors did not provide a fully rigorous proof of their theorem, or that they demonstrated qualitative results on only a small subset of instances.
If you are reviewing such a submission, the main paper's filename starts with "paper_", and all other supplementary files have a different prefix. On UNIX computers, you can use gunzip followed by tar -xvf to unpack the archives. On Windows computers, WinZip should be able to unpack tar.gz files.
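The two-step UNIX procedure can be sketched as follows. This is an illustrative example only: the filenames (submission.tar.gz, paper_demo.txt) are placeholders, not actual NIPS submission files, so the sketch creates a dummy archive first.

```shell
# Illustrative only: build a placeholder archive, then unpack it the way
# a reviewer would unpack a downloaded submission.
echo "dummy paper" > paper_demo.txt
tar -czf submission.tar.gz paper_demo.txt   # stand-in for the downloaded archive
rm paper_demo.txt

gunzip submission.tar.gz    # decompresses to submission.tar
tar -xvf submission.tar     # extracts paper_demo.txt (and any supplementary files)

# Most modern tar implementations can do both steps at once:
# tar -xzvf submission.tar.gz
```

After extraction, the file whose name begins with "paper_" is the main paper; everything else is supplementary material.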
By June 25, the area chairs should have assigned you a number of papers to review. Again, you will get an email announcement from ConfMaster. You can access your paper list by clicking on "View assigned papers."
You should attempt to download, view, and print the assigned papers promptly even if you do not intend to review them right away. If problems arise with a particular paper, please contact the workflow manager immediately. Email messages to the workflow manager should use informative subject headings, such as "NIPS*2007 Paper 333 Printing Problem."
If, when you examine your papers, you notice papers that do not abide by the NIPS submission guidelines, please inform the program chairs immediately. Such papers include papers: that are over the 8 page limit; that use a format that differs significantly from the NIPS format; or that do not abide by the blind reviewing guidelines (as specified in the Instructions for Authors), and thereby reveal the authors' identities to the reviewers. As announced this year, these papers will be rejected without review. In order that this policy be applied uniformly and fairly, please do not review these papers without consulting with the program chairs.
Initial reviews should be completed and entered into ConfMaster by July 13. Please complete your reviews by then. The high quality of NIPS depends on having a complete set of reviews for each paper. Reviewer scores and comments provide the primary input used by the program committee to judge the quality of submitted papers. Far more than any other factor, reviewers determine the scientific content of the conference.
The reviews are entered into ConfMaster by clicking on the "R" circle for a paper in your list of assigned papers. You'll be asked to give a score and confidence for each paper (see Quantitative Evaluation section, below). Please support your score in the "Comments to author(s)" text box (see below). After you are finished with your review, remember to click on the "Submit" button at the bottom of the page: otherwise, your work will be lost. You can come back and edit your review until the review deadline.
Reviewer comments have two purposes: to provide feedback to authors, and to provide input to the program committee. Reviewer comments to authors whose papers are rejected will help them understand how NIPS papers are rated, and how they might improve their submissions in the future. Reviewer comments to authors whose papers are accepted will help them improve the final conference proceedings. Reviewer comments to the program committee are the basis on which accept/reject decisions are made. Your comments are seen by the area chair and the other two reviewers. Only the area chair knows your identity; the other reviewers and the authors do not. So feel free to express your honest (although polite!) opinion.
Overview
You'll be asked to give a score and confidence for each paper (see Quantitative evaluation section, below). Please support your score in the "Comments to author(s)" text box.
Your written review should begin by summarizing the main ideas of each paper and relating these ideas to previous work at NIPS and elsewhere. While this part of the review may not provide much new information to authors, it is invaluable to members of the program committee. You should then discuss the strengths and weaknesses of each paper, addressing the criteria described in the Qualitative evaluation section, below, and in the NIPS paper evaluation criteria document. As explained in this document, we are particularly looking to improve the balance between theory and applications. Please read the review criteria and use those to guide your decisions. It is tempting to include only weaknesses in your review. However, it is important to also mention and take into account the strengths, as an informed decision needs to take into account both. It is particularly useful to include a list of arguments pro and con acceptance. If you believe that a paper is out of scope for NIPS, please corroborate this judgment by looking at the list of topics in the call for papers. Finally, please fill in the "Summary of review"---this should be a short 1-2 sentence summary of your review.
Importantly, reviewer comments should be detailed, specific and polite, avoiding vague complaints and providing appropriate citations if authors are unaware of relevant work. As you write a review, think of the types of reviews that you like to get for your papers. Even negative reviews can be polite and constructive!
If you have information that you wish only the program committee to see, you may fill in the "Confidential comments to PC members" box. The confidential comments to the program committee have many uses. Reviewers can use this section to make recommendations for oral versus poster presentations, to make explicit comparisons of the paper under review to other submitted papers, and to disclose conflicts of interest that may have emerged in the days before the reviewing deadline. You can also use this section to provide criticisms that are more bluntly stated.
Quantitative Evaluation
Reviewers give a score of between 1 and 10 for each paper. The program committee will interpret the numerical score in the following way:
10: Top 5% of accepted NIPS papers, a seminal paper for the ages.
I will consider not reviewing for NIPS again if this is rejected.
9: Top 15% of accepted NIPS papers, an excellent paper, a strong accept.
I will fight for acceptance.
8: Top 50% of accepted NIPS papers, a very good paper, a clear accept.
I vote and argue for acceptance.
7: Good paper, accept.
I vote for acceptance, although I would not be upset if it were rejected.
6: Marginally above the acceptance threshold.
I tend to vote for accepting it, but leaving it out of the program would be no great loss.
5: Marginally below the acceptance threshold.
I tend to vote for rejecting it, but having it in the program would not be that bad.
4: An OK paper, but not good enough. A rejection.
I vote for rejecting it, although I would not be upset if it were accepted.
3: A clear rejection.
I vote and argue for rejection.
2: A strong rejection. I'm surprised it was submitted to this conference.
I will fight for rejection.
1: Trivial or wrong or known. I'm surprised anybody wrote such a paper.
I will consider not reviewing for NIPS again if this is accepted.
Reviewers should NOT assume that they have received an unbiased sample of papers, nor should they adjust their scores to achieve an artificial balance of high and low scores. Scores should reflect absolute judgments of the contributions made by each paper.
Confidence Scores
Reviewers also give a confidence score between 1 and 10 for each paper. The program committee will interpret the numerical score in the following way:
9-10: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature.
7-8: The reviewer is confident but not absolutely certain that the evaluation is correct. It is unlikely but conceivable that the reviewer did not understand certain parts of the paper, or that the reviewer was unfamiliar with a piece of relevant literature.
5-6: The reviewer is fairly confident that the evaluation is correct. It is possible that the reviewer did not understand certain parts of the paper, or that the reviewer was unfamiliar with a piece of relevant literature. Mathematics and other details were not carefully checked.
3-4: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper.
1-2: The reviewer's evaluation is an educated guess. Either the paper is not in the reviewer's area, or it was extremely difficult to understand.
Qualitative Evaluation
All NIPS papers should be good scientific papers, regardless of their specific area. We judge whether a paper is good using 4 criteria; a reviewer should comment on all of these, if possible:
Quality: Is the paper technically sound? Are claims well-supported by theoretical analysis or experimental results? Is this a complete piece of work, or merely a position paper? Are the authors careful (and honest) about evaluating both the strengths and weaknesses of the work?
Clarity: Is the paper clearly written? Is it well-organized? (If not, feel free to make suggestions to improve the manuscript.) Does it adequately inform the reader? (A superbly written paper provides enough information for the expert reader to reproduce its results.)
Originality: Are the problems or approaches new? Is this a novel combination of familiar techniques? Is it clear how this work differs from previous contributions? Is related work adequately referenced? We recommend that you check the proceedings of recent NIPS conferences to make sure that each paper is significantly different from papers in previous proceedings. Abstracts and links to many of the previous NIPS papers are available from http://books.nips.cc
Significance: Are the results important? Are other people (practitioners or researchers) likely to use these ideas or build on them? Does the paper address a difficult problem in a better way than previous research? Does it advance the state of the art in a demonstrable way? Does it provide unique data, unique conclusions on existing data, or a unique theoretical or pragmatic approach?
From July 18 through July 23, the authors of the paper will have a chance to submit feedback on their reviews. This is an opportunity to correct possible misunderstandings about the contents of the paper, or about previous work. Authors may point out aspects of the paper that you missed, or disagree with your review.
Last year, many authors felt that their comments were ignored in the final decision. While it is perfectly legitimate that many author comments will not change the final evaluation of a paper, it is important to convey to the authors that their comments were taken into account. Therefore, please read each rebuttal carefully and keep an open mind: Do the authors' comments make you change your mind about your review? Have you overlooked something? If you disagree with the authors' comments, please update your review to explain why (even if briefly). To encourage consideration of author responses, we have introduced a feature into ConfMaster with which each reviewer can assert that they have read an author response and adjusted their review accordingly.
From July 23 through August 3, the area chairs will lead a discussion via the website and try to come to a consensus amongst the reviewers. The discussion will involve both marginal papers, trying to reach a decision on which side of the bar they should fall, and controversial papers, where the reviewers disagree. Many papers fall into these categories, and therefore this phase is a very important one. While engaging in the discussion, recall that different people have somewhat different points of view, and may come to different conclusions about a paper: do the other reviewers' comments make sense? Would you change your mind given what the others are saying? Again, it is good to keep calm and stay open.
Reviewer consensus is valuable, but it is not mandatory. If the reviewers do come to a consensus, the program committee takes it very seriously: only rarely would a unanimous recommendation be overruled. However, we do not require conformity: if you think the other reviewers are not correct, you are not required to change your mind.
The program committee will meet electronically during the last two weeks in August, and the final results will be announced on or before September 7.
Where possible, reviewers should identify submissions that are very similar (or identical) to versions that have been previously published, or that have been submitted in parallel to other conferences. Such submissions are not appropriate for NIPS. Exceptions to this rule are the following:
(1) Shorter write-ups of longer papers that have been recently (i.e. in the current calendar year) submitted to journals.
(2) Papers whose content is currently under review elsewhere or has very recently been published, but only in venues that are particularly inaccessible to the NIPS audience. In these cases, the NIPS submission should involve a substantial revision of the original paper, in a way that specifically highlights and expands on the relevance of the work to the NIPS community. The differential contribution between the original paper and the NIPS submission will be a factor in the decision. Authors of such papers must anonymously cite the earlier work, and include an anonymized copy in the supplementary materials.
Examples of conferences that are too close to NIPS and where double submission will not be considered: ICML, UAI, COLT, ICCV, ECCV, CVPR, ECML, AI&STATS, KDD, ICANN, IJCNN, WCNN, SODA, FOCS, STOC, ACL, EMNLP.