ICCV 2025 Area Chair Experience

Bixby Creek Bridge, Monterey Bay, CA. Photo taken after AC duties, Sep 2025

I was invited to serve as an Area Chair (AC) at the International Conference on Computer Vision (ICCV) 2025, which was my first such experience. I oversaw the review process for a batch of 20 papers. After authors spent days preparing their rebuttals, how many of those papers sparked a meaningful discussion among reviewers?

A surprising statistic: out of 20 papers in my stack, there were active post-rebuttal discussions on only two.

This experience gave me a new perspective on the role of ACs and the overall review process. I’m sharing my thoughts to help demystify the process for authors, reviewers, and perhaps new ACs. Previously, I wrote a blogpost about reviewing for computer vision conferences in 2023.

Three Realities of Peer Review

Instead of a chronological report, I want to focus on a few key observations from my experience as an AC.

1. The Post-Rebuttal Discussion is Often an Illusion

My experience (as an AC now, and as a reviewer for the last few years) has been that reviewer engagement follows a bell curve: a majority would participate in the discussion if reminded, some are reluctant or non-responsive, and a select few are proactive, starting and leading discussions.

At ICCV, I had to send many reminders to get the discussion started. Given how much time and effort both authors and ACs put into this phase, I am increasingly of the opinion that the rebuttal and post-rebuttal discussion are not as useful as we hope when meaningful engagement is so rare. Notable researchers, like Dima Damen, have raised similar concerns.

2. The “Soft Cap” on Acceptance Rates Exists

For computer vision conferences like ICCV and CVPR, the standard process is to form a triplet of ACs (two ACs and a senior AC) who work together to finalize decisions. My AC triplet meeting was held over a weekend Zoom call with ACs in different time zones, and the process, managed by the senior AC, was smooth and efficient.

This was the first time I encountered a loose guideline on acceptance percentage. While there were no strict quotas, the acceptance rate was tracked and used as a factor in our discussions. The scrutiny was asymmetrical: a batch with an acceptance rate above 30% might be examined carefully, but batches with much lower acceptance rates were not reviewed with the same rigor. This asymmetry might create an implicit pressure to find reasons to reject borderline papers.

3. Bidding for Reviewers is a Game of Strategy

Assigning reviewers is a major task that takes considerable effort. Most CV/ML conferences adopt a bidding system. The system at ICCV 2025 was interesting: for each paper, an AC can bid for any number of reviewers and assign each bid a score. One helpful feature was that ACs could see how many bids a reviewer had already received for other papers, which gave a signal about the likelihood of actually getting that reviewer assigned.

I saw and bid for several well-known researchers in the reviewer pool; unfortunately, none of my papers were assigned these reviewers, perhaps because there were so many bids on them. On the other hand, I found some excellent reviewers who had very few bids. This reveals a critical strategy for ACs: don't just bid on the most sought-after names. It's vital to diversify and find suitable reviewers (based on their publications) who have fewer bids. They are often just as good, if not better.
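To make this concrete, here is a minimal sketch of that bidding heuristic: rank candidate reviewers by topical fit, discounted by how many bids they have already attracted. The data, the scoring weights, and the `Candidate`/`bid_priority` names are all made up for illustration; this is not the interface or logic of any actual conference platform.

```python
# Hypothetical sketch: rank candidate reviewers for one paper by topical fit,
# discounted by how many bids they have already received from other ACs.
# All values and weights below are illustrative, not from ICCV's system.

from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    relevance: float    # 0-1, e.g. overlap between their publications and the paper's topic
    existing_bids: int  # how many bids this reviewer has already received from other ACs


def bid_priority(c: Candidate, contention_penalty: float = 0.05) -> float:
    """Higher is better: topical fit minus a penalty for being heavily contested."""
    return c.relevance - contention_penalty * c.existing_bids


candidates = [
    Candidate("well-known expert", relevance=0.95, existing_bids=12),
    Candidate("strong but lightly-bid reviewer", relevance=0.85, existing_bids=1),
    Candidate("marginal match", relevance=0.40, existing_bids=0),
]

# The well-matched but lightly-contested reviewer ends up on top.
for c in sorted(candidates, key=bid_priority, reverse=True):
    print(f"{c.name}: priority={bid_priority(c):.2f}")
```

The exact penalty does not matter much; the point is simply that contention should count against a bid, which is what pushed me toward the excellent but less sought-after reviewers.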

My Takeaways for Navigating the System

Based on these observations, here are my suggestions.

For Authors: Aim for a Champion, Don’t Rely on the Rebuttal

First, it is important to understand the nature of reviewing: most reviewers will do the basic work, while only a few will dig deeper and engage in discussions. From my experience, a typical accepted paper has at least one excited reviewer. If all reviews are lukewarm and hover around the borderline, the weaknesses pointed out across all reviews usually add up to a rejection.

Therefore, aim to excite at least one reader. While it is hard to pinpoint exactly how to positively influence reviewers, truly novel, unconventional, and bold work usually inspires people.

And do not rely on the rebuttal. Given very limited discussion in most cases, it seems the community is reaching an understanding that the rebuttal is not worth the time and effort if initial reviews are bad. It is unlikely to turn around negative reviews unless there is already one detailed, positive review to build upon. As evidence of this trend, six of the 20 papers in my batch were withdrawn after the review stage!

For Reviewers: Your Initial Review Carries Immense Weight

Because discussion is so rare, your initial review is often the final word. Due diligence and being upfront are important for long-term trust. Since ACs see the names of reviewers, it’s natural that opinions will be formed based on the quality of the reviews and discussion (or lack thereof).

Personally, I am not a fan of the template response, “the rebuttal doesn’t address my concerns; I will keep my score,” which is used far too often. If discussion isn’t going to happen, the initial review must be thorough enough to stand on its own.

A Concluding Thought

The peer-review process is managed by a community of volunteers under immense pressure. The issues I observed (the lack of discussion and the challenges of reviewer assignment) are symptoms of a system under stress. Still, the commitment from this community is what makes the process work, even at this massive scale. My goal in sharing these thoughts is to provide a more transparent view into the process, hoping it helps us all participate more effectively.