mikojan 6 days ago

Oh my god.. The horror.. Please do not let this be my future..

1
eddythompson80 6 days ago

The horror indeed, but I don't really see a way out of this. I was mainly curious to see how it would affect something like "peer review," though I suspect the incentives there are different, so the processes might only share the word "review" without much bearing on each other.

Regarding code reviews, I can't see a way out, unfortunately. We already have GitHub (and other) agents/features where you write an issue on a repo and kick off an agent to "implement it and send a PR for the repo." As it exists today, every repo has 100x more issues, discussions, and comments than it has PRs. Now imagine if the barrier to opening a PR is basically: open an issue + click a "Have a go at it, GitHub" button. Who has the time or bandwidth to review all of that? That wouldn't make any sense either.

rjakob 6 days ago

Based on my experience, many reviewers are already using AI extensively. I recently ran reviewer feedback from a top CS conference through an AI detector, and two out of three responses were flagged as AI-generated.

In my view, the peer-review process is flawed. Reviewers have little incentive to engage meaningfully: there's no financial compensation, and often no way to even get credit for it. It would be cool to have something like a Google Scholar page for reviewers to showcase their contributions and signal expertise.

AStonesThrow 6 days ago

The only thing worse than an LLM making stuff up and giving fake numbers is an LLM "detector." They are so full of false positives, false negatives, and bogus percentages as to be actively harmful to human trust and academic integrity. And how do you follow up to verify or falsify their results?

rjakob 6 days ago

Fair. Though in this case, it was obvious even without a detector.