Discussion about this post

Michael:

These days, publishing your first NeurIPS/ICLR/ICML paper as a PhD student is like getting your SAG card or something: it makes you eligible for big tech internships. That's probably worth a couple hundred thousand dollars in expectation (a low estimate).

Those incentives are just too strong; it's inevitable that things will become strange and distorted. Adding more and more bureaucratic process won't actually help, except perhaps by increasing desk rejections and reducing submissions on the margin.

I don't really see a way to stop it. It's public record who publishes at NeurIPS, and there's no way to stop third parties from using that information to make hiring decisions. But the end result is that we've all been conscripted as first-pass recruiters for Google and Meta.

Seth:

I'm going to steelman the checklist. If LLM outputs are variable by nature, isn't it all the more important to measure and describe that variability? My impression is that many people in CS are not used to thinking about variability and uncertainty quantification. In that case, a checklist item reminding people about uncertainty quantification seems reasonable.

Of course, an appropriate checklist item in this case is, "hey, did you remember to think about outcome variability and/or uncertainty quantification?"
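To make the point concrete, here is a minimal sketch of the kind of uncertainty quantification the checklist item would ask for: reporting a bootstrap confidence interval over repeated eval runs instead of a single accuracy number. The per-run accuracies are made-up numbers for illustration, not from any real benchmark.

```python
# Hypothetical example: quantifying run-to-run variability of an LLM eval.
# The accuracies below are invented for illustration only.
import random
import statistics

random.seed(0)

# Accuracy from 5 independent runs of the same eval (hypothetical data).
run_accuracies = [0.71, 0.68, 0.74, 0.70, 0.66]

mean_acc = statistics.mean(run_accuracies)
std_acc = statistics.stdev(run_accuracies)

# Bootstrap a 95% confidence interval for the mean accuracy:
# resample the runs with replacement many times and take percentiles.
boot_means = sorted(
    statistics.mean(random.choices(run_accuracies, k=len(run_accuracies)))
    for _ in range(10_000)
)
ci_low, ci_high = boot_means[249], boot_means[9749]  # 2.5th / 97.5th percentiles

print(f"mean={mean_acc:.3f}  sd={std_acc:.3f}  "
      f"95% CI=({ci_low:.3f}, {ci_high:.3f})")
```

Reporting the interval rather than the point estimate is exactly the habit the checklist is trying to encourage; with only five runs the interval will be wide, which is itself informative.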
