Discussion about this post

Sarah Dean:

Sounds like it will be an interesting semester! On the action side, some of the things you mention apply in totally deterministic environments (e.g. LQR looks the same even in the absence of process noise). And it's often possible to replace "assume Gaussian noise" with "minimize an appropriate least squares objective" (e.g., for state estimation: https://slides.com/sarahdean-2/08-state-estimation-ml-in-feedback-sys?token=565bwizg#/12/0/3) -- of course, without a Gaussian model, there's no "deeper" motivation for why least squares is the "correct" objective to have.
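A minimal sketch of the equivalence referenced here, under an assumed linear-Gaussian state-space model; the notation below is illustrative and not taken from the linked slides:

```latex
% Assumed model (illustrative, not from the linked slides):
%   x_{t+1} = A x_t + w_t,   w_t ~ N(0, Q)
%   y_t     = C x_t + v_t,   v_t ~ N(0, R)
% Up to constants and an initial-condition term, the MAP trajectory estimate
% minimizes a weighted sum of squared residuals:
\[
  \hat{x}_{0:T} \in \arg\min_{x_{0:T}}
    \sum_{t=0}^{T-1} \lVert x_{t+1} - A x_t \rVert_{Q^{-1}}^{2}
  + \sum_{t=0}^{T}   \lVert y_t - C x_t \rVert_{R^{-1}}^{2}.
\]
```

Read in the other direction, one can take this least-squares objective as the primitive and drop the Gaussian model entirely; the Gaussian assumption is then just one story for why these particular squared-error terms and weights are the "correct" choice, which is exactly the caveat noted above.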

On the static prediction side, I find the framing of Michael Kim's "Outcome Indistinguishability" helpful for thinking about where uncertainty comes from. I also like the philosophy paper that inspired the work: https://link.springer.com/article/10.1007/s11229-015-0953-4. It provides a nice taxonomy of interpretations of probability. (I made some summary slides of it here: https://slides.com/sarahdean-2/aipp-probability-time-individual-risk?token=vx-PDQk9)

Misha Belkin:

Are you planning to teach anything about LLMs, Ben?

