Very interesting lecture! Looking forward to your next one on interpolation. I wonder whether, intuitively speaking, minimizing interpolation error should require us to find computable features that reduce the variance of the outcome conditioned on those features, to connect with your derivation in the actuarial case.
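To make my intuition concrete, here is a toy numerical sketch (my own construction, numpy only; nothing here is from the lecture) of the decomposition I have in mind: for squared error, the best predictor given a feature X is the conditional mean E[Y|X], and the achievable error floor is E[Var(Y|X)], so better features are exactly those that shrink the conditional variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed for illustration): a computable feature X with
# four levels, and an outcome Y = signal(X) + unit-variance noise.
n = 100_000
x = rng.integers(0, 4, size=n)
y = 2.0 * x + rng.normal(0.0, 1.0, size=n)

# The squared-error-optimal predictor from X is the conditional mean E[Y|X].
cond_mean = {v: y[x == v].mean() for v in range(4)}
pred = np.array([cond_mean[int(v)] for v in x])
mse = np.mean((y - pred) ** 2)

# Law of total variance: Var(Y) = Var(E[Y|X]) + E[Var(Y|X)].
# E[Var(Y|X)] is the irreducible error given these features.
floor = sum((x == v).mean() * y[x == v].var() for v in range(4))

print(f"MSE of E[Y|X]: {mse:.3f}")  # ~1.0, the noise variance
print(f"E[Var(Y|X)]:   {floor:.3f}")  # ~1.0, coincides with the MSE
```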
You also mentioned that in the real world we often face a prediction problem somewhere between the actuarial and interpolative extremes. Suppose, abstractly, that our observations are causally generated by some latent variables of the system, such that prediction would be actuarial conditioned on the latent variables but interpolative conditioned on the observations. If we optimize our model by reducing prediction error on the observations, how likely is it that we obtain features similar to the latent variables? In biology, variance in your actuarial sense is the norm, but I sometimes wonder whether, with the overwhelming interest in training deep neural nets to interpolate data in our field nowadays, we are losing our grasp on mechanistically (whatever that means) understanding this inherent variability.
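One way to play with that question in the simplest possible setting (a toy linear sketch of my own; the generative model, dimensions, and noise levels are all assumptions): generate latents Z, observe X as a noisy linear mixing of Z, let Y depend only on Z, then fit a predictor on X and ask how much of Z its learned feature captures.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed generative model: latents Z -> observations X = Z @ A + noise,
# and an outcome Y = Z @ w + noise that depends only on the latents.
n, d_z, d_x = 50_000, 2, 10
Z = rng.normal(size=(n, d_z))
A = rng.normal(size=(d_z, d_x))
X = Z @ A + 0.1 * rng.normal(size=(n, d_x))
w = np.array([1.5, -2.0])
Y = Z @ w + 0.5 * rng.normal(size=n)

# Fit the squared-error-optimal linear predictor of Y from X.
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
feature = X @ beta  # the model's one learned "feature"

# Prediction error gives no incentive to represent outcome-irrelevant
# latents: the learned feature tracks the one direction of Z that
# drives Y (Z @ w), not the full latent vector.
print(np.corrcoef(feature, Z @ w)[0, 1])    # close to 1
print(np.corrcoef(feature, Z[:, 0])[0, 1])  # only partial (~0.6)
```

At least in this linear case, then, "features similar to the latent variables" seems well-defined only up to the outcome-relevant combinations of latents and invertible reparameterizations of them.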
There's an interesting literature on the "hot hand" effect: the early work was generally read as refuting it, but subsequent work has been more supportive. I've never looked into it in detail.