Discussion about this post

Michael A. Alcorn:

You might like this blog post of mine from 2017, "Are Linear Models *Actually* 'Easily Interpretable'?": https://www.linkedin.com/pulse/linear-models-actually-easily-interpretable-michael-a-alcorn/. I focused specifically on how tempting it is for people to interpret linear models causally even when it's completely unjustified.
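A minimal sketch of the kind of trap the linked post discusses (the simulation and variable names here are illustrative, not taken from the post): a hidden confounder makes the naive regression slope on X look large and "meaningful", even though X has no causal effect on Y at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hidden confounder Z drives both X and Y; X has NO causal effect on Y.
z = rng.normal(size=n)
x = z + 0.5 * rng.normal(size=n)
y = 2.0 * z + 0.5 * rng.normal(size=n)

# Regress Y on X alone (with intercept): the fitted slope is large,
# inviting the unjustified causal reading "increasing X raises Y".
A = np.column_stack([x, np.ones(n)])
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

# Including Z in the regression drives the coefficient on X toward
# its true causal value: zero.
B = np.column_stack([x, z, np.ones(n)])
slope_adj = np.linalg.lstsq(B, y, rcond=None)[0][0]

print(f"naive slope on X:      {slope:.2f}")      # far from 0
print(f"slope adjusting for Z: {slope_adj:.2f}")  # near 0
```

Both fits are perfectly good *predictive* linear models; only the causal reading of the first one is wrong, which is exactly what makes the "easily interpretable" label slippery.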

Maxim Raginsky:

If you want, you can even drill down to the transistor level to marvel at all the layers of physical nonlinearity that have to be composed before you get to the abstraction of digital logic, which you can wrap in further layers of abstraction to get to Python code that you use to implement your linear predictor.

