Discussion about this post

Nick:

I've been studying approximation algorithms lately and was surprised to find that many of the tricks used to relax discrete symbols into objects amenable to continuous optimization are very similar to the representations used in ML, for example unit vectors and simplices. Perhaps the theory of data representation is simply the study of convex relaxations of integer programs. A minimal sketch of what I mean follows.
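Here is a toy illustration of that kind of relaxation, on a made-up linear assignment cost (everything below is invented for the example, not from the post): the one-hot "unit vector" choice per item is relaxed to a point on the probability simplex via a softmax parametrization, optimized by plain gradient descent, then rounded back to a hard assignment. For a linear cost the rounding is trivial; real relaxations (max-cut, matching) have constraints that make it much harder.

```python
import numpy as np

# Toy integer program: assign each of n items to one of k classes,
# minimizing a linear cost. The discrete choice "item i gets class j"
# is a one-hot (unit) vector; relaxing it to the probability simplex
# turns the problem into continuous optimization.

rng = np.random.default_rng(0)
n, k = 5, 3
cost = rng.normal(size=(n, k))  # cost[i, j]: cost of giving item i class j

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Parametrize each row of the assignment matrix as a simplex point
# via softmax of free logits, then run gradient descent on <p, cost>.
logits = np.zeros((n, k))
lr = 0.5
for _ in range(200):
    p = softmax(logits)  # relaxed (soft) assignments
    # d<p_i, c_i>/dlogits_ij = p_ij * (c_ij - <p_i, c_i>)
    grad = p * (cost - (p * cost).sum(axis=1, keepdims=True))
    logits -= lr * grad

# Round the relaxed solution back to unit vectors (hard assignments).
hard = softmax(logits).argmax(axis=1)
print(hard, cost.argmin(axis=1))  # matches the per-row minimizer here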

Ziyuan Zhao:

You said, “But everyone has a target for their final representation.” I wonder what you would say about the unsupervised learning community. Also, techniques for representation learning such as Fourier and wavelet transforms long predate modern machine learning. So I wonder whether prediction tasks are really all we need to learn good representations, or whether they are just convenient because we can then apply the decision and optimization methods taught in your class.
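To make that concrete, a minimal sketch (the 5 Hz toy signal is invented for illustration): the Fourier transform is a fixed, analytically chosen representation that surfaces useful features with no labels or prediction objective anywhere in its definition.

```python
import numpy as np

# A fixed, non-learned representation: the discrete Fourier transform.
# No prediction task is involved in defining the basis, unlike
# representations learned end to end against a target.

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 256, endpoint=False)      # 1 second at 256 Hz
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.normal(size=t.size)

coeffs = np.fft.rfft(signal)          # Fourier representation of the signal
magnitudes = np.abs(coeffs) / t.size  # per-frequency magnitude

# The dominant non-DC coefficient recovers the 5 Hz component, a
# useful feature found with zero supervision.
print(np.argmax(magnitudes[1:]) + 1)  # -> 5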
