
Neural Networks Engineering

Wrote an article on Medium about fitting fastText into Colab.

TL;DR: the original fastText binary is too large for Colab.
We can shrink it, but the n-gram matrix is tricky: when folding it into fewer buckets, we need to keep the collision distribution uniform.
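To make the bucket-folding idea concrete, here is a minimal sketch (not the article's actual code; the function name and the modulus remapping are assumptions) of shrinking an n-gram embedding matrix by remapping old buckets into a smaller number of new ones and averaging colliding rows. A simple `i % new_buckets` remap spreads the old buckets evenly, which keeps the collision distribution uniform:

```python
import numpy as np

def shrink_ngram_matrix(ngram_vectors: np.ndarray, new_buckets: int):
    """Fold a large n-gram embedding matrix into a smaller one.

    Old bucket i is remapped to i % new_buckets; rows that collide
    in the same new bucket are averaged.
    """
    old_buckets, dim = ngram_vectors.shape
    new_matrix = np.zeros((new_buckets, dim), dtype=ngram_vectors.dtype)
    counts = np.zeros(new_buckets, dtype=np.int64)
    for i in range(old_buckets):
        j = i % new_buckets  # uniform remap of old buckets onto new ones
        new_matrix[j] += ngram_vectors[i]
        counts[j] += 1
    # Average the colliding rows (guard against empty buckets).
    new_matrix /= np.maximum(counts, 1)[:, None]
    return new_matrix, counts

# Toy example: fold 2000 bucket vectors into 250 (8x shrink).
rng = np.random.default_rng(0)
vecs = rng.normal(size=(2000, 16)).astype(np.float32)
small, counts = shrink_ngram_matrix(vecs, 250)
```

Here `counts` is exactly 8 everywhere, i.e. the collisions are perfectly uniform; with a non-uniform remap some buckets would average many unrelated n-grams and their vectors would degrade.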

The final model takes 2 GB of RAM instead of 16 GB and is 94% similar to the original model.

Code is also provided.