Wrote an article on Medium about pushing fastText into Colab.
Tl;dr: the original binary fastText model is too large for Colab. We can shrink it, but the n-gram matrix is the tricky part: when folding it into fewer hash buckets, we need to keep the collision distribution uniform.
The final model takes 2 GB of RAM instead of 16 GB and is 94% similar to the original model.
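The folding idea can be sketched roughly as follows. This is not the article's exact code; it is a minimal illustration assuming we remap each old n-gram bucket into a smaller table with a multiplicative mixing hash (so collisions spread uniformly) and average the vectors that collide. The function name and parameters are hypothetical.

```python
import numpy as np

def shrink_ngram_matrix(ngram_vectors: np.ndarray, new_buckets: int):
    """Hypothetical sketch: fold a large n-gram embedding matrix into fewer buckets."""
    old_buckets, dim = ngram_vectors.shape
    idx = np.arange(old_buckets, dtype=np.uint64)
    # Multiplicative mixing (Knuth's constant) before the modulo keeps the
    # collision distribution close to uniform; a plain `i % new_buckets`
    # would cluster collisions among indices sharing a residue pattern.
    mapping = ((idx * np.uint64(2654435761)) % np.uint64(new_buckets)).astype(np.int64)
    shrunk = np.zeros((new_buckets, dim), dtype=np.float64)
    counts = np.zeros(new_buckets, dtype=np.int64)
    # Accumulate all old vectors that land in the same new bucket, then average.
    np.add.at(shrunk, mapping, ngram_vectors)
    np.add.at(counts, mapping, 1)
    shrunk /= np.maximum(counts, 1)[:, None]
    return shrunk.astype(ngram_vectors.dtype), mapping
```

At lookup time, an n-gram's original bucket id is passed through the same `mapping` to find its row in the shrunk matrix; averaging colliding rows is one simple merge strategy among several.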
A channel about neural network engineering and machine learning mastery. Experiments, tool reviews, personal research. #deep_learning. #NLP. Author @generall93...