

Hot news: https://ai.facebook.com/blog/large-language-model-llama-meta-ai/

Training smaller foundation models like LLaMA is desirable in the large language model space because it requires far less computing power and resources to test new approaches, validate others' work, and explore new use cases. Foundation models train on a large set of unlabeled data, which makes them ideal for fine-tuning on a variety of tasks. We are making LLaMA available in several sizes (7B, 13B, 33B, and 65B parameters) and also sharing a LLaMA model card that details how we built the model in keeping with our approach to Responsible AI practices.
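If you get access to the weights, here's a minimal sketch of running inference on the 7B model. It assumes you've converted the released checkpoints to the Hugging Face format; the local path is hypothetical, and this is not the loading code from the official repo (which ships its own example script).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./llama-7b-hf"  # hypothetical local path to converted weights

tokenizer = AutoTokenizer.from_pretrained(model_path)
# fp16 keeps the 7B model within a single ~16 GB GPU
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16)
model.to("cuda")

prompt = "Foundation models are"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```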

In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.

Model card: https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md

Paper: https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/

Form to apply: https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform

Unfortunately, the license permits non-commercial use only :(

"You will not, and will not permit, assist or cause any third party to:

a. use, modify, copy, reproduce, create derivative works of, or distribute the Software Products (or any derivative works thereof, works incorporating the Software Products, or any data produced by the Software), in whole or in part, for (i) any commercial or production purposes ... "