Meet CLIP (Contrastive Language–Image Pre-training), the new neural network from OpenAI: it can be instructed in natural language to perform a great variety of classification benchmarks without directly optimizing for each benchmark's performance, similar to the "zero-shot" capabilities of GPT-2 and GPT-3. CLIP combines zero-shot transfer, natural language supervision, and multimodal learning to recognize a wide variety of visual concepts in images and associate them with their names. Read more about this unique ML model at https://openai.com/blog/clip/
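The core idea behind CLIP's zero-shot classification is simple: embed the image and each candidate text label into a shared space, then pick the label whose embedding is most similar to the image's. Below is a minimal sketch of that mechanism using toy NumPy vectors in place of CLIP's real image and text encoders; the embeddings, labels, and the `zero_shot_classify` helper are illustrative assumptions, not OpenAI's actual API.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, labels, temperature=100.0):
    """Pick the label whose text embedding is most similar to the image embedding.

    Toy stand-in for CLIP's zero-shot procedure: L2-normalize embeddings,
    take scaled cosine similarities, and softmax over the candidate labels.
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = temperature * (txt @ img)      # scaled cosine similarities
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                    # softmax over labels
    return labels[int(np.argmax(probs))], probs

# Hypothetical 4-dimensional embeddings standing in for CLIP's encoders.
labels = ["a photo of a dog", "a photo of a cat", "a photo of a car"]
text_embs = np.array([
    [1.0, 0.1, 0.0, 0.0],   # "dog" prompt embedding (made up)
    [0.0, 1.0, 0.1, 0.0],   # "cat" prompt embedding (made up)
    [0.0, 0.0, 0.1, 1.0],   # "car" prompt embedding (made up)
])
image_emb = np.array([0.9, 0.2, 0.0, 0.1])  # closest to the "dog" prompt

best, probs = zero_shot_classify(image_emb, text_embs, labels)
print(best)  # "a photo of a dog"
```

In real CLIP, the labels are wrapped in prompts like "a photo of a {label}" exactly as above, and the encoders were trained contrastively so that matching image–text pairs end up with high cosine similarity.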
Big Data Science channel gathers together all interesting facts about Data Science. For cooperation: a.chernobrovov@gmail.com. 💼 — https://t.me/bds_job — channel about Data Science jobs and car...