Data Science by ODS.ai 🦜

Channel address: @opendatascience
Categories: Technologies
Language: English
Subscribers: 51.62K
Description from channel

First Telegram Data Science channel. Covering technical and popular topics related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math, and their applications. To reach the editors, contact: @haarrp

Ratings & Reviews

2.67 (3 reviews)

5 stars: 1
4 stars: 0
3 stars: 0
2 stars: 1
1 star: 1


The Latest Messages

2024-03-10 09:34:10
LLM models are in their childhood years

Source.
15.1K views · 06:34
2024-02-01 23:54:09 [media post without a text caption]
14.7K views · 20:54
2023-10-02 08:25:01
Well, AI can learn that humans might be deceptive.

Upd: as our readers noted, the post was originally written by Denis here, but then Yudkowsky retweeted it and it spread on X.
13.6K views · 05:25
2023-09-27 13:50:49 Here are some very interesting notes on how Stable Diffusion models trained on different datasets behave when generating from the same noise. It seems very counterintuitive!

https://twitter.com/mokadyron/status/1706618451664474148
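
Not from the linked thread, just a rough sketch of the setup it discusses: feeding the same starting noise to Stable Diffusion checkpoints trained on different data. The two model ids below are placeholders, and the fixed seed is what keeps the initial latents identical across models.

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder checkpoints: any two SD-style models trained on different data would do.
model_ids = ["runwayml/stable-diffusion-v1-5", "stabilityai/stable-diffusion-2-1-base"]

prompt = "a photo of a dog in a park"
images = []

for model_id in model_ids:
    pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
    # Re-seeding identically gives every checkpoint the same initial latent noise.
    generator = torch.Generator(device="cuda").manual_seed(42)
    images.append(pipe(prompt, generator=generator, num_inference_steps=30).images[0])
```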
13.1K views · 10:50
2023-09-14 20:22:31
Introducing Würstchen: Fast Diffusion for Image Generation

A diffusion model whose text-conditional component works in a highly compressed latent space of images.

Würstchen is a diffusion model that works in a highly compressed latent space of images.

Why does this matter? Compressing the data reduces the computational cost of both training and inference by orders of magnitude.

Training on 1024×1024 images is much more expensive than on 32×32. Other models typically use comparatively modest compression, in the range of 4x to 8x spatial compression.

Thanks to the new architecture, a 42x spatial compression is achieved!

HF: https://huggingface.co/blog/wuertschen

Paper: https://arxiv.org/abs/2306.00637

Docs: https://huggingface.co/docs/diffusers/main/en/api/pipelines/wuerstchen

Demo: https://huggingface.co/spaces/warp-ai/Wuerstchen

ai_machinelearning_big_data
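
For reference, a minimal usage sketch along the lines of the linked HF docs; the exact pipeline arguments and model id may differ between diffusers versions, so treat this as an approximation:

```python
import torch
from diffusers import AutoPipelineForText2Image

# Model id assumed from the demo space above; check the linked docs if it differs.
pipe = AutoPipelineForText2Image.from_pretrained(
    "warp-ai/wuerstchen", torch_dtype=torch.float16
).to("cuda")

prompt = "an astronaut riding a horse, photorealistic"
# The 42x-compressed latent space is what keeps 1024x1024 sampling cheap.
image = pipe(prompt=prompt, height=1024, width=1024).images[0]
image.save("wuerstchen_sample.png")
```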
13.6K views · 17:22
2023-06-08 07:50:24 ​​StableRep: Synthetic Images from Text-to-Image Models Make Strong Visual Representation Learners

In a ground-breaking exploration of visual representation learning, researchers have leveraged synthetic images produced by leading text-to-image models, specifically Stable Diffusion, achieving promising results. The study uncovers two key insights - firstly, when configured correctly, self-supervised methods trained on synthetic images can match or even outperform those trained on real images. This suggests an exciting avenue for efficient and effective representation learning, reducing the need for extensive real image datasets.

Secondly, the researchers have devised a novel approach called StableRep, a multi-positive contrastive learning method that treats multiple images generated from the same text prompt as mutual positives. The compelling finding is that StableRep, trained solely with synthetic images, outperforms representations learned by prominent methods such as SimCLR and CLIP, even when those methods were trained on real images. In a striking demonstration, when language supervision is added, StableRep trained with 20M synthetic images outperforms CLIP trained with a whopping 50M real images. These findings not only underscore the potential of synthetic data but also pave the way for more efficient, large-scale visual representation learning.
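
A minimal PyTorch sketch of such a multi-positive contrastive objective (not the authors' code; it assumes a batch in which several images share a generating prompt):

```python
import torch
import torch.nn.functional as F

def multi_positive_contrastive_loss(embeddings, prompt_ids, temperature=0.1):
    # embeddings: (N, D) features of N generated images.
    # prompt_ids: (N,) id of the prompt each image came from; images sharing
    #             a prompt id are treated as mutual positives.
    z = F.normalize(embeddings, dim=-1)
    logits = z @ z.t() / temperature                      # (N, N) similarity matrix
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    logits = logits.masked_fill(eye, float("-inf"))       # exclude self-similarity

    # Target distribution: uniform over the other images from the same prompt.
    pos_mask = (prompt_ids.unsqueeze(0) == prompt_ids.unsqueeze(1)) & ~eye
    targets = pos_mask.float() / pos_mask.sum(dim=1, keepdim=True).clamp(min=1)

    log_probs = F.log_softmax(logits, dim=1)
    return -(targets * log_probs).sum(dim=1).mean()
```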

Paper link: https://arxiv.org/abs/2306.00984

A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-stablerep

#deeplearning #cv #nlp #stablediffusion #texttoimage #syntheticdata
3.2K views · 04:50
2023-06-05 07:53:43 ​​The effectiveness of MAE pre-pretraining for billion-scale pretraining

Revolutionizing the current pretrain-then-finetune paradigm of computer vision, this research has introduced an innovative pre-pretraining stage. Utilizing the Masked Autoencoder (MAE) technique for model initialization, this pre-pretraining strategy scales with the size of both the model and the data. This makes it an ideal tool for training next-generation foundation models, even on the grandest scales.
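
At the core of MAE initialization is masking a large fraction of image patches and training the model to reconstruct them. Here is a minimal sketch of the random masking step, following the general MAE recipe rather than this paper's code:

```python
import torch

def random_patch_masking(patch_tokens, mask_ratio=0.75):
    # patch_tokens: (B, N, D) patch embeddings of an image batch.
    # Returns the visible tokens fed to the encoder, plus the indices needed
    # to restore the original order for the decoder that reconstructs the rest.
    B, N, D = patch_tokens.shape
    n_keep = int(N * (1 - mask_ratio))

    noise = torch.rand(B, N, device=patch_tokens.device)  # one random score per patch
    ids_shuffle = noise.argsort(dim=1)                     # random permutation of patches
    ids_restore = ids_shuffle.argsort(dim=1)               # inverse permutation

    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patch_tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return visible, ids_restore
```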

The robustness of the pre-pretraining technique is demonstrated by consistent improvement in model convergence and downstream transfer performance across diverse model scales and dataset sizes. The authors measured the effectiveness of pre-pretraining on a wide array of visual recognition tasks, and the results have been promising. The largest model achieved unprecedented results on iNaturalist-18 (91.3%), 1-shot ImageNet-1k (62.1%), and zero-shot transfer on Food-101 (96.0%), underlining the tremendous potential of proper model initialization, even when handling web-scale pretraining with billions of images.

Paper link: https://arxiv.org/abs/2303.13496

A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-mae-pretrain

#deeplearning #cv #pretraining #selfsupervisedlearning
2.5K views · 04:53
2023-06-02 17:33:36
CodeTF: One-stop Transformer Library for State-of-the-art Code LLM (Salesforce)

The authors present CodeTF, an open-source Transformer-based library for state-of-the-art Code LLMs and code intelligence. CodeTF is designed with a unified interface to enable rapid access and development across different types of models, datasets, and tasks. The library supports a collection of pretrained Code LLM models and popular code benchmarks, including a standardized interface to train and serve code LLMs efficiently, and data features such as language-specific parsers and utility functions for extracting code attributes.
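
CodeTF's own interface is documented in the repository; as a stand-in illustration of the kind of code LLM it wraps, here is plain transformers usage of Salesforce's CodeT5+ for span infilling (this is not CodeTF's API):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "Salesforce/codet5p-220m"  # one of the Salesforce code LLMs
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# <extra_id_0> marks the span the model should fill in, T5-style.
code = "def print_hello_world():<extra_id_0>"
inputs = tokenizer(code, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```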
3.9K views · 14:33
2023-06-01 12:29:00 ​​QLoRA: Efficient Finetuning of Quantized LLMs

This paper introduces QLoRA, a novel finetuning approach that decreases memory usage significantly, while maintaining impressive performance. Imagine this - a 65 billion parameter model finetuned on a single 48GB GPU, while preserving full 16-bit task performance. This method involves backpropagating gradients through a frozen, 4-bit quantized pretrained language model into Low Rank Adapters, a method that opens up new frontiers in machine learning. The icing on the cake is their high-performing model family, Guanaco, which trumps all previously released models on the Vicuna benchmark, achieving a staggering 99.3% of the performance level of ChatGPT with just 24 hours of finetuning on a single GPU.

The study also unveils several innovative techniques to conserve memory without compromising performance. These include 4-bit NormalFloat (NF4), an innovative data type that is theoretically optimal for normally distributed weights, double quantization for average memory footprint reduction, and paged optimizers to handle memory spikes. The QLoRA approach was applied to finetune more than 1000 models, leading to a detailed analysis of instruction following and chatbot performance across various model types and scales. The results affirm that QLoRA finetuning on a small, high-quality dataset yields state-of-the-art results, even with smaller models than previously used. A notable finding is that GPT-4 evaluations offer a cost-effective alternative to human evaluation. All models and code, including CUDA kernels for 4-bit training, have been released by the researchers.
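
The recipe maps fairly directly onto the Hugging Face stack (transformers + bitsandbytes + peft). A minimal sketch, with a placeholder base model and illustrative LoRA hyperparameters rather than the authors' exact training setup:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "huggyllama/llama-7b"  # placeholder; any causal LM works

# 4-bit NF4 quantization with double quantization, as described in the paper.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Only the low-rank adapters are trainable; the 4-bit base model stays frozen.
lora_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Paged optimizers then come in on the training side, e.g. via optim="paged_adamw_8bit" in TrainingArguments.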

Paper link: https://arxiv.org/abs/2305.14314
Code link: https://github.com/artidoro/qlora
CUDA kernels link: https://github.com/TimDettmers/bitsandbytes

A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-qlora
#deeplearning #nlp #llm #quantization
4.4K views · 09:29
2023-05-30 09:07:49 ​​Chain of Hindsight Aligns Language Models with Feedback

AI language models are becoming a major part of our digital world. The challenge, however, lies in aligning these models with human preferences to be genuinely useful and valuable. Current methods, although successful in many ways, have limitations - they are either inefficient in utilizing data or depend heavily on challenging reward functions and reinforcement learning.

Here comes "Chain of Hindsight," an exciting, novel technique inspired by human learning mechanisms. It can learn from any form of feedback, even transforming it into language for fine-tuning the model. This approach conditions the model on a sequence of model generations paired with feedback, helping it learn to correct negative attributes or errors. It is significantly outperforming previous methods, particularly showing major strides in summarization and dialogue tasks.
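
A toy sketch (not the paper's code) of the basic idea of turning feedback into plain-language training sequences; the prompt and answer strings are made up for illustration:

```python
def build_coh_example(prompt, good_answer, bad_answer):
    # The model is trained to continue sequences like this one, so it learns
    # which attributes correspond to "helpful" vs. "unhelpful" outputs and can
    # later be prompted to produce only the helpful kind.
    return (
        f"{prompt}\n"
        f"A helpful answer: {good_answer}\n"
        f"An unhelpful answer: {bad_answer}"
    )

example = build_coh_example(
    prompt="Summarize: The cat sat on the mat while it rained outside.",
    good_answer="A cat stayed on a mat indoors during the rain.",
    bad_answer="Cats are mammals.",
)
print(example)
```
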
Paper link: https://arxiv.org/abs/2302.02676

A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-coh
#deeplearning #nlp #llm
5.3K views · 06:07