
Data Science by ODS.ai 🦜

Channel address: @opendatascience
Categories: Technologies
Language: English
Subscribers: 51.80K
Description from channel

First Telegram Data Science channel. Covering all technical and popular stuff about anything related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math and the applications of the former. To reach the editors, contact: @haarrp

Ratings & Reviews

2.67 out of 5 (3 reviews): 5 stars: 1, 4 stars: 0, 3 stars: 0, 2 stars: 1, 1 star: 1


Latest messages (29)

2021-04-02 11:57:10 ​​EfficientNetV2: Smaller Models and Faster Training

A new paper from Google Brain with a new SOTA architecture called EfficientNetV2. The authors develop a new family of CNN models that are optimized both for accuracy and training speed. The main improvements are:

- an improved training-aware neural architecture search with new building blocks and ideas to jointly optimize training speed and parameter efficiency;
- a new approach to progressive learning that adjusts regularization along with the image size;

As a result, the new models reach SOTA results while training up to 11x faster and being up to 6.8x smaller.
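A minimal sketch of the progressive-learning idea, i.e. growing the image size and the regularization strength together over training stages; the schedule values and helper name below are illustrative, not the paper's exact settings:

```python
def progressive_stage(stage, num_stages=4,
                      min_size=128, max_size=300,
                      min_dropout=0.1, max_dropout=0.3):
    """Linearly ramp image size and regularization from weak to strong."""
    t = stage / max(num_stages - 1, 1)
    image_size = int(min_size + t * (max_size - min_size))
    dropout = min_dropout + t * (max_dropout - min_dropout)
    return image_size, dropout

for stage in range(4):
    size, drop = progressive_stage(stage)
    # train for a fixed number of epochs at this image size / dropout rate
    print(f"stage {stage}: image_size={size}, dropout={drop:.2f}")
```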

Paper: https://arxiv.org/abs/2104.00298

Code will be available here:
https://github.com/google/automl/tree/master/efficientnetv2

A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-effnetv2

#cv #sota #nas #deeplearning
17.8K views (edited), 08:57
2021-03-29 15:46:43 ​​Few-Shot Text Classification with Triplet Networks, Data Augmentation, and Curriculum Learning

Few-shot text classification is a fundamental NLP task in which a model aims to classify text into a large number of categories, given only a few training examples per category.
The authors suggest several practical ideas for improving model performance on this task:
- using augmentations (synonym replacement, random insertion, random swap, random deletion) together with triplet loss, as sketched below
- using curriculum learning (two-stage and gradual)
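A minimal sketch of one of the augmentations (random swap) combined with a triplet loss on sentence embeddings; the helper names are illustrative, not the authors' code:

```python
import random
import torch
import torch.nn.functional as F

def random_swap(tokens, n_swaps=1):
    """One EDA-style augmentation: randomly swap two token positions."""
    tokens = list(tokens)
    for _ in range(n_swaps):
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull the anchor towards a same-class (possibly augmented) example,
    push it away from a different-class example."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()
```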

Paper: https://arxiv.org/abs/2103.07552

Code: https://github.com/jasonwei20/triplet-loss

A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-nlptriplettricks


#deeplearning #nlp #fewshotlearning #augmentation #curriculumlearning
16.3K views (edited), 12:46
2021-03-29 13:43:40 Silero TTS Released

Surprise! A quick pre-release of Silero Text-to-Speech models!

Speakers

10 voices (each available in 16 kHz and 8 kHz):

- 6 Russian voices;
- 1 English voice;
- 1 German voice, 1 Spanish voice, 1 French voice;

Why is this Different?

- One-line usage (sketched below);
- A large library of voices;
- A fully end-to-end pipeline;
- Natural-sounding speech;
- No GPU or training required;
- Minimalism and lack of dependencies;
- Faster than real-time on one CPU thread (!!!);
- Support for 16kHz and 8kHz out of the box;
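Since the torch.hub release is only announced as upcoming in this post, the snippet below is a hypothetical sketch of what the one-line usage could look like; the repo name, entry point and arguments are assumptions, not a confirmed API:

```python
import torch

# Assumed torch.hub entry point, model name and arguments (illustrative only).
model, example_text = torch.hub.load(
    repo_or_dir='snakers4/silero-models',
    model='silero_tts',
    language='en',
    speaker='lj_16khz')

# Synthesize speech for the bundled example sentence on CPU.
audio = model.apply_tts(texts=[example_text], sample_rate=16000)
```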

Links

- Try our TTS models here;
- Quick summary;
- Performance benchmarks;

Stay tuned for much more detailed PR releases and torch.hub release soon!
5.3K views, 10:43
2021-03-27 12:44:38 ​​Predicting body movement based on data from socks

Researchers from #MIT have opened a new vector of opportunities for #techwear by making smart socks.

Website: https://www.csail.mit.edu/news/smart-clothes-can-measure-your-movements
Video: http://senstextile.csail.mit.edu/file/overview.mp4
Paper: https://www.nature.com/articles/s41928-021-00558-0
4.2K views, 09:44
2021-03-26 00:18:36 ​​Finetuning Pretrained Transformers into RNNs
Microsoft+Deepmind+...

Transformers are the current SOTA in language modeling, but they come with significant computational overhead, as the attention mechanism scales quadratically with sequence length. Memory consumption also grows linearly as the sequence becomes longer. This bottleneck limits the usage of large-scale pretrained generation models such as GPT-3 or image transformers.

Several efficient transformer variants have been proposed recently. For example, a linear-complexity recurrent variant has proven well suited for autoregressive generation. It approximates the softmax attention with randomized or heuristic feature maps, but can be difficult to train or yield suboptimal accuracy.

This work converts a pretrained transformer into its efficient linear-complexity recurrent counterpart with a learned feature map to improve the efficiency while retaining the accuracy. To achieve this, they replace the softmax attention in an off-the-shelf pretrained transformer with its linear-complexity recurrent alternative and then finetune.
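A minimal sketch of the general idea of replacing softmax attention with a kernelized, linear-complexity variant; the elu+1 feature map below is a common heuristic stand-in, whereas the paper learns the feature map, and the non-causal form is shown for brevity (the autoregressive case uses prefix sums / a recurrent state):

```python
import torch
import torch.nn.functional as F

def feature_map(x):
    # Heuristic positive feature map; the paper learns this map instead.
    return F.elu(x) + 1

def linear_attention(q, k, v):
    """Approximates softmax(QK^T)V with phi(Q)(phi(K)^T V),
    which is linear rather than quadratic in sequence length."""
    q, k = feature_map(q), feature_map(k)               # (B, T, D)
    kv = torch.einsum('btd,bte->bde', k, v)             # (B, D, E)
    z = 1.0 / (torch.einsum('btd,bd->bt', q, k.sum(dim=1)) + 1e-6)
    return torch.einsum('btd,bde,bt->bte', q, kv, z)    # (B, T, E)
```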

Pros:
+ The finetuning process requires much less GPU time than training the recurrent variants from scratch
+ Converting a large off-the-shelf transformer to a lightweight inference model w/o repeating the whole training procedure is very handy in many downstream applications.

arxiv.org/abs/2103.13076
3.8K views, 21:18
2021-03-22 06:58:02 ​​LightningDOT: Pre-training Visual-Semantic Embeddings for Real-Time Image-Text Retrieval

Pre-training transformers jointly on text and images has proved to work quite well for performance on multiple tasks, but such models usually have low inference speed due to cross-modal attention. As a result, these models can hardly be used in practice when low latency is required.

The authors of the paper offer a solution to this problem:
- pre-training on three new learning objectives
- extracting feature indexes offline
- using dot-product matching
- further re-ranking with a separate model

LightningDOT outperforms the previous state of the art while speeding up inference by 600-2000× on the Flickr30K and COCO image-text retrieval benchmarks.
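A minimal sketch of the offline-index plus dot-product retrieval idea; the placeholder embeddings and names below stand in for the paper's actual encoders and re-ranker:

```python
import numpy as np

# Offline: encode the whole image collection once and cache the embeddings.
image_index = np.random.randn(100_000, 512).astype(np.float32)  # placeholder features

def retrieve(text_embedding, top_k=20):
    """Online: a single dot product against the cached index, no cross-modal attention."""
    scores = image_index @ text_embedding            # (num_images,)
    candidates = np.argsort(-scores)[:top_k]
    return candidates, scores[candidates]

# The top-k candidates can then be re-ranked with a slower cross-modal model.
```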

Paper: https://arxiv.org/abs/2103.08784

Code and checkpoints will be available here:
https://github.com/intersun/LightningDOT

A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-lightningdot


#pretraining #realtime #ranking #deeplearning
16.7K views (edited), 03:58
2021-03-16 17:37:30 ​​Revisiting ResNets: Improved Training and Scaling Strategies

The authors of the paper (from Google Brain and UC Berkeley) analyze the effects of model architecture, training, and scaling strategies separately and conclude that these strategies may have a higher impact on final performance than the architecture itself.

They offer two new strategies (sketched below):
- scale model depth if overfitting is possible; scale model width otherwise
- increase image resolution more slowly than recommended in previous papers
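A minimal sketch of the two strategies as a decision helper; the resolution growth rate below is arbitrary and only illustrates "scale resolution more slowly":

```python
def scaling_strategy(can_overfit: bool, resolution: int, budget_multiplier: float) -> dict:
    """Decide what to scale when the compute budget grows."""
    # Strategy 1: scale depth in regimes where overfitting can occur,
    # otherwise scale width.
    scale_dim = 'depth' if can_overfit else 'width'
    # Strategy 2: grow image resolution more slowly than prior work
    # (the 0.25 exponent is an illustrative choice, not the paper's value).
    new_resolution = int(resolution * budget_multiplier ** 0.25)
    return {'scale': scale_dim, 'resolution': new_resolution}

print(scaling_strategy(can_overfit=True, resolution=224, budget_multiplier=2.0))
```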

Based on these ideas, the new architecture ResNet-RS was developed. It is 2.1x–3.3x faster than EfficientNets on GPU while reaching similar accuracy on ImageNet.

In semi-supervised learning, ResNet-RS achieves 86.2% top-1 ImageNet accuracy while being 4.7x faster than EfficientNet-NoisyStudent.

Transfer learning performance on downstream tasks is also improved.

The authors suggest using these ResNet-RS models as a baseline for further research.


Paper: https://arxiv.org/abs/2103.07579

Code and checkpoints are available in TensorFlow:
https://github.com/tensorflow/models/tree/master/official/vision/beta

https://github.com/tensorflow/tpu/tree/master/models/official/resnet/resnet_rs

A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-resnetsr


#deeplearning #computervision #sota
19.0K views, 14:37
2021-03-13 11:37:44 ​​Contrastive Semi-supervised Learning for ASR

Nowadays, pseudo-labeling is the most common method for pre-training automatic speech recognition (ASR) models, but in low-resource setups and under domain transfer it suffers from the degrading quality of the supervised teacher model. The authors of this paper suggest using contrastive learning to overcome this problem.

The CSL (Contrastive Semi-supervised Learning) approach uses teacher-generated predictions to select positive and negative examples instead of using the pseudo-labels directly.
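A minimal sketch of that selection idea: frames whose teacher predictions agree act as positives for each other in an InfoNCE-style loss; the exact loss form and names here are assumptions, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def csl_style_loss(student_feats, teacher_preds, temperature=0.1):
    """student_feats: (N, D) frame embeddings; teacher_preds: (N,) teacher labels.
    Frames sharing a teacher prediction are treated as positive pairs."""
    feats = F.normalize(student_feats, dim=-1)
    sim = feats @ feats.t() / temperature                     # (N, N) similarities
    pos_mask = teacher_preds.unsqueeze(0) == teacher_preds.unsqueeze(1)
    pos_mask.fill_diagonal_(False)                            # exclude self-pairs
    log_prob = F.log_softmax(sim, dim=-1)
    pos_counts = pos_mask.sum(dim=-1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=-1) / pos_counts    # per-anchor average
    return loss[pos_mask.any(dim=-1)].mean()
```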

Experiments show that CSL achieves lower WER than standard CE-PL (cross-entropy pseudo-labeling), and the advantage grows under low-resource and out-of-domain conditions.

To demonstrate its resilience to pseudo-labeling noise, the authors apply CSL pre-training in a low-resource setup with only 10 hours of labeled data, where it reduces WER by 8% compared to standard cross-entropy pseudo-labeling (CE-PL). This WER reduction increases to 19% with a teacher trained on only 1 hour of labels, and to 17% under out-of-domain conditions.


Paper: https://arxiv.org/abs/2103.05149

#deeplearning #asr #contrastivelearning #semisupervised
17.3K views, 08:37
2021-03-12 11:44:05 ​​Self-training improves pretraining for natural language understanding

The authors suggest another way to leverage unlabeled data through semi-supervised learning: they use #SOTA sentence embeddings to index a very large bank of sentences and retrieve task-relevant data for self-training.
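A minimal sketch of the retrieval step: embed a large unlabeled sentence bank once, then pull nearest neighbours of task sentences to feed a self-training loop; the encoder and sizes below are placeholders:

```python
import numpy as np

# Offline: embed a large bank of unlabeled sentences with a sentence encoder
# (random vectors stand in for real embeddings in this sketch).
bank = np.random.randn(1_000_000, 256).astype(np.float32)
bank /= np.linalg.norm(bank, axis=1, keepdims=True)

def retrieve(task_embedding, top_k=100):
    """Cosine-similarity nearest neighbours of a task sentence embedding."""
    scores = bank @ (task_embedding / np.linalg.norm(task_embedding))
    return np.argsort(-scores)[:top_k]

# Retrieved sentences are then labeled by a teacher model and used to
# self-train a student, as described above.
```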

Code: https://github.com/facebookresearch/SentAugment
Link: https://arxiv.org/abs/2010.02194
2.5K views, 08:44