Data Science by ODS.ai 🦜

Channel address: @opendatascience
Categories: Technologies
Language: English
Subscribers: 51.69K
Description from channel

First Telegram Data Science channel. Covering all technical and popular stuff about anything related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math, and the applications of the former. To reach the editors, contact: @haarrp

Ratings & Reviews

2.67 (3 reviews) · 5★: 1 · 4★: 0 · 3★: 0 · 2★: 1 · 1★: 1


The latest messages (13)

2022-06-21 14:43:37 DALL-E Mini Explained with Demo

Tech report:

- Financed by Google Cloud and HF; essentially an advertising campaign for JAX; an 8-person team
- 27x smaller than the original, trained on a single TPU v3-8 for only 3 days + ~3 weeks for experiments, 400M params
- 30m image-text pairs, only 2m used to fine-tune the VQGAN encoder
- Could use preemptible TPU instances
- Pre-trained BART Encoder
- Pre-trained VQGAN encoder
- Pre-trained CLIP is used to select the best generated images (a reranking sketch follows after this list)
- (so the actual cost is probably ~1-2 orders of magnitude higher)
- (compare with 20k GPU days stipulated by Sber)
- The report is expertly written and easy to read
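
As a rough illustration of that CLIP reranking step, here is a minimal sketch using the Hugging Face transformers CLIP API; the candidate image files are hypothetical placeholders, and DALL-E Mini's actual selection code may differ:

```python
# Hedged sketch of CLIP reranking: score candidate images against the prompt
# and keep the best one. The sample_*.png files are hypothetical placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompt = "an armchair in the shape of an avocado"
candidates = [Image.open(f"sample_{i}.png") for i in range(8)]

inputs = processor(text=[prompt], images=candidates, return_tensors="pt", padding=True)
with torch.no_grad():
    # logits_per_image has shape (num_images, num_texts): image-prompt similarity
    scores = model(**inputs).logits_per_image.squeeze(-1)

candidates[scores.argmax().item()].save("best.png")
```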
3.1K views · 11:43
2022-06-15 11:02:20 The Cat is on the Mat

An interesting approach that could be combined with n-gram embeddings when span boundaries are fuzzy.

I guess it can be used downstream with existing sentence parsers.

Such models can be rough and dirty: cheap to train and robust.

- https://explosion.ai/blog/spancat
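
For reference, a minimal inference sketch with spaCy's spancat component (spaCy >= 3.1), which the blog post covers; "my_spancat_model" is a hypothetical trained pipeline:

```python
# Hedged sketch: unlike the token-based NER component, spancat stores its
# (possibly overlapping) predictions in doc.spans under a spans key.
import spacy

nlp = spacy.load("my_spancat_model")  # hypothetical trained spancat pipeline
doc = nlp("The cat is on the mat.")

# "sc" is spancat's default spans key; spans may overlap, unlike doc.ents
for span in doc.spans["sc"]:
    print(span.text, span.label_)
```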
3.8K views · 08:02
2022-06-02 19:00:26
Hi, our friends @mike0sv and @agusch1n just open-sourced MLEM, a tool that helps you deploy your ML models as part of the DVC ecosystem.

It’s a Python library + Command line tool.

TLDR:
MLEM can package an ML model into a Docker image or a Python package, and deploy it to Heroku (we made them promise to add SageMaker, K8s and Seldon-core soon 🦜).

MLEM saves all model metadata to a human-readable text file: Python environment, model methods, model input & output data schema and more.

MLEM helps you turn your Git repository into a Model Registry with features like ML model lifecycle management.

Read more in release blogpost: https://dvc.org/blog/MLEM-release
Also, check out the project: https://github.com/iterative/mlem
And the website: https://mlem.ai
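
To make the save/load workflow concrete, here is a hedged sketch using MLEM's Python API (argument names follow the docs at the time of writing, so treat the details as assumptions):

```python
# Hedged sketch of MLEM's save/load API; MLEM writes a human-readable .mlem
# metadata file (environment, methods, input/output schema) next to the model.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from mlem.api import save, load

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier().fit(X, y)

# sample_data lets MLEM infer the input/output data schema
save(model, "rf-model", sample_data=X)

reloaded = load("rf-model")
print(reloaded.predict(X[:3]))
```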

The guys are happy to hear your feedback and to discuss how this could be helpful for you, how MLEM compares to MLflow, etc.
Ask in the comments!

#mlops #opensource #deployment #dvc
2.9K views · 16:00
2022-05-25 13:57:13
Imagen — new neural network for picture generation from Google

TLDR: A competitor to DALL·E has been released.

Imagen is a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen builds on the power of large transformer language models for understanding text and hinges on the strength of diffusion models for high-fidelity image generation. #Google's key discovery is that generic large language models (e.g. T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: increasing the size of the language model in Imagen boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model.

Website: https://imagen.research.google

#GAN #CV #DL #Dalle
3.4K views · 10:57
2022-04-12 21:28:36 Silero TTS V3 Finally Released

We have just released a brand new Russian speech synthesis model.

We made a number of promises, and we kept them:

- Model size reduced 2x;
- New models are 10x faster (!);
- We added flags to control stress;
- Now the models can make proper pauses;
- High quality voice added (and unlimited "random" voices);
- All speakers squeezed into the same model;
- Input length limitations lifted, now models can work with paragraphs of text;
- Pauses, speed and pitch can be controlled via SSML;
- Sampling rates of 8, 24 or 48 kHz are supported;
- Models are much more stable — they do not omit words anymore;

Next steps:

- Release models for the CIS languages, English, some European languages and Indic languages
- Even further 2-4x speed up
- Updated stress model
- Phoneme support and built-in voice transfer

Links:

- GitHub - https://github.com/snakers4/silero-models#text-to-speech
- Colab - https://colab.research.google.com/github/snakers4/silero-models/blob/master/examples_tts.ipynb
- Russian article - https://habr.com/ru/post/660565/
- English article - https://habr.com/ru/post/660571/
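
A minimal usage sketch via torch.hub, adapted from the GitHub README linked above (speaker names and argument details may differ between releases, so treat them as assumptions):

```python
# Hedged sketch of Silero TTS V3 usage per the snakers4/silero-models README.
import torch

model, example_text = torch.hub.load(
    repo_or_dir="snakers4/silero-models",
    model="silero_tts",
    language="ru",
    speaker="v3_1_ru",  # the multi-speaker V3 Russian model
)

# Pick one of the bundled voices and a supported sample rate (8/24/48 kHz);
# per the README, SSML input goes through the ssml_text argument instead.
audio = model.apply_tts(
    text=example_text,
    speaker="xenia",
    sample_rate=48000,
)
```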
3.4K views · 18:28
2022-04-11 18:04:28 Big step after the first DALL·E: DALL·E 2

In January 2021, OpenAI introduced DALL·E. One year later, their newest system, DALL·E 2, generates more realistic and accurate images with 4x greater resolution.


The first DALL·E is a transformer model. It receives both the text and the image as a single stream of data containing up to 1280 tokens, and is trained using maximum likelihood to generate all of the tokens, one after another. This training procedure allows DALL·E to not only generate an image from scratch, but also to regenerate any rectangular region of an existing image that extends to the bottom-right corner, in a way that is consistent with the text prompt.

In DALL·E 2 they reformulated the method: it is now CLIP plus a diffusion model.
CLIP encodes the text into a prior, and a diffusion model decodes the resulting embedding into a high-resolution image.
So it's essentially GLIDE, but with some tweaks. To generate high-resolution images, they train two diffusion upsampler models.

But the results are amazing, even though they are cherry-picked, of course :))

- paper
- blog with images and demos
- video
5.5K views · 15:04
2022-04-03 11:36:25
Exploiting Explainable Metrics for Augmented SGD

New explainability metrics that measure the redundant information in a network's layers and exploit it to augment Stochastic Gradient Descent (SGD).

Project

Code: https://github.com/mahdihosseini/rmsgd

Paper: https://arxiv.org/pdf/2203.16723v1.pdf

Dataset: https://paperswithcode.com/dataset/mhist
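
For context, here is a standard PyTorch training step with plain SGD; per the repo above, RMSGD is intended as a drop-in replacement for the optimizer (the commented-out import path and signature are assumptions; check the repo's README):

```python
# Generic PyTorch training step; RMSGD from the repo above is meant to slot in
# where torch.optim.SGD is used. The rmsgd import below is an assumption.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# from rmsgd import RMSGD                        # hypothetical import path
# optimizer = RMSGD(model.parameters(), lr=0.1)  # hypothetical signature

x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(loss.item())
```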

@ArtificialIntelligencedl
9.4K views · 08:36
2022-03-25 13:29:24
Global Tracking Transformers

Github: https://github.com/xingyizhou/GTR

Demo: https://github.com/facebookresearch/detectron2/blob/main/GETTING_STARTED.md

Paper: https://arxiv.org/abs/2203.13250v1

Dataset: https://paperswithcode.com/dataset/mot17

https://t.me/ArtificialIntelligencedl
12.4K views · 10:29
2022-03-23 15:33:38
NeuralSpeech is a research project focusing on neural-network-based speech processing

Github: https://github.com/microsoft/NeuralSpeech

Paper: https://arxiv.org/pdf/2109.14420v3.pdf

Speech Research: https://speechresearch.github.io/

Dataset:
https://paperswithcode.com/dataset/aishell-1

@ai_machinelearning_big_data
12.2K views · 12:33
2022-03-16 10:09:31 Deep Neural Nets: 33 years ago and 33 years from now

Great post by Andrej Karpathy on the progress #CV made in 33 years.

The author's ideas on what a time traveler from 2055 would think about the performance of current networks:

* 2055 neural nets are basically the same as 2022 neural nets on the macro level, except bigger.
* Our datasets and models today look like a joke. Both are somewhere around 10,000,000X larger.
* One can train 2022 state-of-the-art models in ~1 minute by training naively on a personal computing device, as a weekend fun project.
* Today's models are not optimally formulated; just changing some details of the model, loss function, augmentation, or optimizer can roughly halve the error.
* Our datasets are too small, and modest gains would come from scaling up the dataset alone.
* Further gains are actually not possible without expanding the computing infrastructure and investing into some R&D on effectively training models on that scale.


Website: https://karpathy.github.io/2022/03/14/lecun1989/
OG Paper link: http://yann.lecun.com/exdb/publis/pdf/lecun-89e.pdf

#karpathy #archeology #cv #nn
14.7K views · edited · 07:09