
Data Science by ODS.ai 🦜

Channel address: @opendatascience
Categories: Technologies
Language: English
Subscribers: 51.62K
Description from channel

First Telegram Data Science channel. Covering all technical and popular stuff about anything related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math and applications of the former. To reach the editors, contact: @haarrp

Ratings & Reviews

2.67 average from 3 reviews (5★: 1, 4★: 0, 3★: 0, 2★: 1, 1★: 1). Reviews can be left only by registered users; all reviews are moderated by admins.


The latest messages (17)

2021-12-07 13:15:26 Upgini — dataset search automation library

Upgini is a new Python library that automates the search for useful external datasets to boost supervised ML tasks.

It enriches your dataset with intelligently crafted features from a broad range of curated data sources, including open and commercial datasets. The search is conducted over any combination of public IDs contained in your tabular dataset (IP, date, etc.), and only features that could improve the predictive power of your ML model are returned.

The developers say they want to radically simplify data search and delivery for ML pipelines, making external data and features as standard an approach as hyperparameter tuning is in machine learning today.

A free 30-day trial is available.

GitHub: https://github.com/upgini/upgini
Web: https://upgini.com
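The enrichment-by-key idea can be sketched in plain Python. This is a toy stand-in, not Upgini's actual API: `EXTERNAL_FEATURES`, `enrich`, and the column names are all hypothetical, chosen only to show how rows get joined to an external source by a public ID such as a date.

```python
from datetime import date

# Hypothetical external feature source keyed by date
# (a real search would query many curated open/commercial sources).
EXTERNAL_FEATURES = {
    date(2021, 12, 6): {"holiday_flag": 0, "temp_c": -3.0},
    date(2021, 12, 7): {"holiday_flag": 0, "temp_c": -5.0},
}

def enrich(rows, key="event_date"):
    """Join external features onto tabular rows by a public ID (here: a date)."""
    enriched = []
    for row in rows:
        extra = EXTERNAL_FEATURES.get(row[key], {})
        enriched.append({**row, **extra})
    return enriched

rows = [{"event_date": date(2021, 12, 7), "sales": 42}]
print(enrich(rows))
```

A real enricher would additionally score each candidate feature against the target and keep only those that improve the model.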
2021-12-06 08:14:47 EditGAN: High-Precision Semantic Image Editing

Nvidia researchers built an approach for editing segments of a picture, with supposedly real-time updates of the image as the segments are altered. No demo is available yet, though.

Photoshop power users can relax: the appearance of such tools means less work for them, not that demand for manual retouching will cease.

Website: https://nv-tlabs.github.io/editGAN/
ArXiV: https://arxiv.org/abs/2111.03186

#GAN #Nvidia
2021-12-03 07:48:55 Visual Genome: the most labeled dataset
Scientists at Stanford University have collected one of the most densely annotated datasets, with over 100,000 images. In total, the dataset contains almost 5.5 million object descriptions, attributes and relationships. You don't even have to download the dataset: you can get the data you need by calling the RESTful API endpoints with GET requests. Although the latest updates to the dataset date from 2017, it remains an excellent dataset for training models on typical ML problems, from object recognition to graph-based relationship modeling.
https://visualgenome.org/api/v0/api_home.html
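A minimal client sketch follows. The `/images/{id}` route is an assumption based on the API home page linked above, so check the docs for the exact endpoints; `fetch` is a hypothetical helper name.

```python
import json
from urllib.request import urlopen

API_ROOT = "https://visualgenome.org/api/v0"

def image_url(image_id):
    # Assumed route shape; verify against the API home page.
    return f"{API_ROOT}/images/{image_id}"

def fetch(url):
    """GET an endpoint and decode the JSON payload."""
    with urlopen(url) as resp:
        return json.load(resp)

print(image_url(1))  # → https://visualgenome.org/api/v0/images/1
# data = fetch(image_url(1))  # network call; uncomment to run
```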
2021-12-02 14:21:17 YaTalks — Yandex's conference for the IT community.

Yandex will host its traditional conference on 3-4 December (starting tomorrow). Registration is open.

One of the tracks is devoted to Machine/Deep Learning with the focus on content generation.

Featured reports:

How to train a text model on a minimal corpus
How Yandex.Browser Machine Translation works
Facial Expressions Animation

Conference website: https://yatalks.yandex.ru/?from=tg_opendatascience

#conference #mt #nlu
2021-11-30 16:21:59
End-to-End Referring Video Object Segmentation with Multimodal Transformers

Github: https://github.com/mttr2021/MTTR

Paper: https://arxiv.org/abs/2111.14821v1

Dataset: https://kgavrilyuk.github.io/publication/actor_action/

@ai_machinelearning_big_data
2021-11-25 18:34:58 NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion

In this paper, Microsoft Research Asia and Peking University researchers share a unified multimodal (texts, images, videos, sketches) pre-trained model called NÜWA that can generate new or manipulate existing visual data for various visual synthesis tasks. Furthermore, they have designed a 3D transformer encoder-decoder framework with a 3D Nearby Attention (3DNA) mechanism to consider the nature of the visual data and reduce the computational complexity.

NÜWA achieves state-of-the-art results on text-to-image generation, text-to-video generation, video prediction, and several other tasks and demonstrates good results on zero-shot text-guided image and video manipulation tasks.
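The idea behind nearby attention — each query attends only to a local window of the 3D (time × height × width) token grid rather than to everything — can be sketched as follows. This is an illustration of the neighborhood restriction, not the paper's implementation; `nearby_indices` and the window radius are hypothetical names.

```python
def nearby_indices(pos, shape, extent=1):
    """Positions a query at `pos` attends to under a 3D nearby-attention window.

    pos:    (t, h, w) position of the query token in the 3D grid
    shape:  (T, H, W) grid size
    extent: neighborhood radius along each axis
    """
    t, h, w = pos
    T, H, W = shape
    return [
        (ti, hi, wi)
        for ti in range(max(0, t - extent), min(T, t + extent + 1))
        for hi in range(max(0, h - extent), min(H, h + extent + 1))
        for wi in range(max(0, w - extent), min(W, w + extent + 1))
    ]

# A 3x3x3 local window instead of all T*H*W positions:
print(len(nearby_indices((2, 2, 2), (4, 8, 8))))  # → 27
```

Restricting attention to such windows is what cuts the computational complexity relative to full attention over the whole video volume.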

Paper: https://arxiv.org/abs/2111.12417
Code: https://github.com/microsoft/NUWA

A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-nuwa

#deeplearning #cv #transformer #pretraining
2021-11-23 15:32:47 Augmented Reality for Haptic Teleoperation of a Robot with an Event-Based Soft Tactile Sensor

This paper presents a new teleoperation approach using an augmented reality-based interface combined with optimized haptic feedback to finely manipulate visually occluded objects.

The dynamic growth of emerging Augmented Reality (AR) interfaces holds high potential for robotic telemanipulation of objects under limited visibility conditions. In the user's view, the real-world environment is overlaid with virtual images of the robot end-effector and the object. To optimize the user experience in teleoperation, the visual augmentation is accompanied by a haptic stimulus: the rendered contact-force signal is transmitted both visually and haptically. The contact force is measured by an optical event-based tactile sensor (E-BTS) with a soft pad, vibrotactile stimuli are generated by a hand-held device (HHD), and the AR is projected in a head-mounted device (HMD).

The authors experimentally demonstrated their approach on a teleoperated robot arm puncturing an occluded non-rigid membrane placed in a vertical row with similar membranes. A comparative study with 10 subjects was carried out to quantify the impact of AR in a force-control task with a human in the control loop. The results of the experiment show a promising potential application to cable insertion in an industrial assembly task.

Video: YouTube

#AR
2021-11-19 18:02:38 Acquisition of Chess Knowledge in AlphaZero

A 69-page paper analyzing how #AlphaZero plays chess. TLDR: many of the concepts self-learned by the neural network can be mapped to human concepts.

This means that, generally speaking, we can train neural networks to do some task and then learn something from them. The opposite is also true: we might imagine teaching neural networks some human concepts in order to make them more efficient.
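The "learn something from a trained network" direction is typically done by probing internal activations for human concepts. A toy nearest-centroid probe — an illustrative stand-in, not the paper's method, with made-up activation vectors and the concept label invented for the example — might look like:

```python
def fit_centroid_probe(activations, labels):
    """Fit a nearest-centroid probe: per-label mean of activation vectors.

    If a human concept (the label) separates the centroids well, the
    network has plausibly learned something aligned with that concept.
    """
    groups = {}
    for vec, lab in zip(activations, labels):
        groups.setdefault(lab, []).append(vec)
    return {
        lab: [sum(col) / len(vecs) for col in zip(*vecs)]
        for lab, vecs in groups.items()
    }

def predict(centroids, vec):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], vec))

# Toy activations: concept "pinned piece" present (1) vs absent (0)
acts = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
labs = [1, 1, 0, 0]
centroids = fit_centroid_probe(acts, labs)
print(predict(centroids, [0.85, 0.15]))  # → 1
```

Interpretability work usually uses linear probes with held-out evaluation; the centroid version above just keeps the sketch dependency-free.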

Post: https://en.chessbase.com/post/acquisition-of-chess-knowledge-in-alphazero
Paper: https://arxiv.org/pdf/2111.09259.pdf

#RL
2021-11-19 16:37:23 Swin Transformer V2: Scaling Up Capacity and Resolution

The authors present techniques for scaling Swin Transformer up to 3 billion parameters and making it capable of training with images of up to 1,536×1,536 resolution.

Vision models have the following difficulties when trying to scale them up: instability issues at scale, high GPU memory consumption for high-resolution images, and the fact that downstream tasks usually require high-resolution images/windows, while the models are pretrained on lower resolutions and the transfer isn't always efficient.

The authors introduce the following techniques to circumvent those problems:
- a post normalization technique and a scaled cosine attention approach to improve the stability of large vision models;
- a log-spaced continuous position bias technique to effectively transfer models pre-trained at low-resolution images and windows to their higher-resolution counterparts;
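The scaled cosine attention idea can be sketched in plain Python. This is a minimal sketch only: the paper's version uses a learnable per-head temperature and adds the position bias to the logits, both omitted here, and `tau` is fixed for illustration.

```python
import math

def scaled_cosine_attention(q, k, v, tau=0.1):
    """Attention with cosine similarity instead of a dot product.

    Cosine similarity is bounded in [-1, 1], which keeps attention logits
    tame as models scale; tau is a temperature dividing the similarity.
    q: query vector; k: list of key vectors; v: list of value vectors.
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    logits = [cos(q, ki) / tau for ki in k]
    m = max(logits)  # subtract max for a numerically stable softmax
    weights = [math.exp(l - m) for l in logits]
    z = sum(weights)
    weights = [w / z for w in weights]
    # Weighted sum of value vectors
    return [sum(w * vi[j] for w, vi in zip(weights, v)) for j in range(len(v[0]))]

out = scaled_cosine_attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 2.0], [3.0, 4.0]])
print(out)  # ≈ [1.0, 2.0] — nearly all weight lands on the matching key's value
```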

In addition, they share how they were able to decrease GPU consumption significantly.

Swin Transformer V2 sets new records on four representative vision benchmarks: 84.0% top-1 accuracy on ImageNet-V2 image classification, 63.1 / 54.4 box / mask mAP on COCO object detection, 59.9 mIoU on ADE20K semantic segmentation, and 86.8% top-1 accuracy on Kinetics-400 video action classification.

Paper: https://arxiv.org/abs/2111.09883
Code: https://github.com/microsoft/Swin-Transformer

A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-swin-v2

#deeplearning #cv #transformer
2021-11-18 16:03:03
GANformer: Generative Adversarial Transformers

Github: https://github.com/pengzhiliang/MAE-pytorch

Paper: https://arxiv.org/abs/2111.08960

Dataset: https://paperswithcode.com/dataset/coco

@ai_machinelearning_big_data