Swin Transformer V2: Scaling Up Capacity and Resolution

The authors present techniques for scaling Swin Transformer up to 3 billion parameters and making it capable of training with images of up to 1,536×1,536 resolution.

Scaling up vision models runs into several difficulties: training instability at scale, high GPU memory consumption for high-resolution images, and the fact that downstream tasks usually require higher-resolution images/windows than the ones used for pre-training, so the transfer isn't always effective.

The authors introduce the following techniques to address these problems:
- a post-normalization technique and a scaled cosine attention approach to improve the stability of large vision models;
- a log-spaced continuous position bias technique to effectively transfer models pre-trained on low-resolution images and windows to their higher-resolution counterparts (both illustrated in the sketch below).
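For intuition, here is a minimal PyTorch sketch of the two ideas: attention logits computed as scaled cosine similarity instead of scaled dot products, plus a relative position bias produced by a small MLP over log-spaced relative coordinates. The module name, the MLP width, and the normalization constant are illustrative choices of ours, not the official implementation:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledCosineWindowAttention(nn.Module):
    """Sketch of window attention with scaled cosine similarity and a
    log-spaced continuous relative position bias (illustrative, simplified)."""

    def __init__(self, dim, num_heads, window_size):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Learnable per-head temperature; clamping keeps logits bounded.
        self.logit_scale = nn.Parameter(torch.log(10.0 * torch.ones(num_heads, 1, 1)))
        # Small MLP mapping 2-D relative coordinates to a per-head bias.
        self.cpb_mlp = nn.Sequential(
            nn.Linear(2, 512), nn.ReLU(inplace=True), nn.Linear(512, num_heads))
        # Log-spaced relative coordinates: sign(x) * log(1 + |x|), normalized
        # (the normalization constant here is an illustrative choice).
        ws = window_size
        coords = torch.stack(torch.meshgrid(
            torch.arange(ws), torch.arange(ws), indexing="ij"), dim=-1)
        rel = (coords.reshape(-1, 1, 2) - coords.reshape(1, -1, 2)).float()
        rel = torch.sign(rel) * torch.log2(1.0 + rel.abs()) / math.log2(8.0)
        self.register_buffer("rel_coords_log", rel)  # (N, N, 2), N = ws * ws

    def forward(self, x):  # x: (num_windows * B, N, C)
        B_, N, C = x.shape
        qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]
        # Cosine similarity instead of dot product keeps attention values
        # well-scaled even in very large models.
        attn = F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)
        attn = attn * torch.clamp(self.logit_scale, max=math.log(100.0)).exp()
        # Continuous position bias, shaped (num_heads, N, N); because the MLP
        # takes coordinates as input, it generalizes to unseen window sizes.
        bias = self.cpb_mlp(self.rel_coords_log).permute(2, 0, 1)
        attn = (attn + bias.unsqueeze(0)).softmax(dim=-1)
        return self.proj((attn @ v).transpose(1, 2).reshape(B_, N, C))
```

Because the bias comes from an MLP over (log-spaced) coordinates rather than a lookup table indexed by discrete offsets, the same weights can produce biases for larger windows at fine-tuning time, which is what makes the low-to-high resolution transfer work.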

In addition, they describe how they cut GPU memory consumption significantly, using a zero-redundancy optimizer (ZeRO), activation checkpointing, and sequential self-attention computation; a sketch of the second follows.
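As a rough illustration of activation checkpointing (the wrapper name below is hypothetical), intermediate activations inside each block are recomputed during the backward pass instead of being stored, trading extra compute for memory:

```python
import torch
from torch.utils.checkpoint import checkpoint

class CheckpointedStage(torch.nn.Module):
    """Runs a stack of blocks with activation checkpointing: forward
    activations are discarded and recomputed in the backward pass."""

    def __init__(self, blocks):
        super().__init__()
        self.blocks = torch.nn.ModuleList(blocks)

    def forward(self, x):
        for blk in self.blocks:
            # use_reentrant=False is the recommended variant in recent PyTorch.
            x = checkpoint(blk, x, use_reentrant=False)
        return x
```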

Swin Transformer V2 sets new records on four representative vision benchmarks: 84.0% top-1 accuracy on ImageNet-V2 image classification, 63.1 / 54.4 box / mask mAP on COCO object detection, 59.9 mIoU on ADE20K semantic segmentation, and 86.8% top-1 accuracy on Kinetics-400 video action classification.

Paper: https://arxiv.org/abs/2111.09883
Code: https://github.com/microsoft/Swin-Transformer

A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-swin-v2

#deeplearning #cv #transformer