Am Neumarkt 😱

Channel address: @amneumarkt
Categories: Technologies
Language: English
Subscribers: 293
Description from channel

Machine learning and other gibberish
Archives: https://datumorphism.leima.is/amneumarkt/

Ratings & Reviews

3.50 average from 2 reviews (5 stars: 0, 4 stars: 1, 3 stars: 1, 2 stars: 0, 1 star: 0)

The latest Messages

2021-12-15 01:09:11
#visualization #fun

https://www.githubwrapped.com/
76 views · Markt Mai, edited 22:09
2021-12-14 23:40:26 #ML #Transformers


Alammar J. The Illustrated Transformer. [cited 14 Dec 2021]. Available: http://jalammar.github.io/illustrated-transformer/

So good.
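
A minimal numpy sketch of scaled dot-product attention, the core operation the article illustrates; the shapes and random inputs below are only for demonstration.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted sum of values

# toy example: 3 tokens with 4-dimensional query/key/value vectors
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)   # (3, 4)
```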
83 views · Markt Mai, 20:40
2021-12-13 10:40:44 #DS #visualization

https://percival.ink/

A new lightweight language for data analysis and visualization. It looks promising.

I hate Jupyter notebooks and don't use them in most of my projects. One of the reasons is low reproducibility due to their non-reactive nature: if you change an old cell and forget to rerun a cell below it, you may end up reading stale results.
This new language is reactive: when an old cell changes, the results that depend on it are updated automatically.
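
To make the stale-cell problem concrete, here is a hypothetical notebook session written as plain Python, with cells marked by comments:

```python
# Cell 1: define a parameter
threshold = 0.5

# Cell 2: compute a result that depends on the parameter
scores = [0.2, 0.6, 0.9]
n_passing = sum(s > threshold for s in scores)

# Cell 3: report the result
print(f"{n_passing} samples pass")   # prints "2 samples pass"

# Now edit Cell 1 to `threshold = 0.8` and rerun only Cell 1.
# In a classic notebook, Cell 3 keeps showing "2 samples pass" until you
# remember to rerun Cells 2 and 3; in a reactive system such as Percival,
# the dependent cells recompute automatically and stay consistent.
```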
101 views · Markt Mai, edited 07:40
2021-12-11 13:19:22 #ml #rl

How to Train your Decision-Making AIs
https://thegradient.pub/how-to-train-your-decision-making-ais/

The author reviews "five types of human guidance to train AIs: evaluation, preference, goals, attention, and demonstrations without action labels".

The last one reminds me of the movie Finch. In the movie, Finch was teaching the robot to walk by demonstrating walking but without "labels".
117 views · Markt Mai, 10:19
2021-12-05 12:52:55
#visualization

Hmmm, my plate is way off the planetary health diet recommendation.

Source:
https://www.nature.com/articles/d41586-021-03612-1
152 views · Markt Mai, edited 09:52
2021-12-02 13:36:41 #DS

Just in case you are also struggling with Python packages on Apple M1 Macs:

I am using the third option: Anaconda + Miniforge.

https://www.anaconda.com/blog/apple-silicon-transition
167 views · Markt Mai, edited 10:36
2021-12-02 00:31:41 November 25 is the International Day for the Elimination of Violence against Women. Researchers at metaLab randomly sampled one million #MeToo tweets and read through the examples that had been retweeted more than 100 times. Of those 894 tweets, only 8 were actual tweets about sexual assault or personal experiences around the #MeToo theme; the overwhelming majority of the rest were news media posts and political discussions, most of which ignored the concrete issues and survivors' voices at the core of the #MeToo movement. Designer Kim Albrecht built this visualization project to highlight this overlooked violence against women.
140 views · Markt Mai, 21:31
2021-12-01 00:36:22
#visualization

An interactive Visual Vocabulary:

https://ft-interactive.github.io/visual-vocabulary/
138 views · Markt Mai, 21:36
2021-11-29 18:08:42 #tool

https://www.jetbrains.com/fleet/
132 views · Markt Mai, 15:08
2021-11-19 14:59:44 #ML

SHAP (SHapley Additive exPlanations) is a family of methods for interpreting machine learning models.
The author of SHAP built an easy-to-use package that helps us understand how the features contribute to a machine learning model's predictions. The package comes with a comprehensive tutorial covering different machine learning frameworks.

- Python Package: [slundberg/shap](https://shap.readthedocs.io/)
- A tutorial on how to use it: https://www.aidancooper.co.uk/a-non-technical-guide-to-interpreting-shap-analyses/
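
A minimal sketch of the typical shap workflow, assuming a scikit-learn model on tabular data; the dataset and model below are only illustrative.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# illustrative model: a random forest on the diabetes regression dataset
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.Explainer(model, X)    # picks a suitable explainer for the model
shap_values = explainer(X)              # one Shapley value per feature per prediction

shap.plots.beeswarm(shap_values)        # global view: how each feature contributes
shap.plots.waterfall(shap_values[0])    # local view: a single prediction explained
```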

---

The package is so popular that you might be using it already. So what is SHAP exactly? It is a family of methods based on Shapley values.

> SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explain the output of any machine learning model.
>
> -- [slundberg/shap](https://github.com/slundberg/shap)

Regarding the Shapley value: there are two key ideas in calculating it.
- A way to measure the contribution of a given coalition (subset) of features to the final prediction.
- A way to combine these marginal contributions into a single score per feature.

SHAP provides several methods to estimate Shapley values efficiently for different types of models; the brute-force sketch below shows what these methods approximate.
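
To make the two ideas above concrete, here is a brute-force sketch of the exact Shapley value for a toy value function with three features; the feature names and the value function are made up, and v(S) stands in for "the prediction when only the features in S are known".

```python
from itertools import combinations
from math import factorial

features = ["age", "income", "city"]    # made-up feature names

def v(coalition):
    """Toy value function: per-feature contributions plus one interaction."""
    score = 0.0
    if "age" in coalition:
        score += 1.0
    if "income" in coalition:
        score += 2.0
    if "city" in coalition:
        score += 0.5
    if "age" in coalition and "income" in coalition:
        score += 0.5                    # interaction shared between age and income
    return score

def shapley_value(i, features, v):
    """Exact Shapley value of feature i: weighted marginal contributions over all coalitions."""
    n = len(features)
    others = [f for f in features if f != i]
    phi = 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            phi += weight * (v(set(S) | {i}) - v(set(S)))
    return phi

for f in features:
    print(f, round(shapley_value(f, features, v), 3))
# age 1.25, income 2.25, city 0.5; they sum to v(all features) - v(empty set) = 4.0
```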

The following two pages explain Shapley value and SHAP thoroughly.

- https://christophm.github.io/interpretable-ml-book/shap.html
- https://christophm.github.io/interpretable-ml-book/shapley.html

References:
- Lundberg SM, Lee SI. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (NIPS 2017). 2017. Available: http://papers.nips.cc/paper/2017/file/8a20a8621978632d76c43dfd28b67767-Paper.pdf
- Lundberg SM, Nair B, Vavilala MS, Horibe M, Eisses MJ, Adams T, et al. Explainable machine-learning predictions for the prevention of hypoxaemia during surgery. Nature Biomedical Engineering. 2018;2: 749–760. doi:10.1038/s41551-018-0304-0

---
I posted [a similar article years ago in our Chinese data weekly newsletter](https://github.com/data-com/weekly/discussions/27) but for a different story.
195 views · Markt Mai, edited 11:59