
Neural Networks Engineering

Channel address: @neural_network_engineering
Categories: Technologies
Language: English
Subscribers: 2.35K
Description from channel

An author's channel about neural network development and mastering machine learning. Experiments, tool reviews, personal research.
#deep_learning
#NLP
Author @generall93

Ratings & Reviews

3.00 (2 reviews)

5 stars: 1
4 stars: 0
3 stars: 0
2 stars: 0
1 star: 1


The latest messages

2021-05-17 11:00:04 Metric Learning Tips & Tricks

Hi everyone, I don't post on this channel often.
But today, I want to share some insights about what I'm working on.

Over the last year, I have been developing a job-matching system, and in the process I have solved some interesting problems related to metric (similarity) learning.

I decided to collect all the interesting solutions into an article.

Here are some highlights:

- We have embeddings for professions, and you can play with them online
- There is a way to train a model without labeled data, but it requires some tricks
- Hard Negative Mining does not work, but you can increase the batch size instead (see the sketch after this list)
- It is possible to estimate embedding confidence
- We can micro-manage the model without re-training: introducing neural rules
- How we deploy metric learning in production. Spoiler: with Qdrant
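
To give a flavour of the batch-size trick: below is a minimal sketch (my own illustration, not necessarily the exact approach from the article) of a contrastive loss that uses all other items in the batch as negatives, so a larger batch directly means more negatives without any explicit hard negative mining.

import torch
import torch.nn.functional as F

def in_batch_negatives_loss(query_emb, positive_emb, temperature=0.05):
    # query_emb, positive_emb: (batch_size, dim) embeddings of matching pairs.
    # Every other item in the batch acts as a negative for a given query.
    query_emb = F.normalize(query_emb, dim=-1)
    positive_emb = F.normalize(positive_emb, dim=-1)
    # (batch, batch) cosine similarity matrix; diagonal entries are the true pairs
    logits = query_emb @ positive_emb.T / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)
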
574 views · nne_controll_bot, 08:00
2020-12-15 16:29:23 For those who reacted to the previous post: I wrote a Twitter thread on how I am building Qdrant with Rust. It is on Twitter because the development is still in progress, and I wanted to share some interesting details without a dedicated blog post.

Some topics of the thread:

- How is Qdrant useful?
- How does it store data and build indexes?
- How does it keep data always available for search?
- How do I auto-generate documentation in Rust?

Your comments are welcome here and on Twitter!
2.8K views · nne_controll_bot, 13:29
2020-11-02 11:00:23 Qdrant - vector search engine

Since my last post about filterable HNSW,
I have been working on a new search engine to give this idea a proper implementation.
And I have finally published an alpha version of the engine, called Qdrant.

Development is still at an early stage, but the engine already provides ElasticSearch-like conditions must, should and must_not, which you can combine to represent an arbitrary condition.

Use-cases

You might need Qdrant in cases where a vector alone cannot fully represent the object you are searching for.
For example, a neural network might model the visual appearance of a piece of clothing, but it can hardly account for its stock availability.

With Qdrant you can assign this feature as a payload and use it for filtering.

Among the possible applications:

- Semantic search with facets
- Semantic search on a map
- Matching engines - e.g. Candidates and job positions
- Personal recommendations

Technical highlights

Qdrant is written in Rust, a language designed for systems programming: building services that are used by other services.
Rust is comparable in speed to C, but it also protects against data races, which is crucial for database applications.
React with the crab if you are interested in more Rust-specific details of the project.

The engine uses write-ahead logging: once it has confirmed an update, it won't lose data even in case of a power shutdown.

You can already try it with the Docker image:

docker pull generall/qdrant

A simple search request could look like this:

POST /test_collection/points/search
{
    "filter": {
        "should": [
            {
                "match": {
                    "key": "city",
                    "keyword": "London"
                }
            }
        ]
    },
    "vector": [0.2, 0.1, 0.9, 0.7],
    "top": 3
}

All APIs are documented with OpenAPI 3.0.
This provides an easy way to generate a client for any programming language.
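
For a quick test without generating a client, the same search request from above can be sent with plain HTTP. A minimal sketch, assuming the Docker container exposes the REST API on localhost:6333 and that the collection test_collection already exists with 4-dimensional vectors:

import requests

response = requests.post(
    "http://localhost:6333/test_collection/points/search",
    json={
        "filter": {
            "should": [
                {"match": {"key": "city", "keyword": "London"}}
            ]
        },
        "vector": [0.2, 0.1, 0.9, 0.7],
        "top": 3,
    },
)
# prints the ids and scores of the top-3 matching points
print(response.json())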

I would highly appreciate any feedback on the project, and I will be grateful if you give it a star on GitHub.
3.6K views · nne_controll_bot, 08:00
2020-09-16 15:59:05 Silero Speech-To-Text Models V1 Released

We are proud to announce that we have released our high-quality (i.e. on par with premium Google models) speech-to-text models for the following languages:

- English
- German
- Spanish

Why this is a big deal:

- STT research is typically focused on huge compute budgets
- Pre-trained models and recipes did not generalize well, were difficult to use even as-is, and relied on obsolete tech
- Until now, the STT community lacked easy-to-use, high-quality, production-grade STT models

How we solve it:

- We publish a set of pre-trained high-quality models for popular languages
- Our models are embarrassingly easy to use (see the sketch below)
- Our models are fast and can run on commodity hardware
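
As an illustration of "embarrassingly easy", here is a rough usage sketch via torch.hub, written from memory of the repository README; the entrypoint name, the returned utilities, and the sample file name are assumptions, so check the repo for the authoritative snippet.

import torch

device = torch.device('cpu')
# Load the English model, decoder and helper utilities from torch.hub
model, decoder, utils = torch.hub.load(
    repo_or_dir='snakers4/silero-models',
    model='silero_stt',
    language='en',
    device=device,
)
read_batch, split_into_batches, read_audio, prepare_model_input = utils

# Transcribe a local wav file (hypothetical file name)
batch = read_batch(['speech_sample.wav'])
output = model(prepare_model_input(batch, device=device))
for example in output:
    print(decoder(example.cpu()))
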

Even if you do not work with STT, please give us a star / share!

Links

- https://github.com/snakers4/silero-models
3.4K views · Andrey, 12:59
2020-08-31 18:16:03 ONNX and deployment libraries

Libraries like AllenNLP are great for model training and prototyping: they contain functions and helpers for almost any practical and theoretical task.
Some of these libraries even have functions for model serving, but they still might be a poor choice for serving a model in production.

The very same functionality that makes them convenient for development makes them hard to support in a production environment.
A Docker image with only AllenNLP installed takes a whole 1.9 GB compressed! It could hardly be called a micro-service.

In TensorFlow this problem was solved by saving computational graphs in a dedicated serialization format, independent of the training and preprocessing libraries.
This serialized graph can later be served by TensorFlow Serving.
A good solution, but not a universal one: there are plenty of frameworks, like PyTorch, that do not follow Google's standard.

Now, this is the part where ONNX appears: an open standard for NN representation.
It defines a common set of operators - the building blocks of machine learning and deep learning models.
Not every valid Python/PyTorch model can be converted into an ONNX representation: only a subset of operations is also valid in ONNX.

Unfortunately, the default implementation of most AllenNLP models does not fit this subset:

- An AllenNLP model handles a vast variety of corner cases and conditions that are essentially Python functions.
ONNX does not support arbitrary code execution: an ONNX model should consist of a computation graph only
- AllenNLP models take care of text preprocessing: they operate with dictionaries and tokenization. ONNX does not support these operations.

Luckily, in most cases an AllenNLP model can be used as just a wrapper around the actual model implementation.
For this, you need an AllenNLP model which handles the loss function, does the preprocessing, and interacts with the model trainer.
And also an internal class for the "pure" model, which implements the standard nn.Module interface.
It should use tensors as input and output.
Internally it should construct a persistent computational graph.

This internal model can now be converted into an ONNX model and saved independently.

With ONNX, you can use whatever tool you need to serve or explore your model.
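
To make the last step concrete, here is a hedged sketch with a stand-in inner model (InnerModel is a made-up toy class, not an AllenNLP model) exported via torch.onnx.export and then served with onnxruntime:

import numpy as np
import onnxruntime as ort
import torch
import torch.nn as nn

class InnerModel(nn.Module):
    # Stand-in for the "pure" inner model: tensors in, tensors out,
    # no preprocessing and no data-dependent Python control flow.
    def __init__(self, vocab_size=1000, dim=32, n_labels=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.classifier = nn.Linear(dim, n_labels)

    def forward(self, token_ids):
        return self.classifier(self.embed(token_ids).mean(dim=1))

model = InnerModel().eval()
dummy_input = torch.zeros(1, 16, dtype=torch.long)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["token_ids"],
    output_names=["logits"],
    dynamic_axes={"token_ids": {0: "batch", 1: "sequence"}},
)

# The exported graph can now be served without the training libraries installed
session = ort.InferenceSession("model.onnx")
logits = session.run(None, {"token_ids": np.zeros((1, 16), dtype=np.int64)})[0]
print(logits.shape)  # (1, 5)
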
3.3K views · nne_controll_bot, 15:16
2020-02-03 13:01:09 Filterable HNSW - part 2

In the previous article on filtering when searching for nearest neighbors, we discussed the theoretical background.
This time I am going to present a C++ implementation with Python bindings.

As a base implementation I took hnswlib, a stand-alone header-only implementation of HNSW.

With the new implementation it is now possible to assign an arbitrary number of tags to any point with simple code:

# ids - list of point ids
# tag - tag id
hnsw.add_tags(ids, tag)

A group of points under the same tag can be searched separately from the others:

query_vector = ...
tag_to_search_in = 42
# Search among points with this tag
condition = [[(False, tag_to_search_in)]]
labels, dist = hnsw.knn_query(query_vector, k=10, conditions=condition)

These groups can also be combined using boolean expressions. For example, (A | !B) & C is represented as [[(0, A), (1, B)], [(0, C)]], where A, B, and C are clauses that are true if the respective tag is assigned to a point.
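
For illustration, this is how such a combined condition could be passed to the same knn_query call as above (A, B and C are hypothetical tag ids; the boolean flag plays the role of the 0/1 negation marker):

# Hypothetical tag ids for the clauses A, B and C
A, B, C = 1, 2, 3

# (A | !B) & C: outer list is AND over clauses, inner list is OR within a clause,
# True marks a negated tag
condition = [[(False, A), (True, B)], [(False, C)]]
labels, dist = hnsw.knn_query(query_vector, k=10, conditions=condition)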

If a group is large enough (>> 1/M fraction of all points), knn_query should work fine. But if a group is smaller, you may need to build additional connections in the HNSW graph for it:

hnsw.index_tagged(tag=42, m=8)

Based on HNSW with categorical filtering, it is possible to build a tool that can search within a specified geo-region only.

Find the full version of this article, with more examples and explanations, in my blog.
5.7K views · nne_controll_bot, 10:01
2020-01-16 11:00:20 Tools for setting up a new ML project.

I compiled a list of tools I find worth trying if you are going to set up a new ML project.
This list is not intended to be an exhaustive overview, and it does not include any ML frameworks or libraries.
It is focused on auxiliary tools that can make development easier and experiments reproducible.
Some of these tools I have used in real projects; others I only tried on a toy example but found them interesting enough to use in the future.
4.6K views · nne_controll_bot, 08:00
2019-12-16 11:01:02 Recently I found an interesting repository on GitHub.
Actually, it is not a single repository but a whole project created by CAIR, a research centre at the University of Agder.
It includes a bunch of articles and different implementations of a novel concept called the Tsetlin Machine.
The author claims that this approach can replace neural networks and is faster and more accurate.
The work itself looks quite marginal: it is not recent, yet it has not become widely used.
It is noticeable that it stays alive only thanks to the enthusiasm of a few people.

From public sources, I found only an overselling press release from their own university and a skeptical thread on Reddit. As rightly noted in the latter, there are quite a few red flags and imperfections in this work, including excessive self-citation, unconvincing MNIST experiments, and a poorly written article that is difficult to read.

However, I still decided to spend a little time reading about the concept: using the states of finite automata with linear tactics as the trainable parameters of the model.
The states define whether a signal is used in a logical clause or not.
The model is trained with two types of feedback: the first fights false-negative activations of a clause, and the second fights false-positive ones.
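
To make the idea more tangible, here is a toy sketch of a single two-action Tsetlin automaton with linear tactics (my own simplification for illustration, not the author's code and far from a full Tsetlin Machine):

import random

class TsetlinAutomaton:
    # States 1..n select action 0 ("exclude the signal from the clause"),
    # states n+1..2n select action 1 ("include the signal").
    def __init__(self, n=100):
        self.n = n
        self.state = random.choice([n, n + 1])  # start near the decision boundary

    def action(self):
        return int(self.state > self.n)

    def reward(self):
        # Reinforce the current action by moving deeper into its half of the states
        if self.action() == 1:
            self.state = min(self.state + 1, 2 * self.n)
        else:
            self.state = max(self.state - 1, 1)

    def penalize(self):
        # Push the state one step towards the opposite action
        if self.action() == 1:
            self.state -= 1
        else:
            self.state += 1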

The author shows benchmarks of the model on a couple of different tasks but pays little attention to the main problem: no method is provided to make the Tsetlin Machine truly deep.
Instead, he suggests training it layer by layer, the way Hinton trained a Deep Belief Network.
This restriction won't let the Tsetlin Machine compete with neural networks in any area.

On the other hand, there are no theoretical limitations preventing a discrete feedback-propagation mechanism from existing.
I am going to conduct some experiments with this concept and will keep you posted if something works out.
5.4K views · nne_controll_bot, 08:01
2019-12-01 11:01:13 Filterable approximate nearest neighbors search

I did a little research on how to search in a vector space when you also need to take additional restrictions into account: search within a subset, or filter by a numerical criterion or geo-location.
The article turned out to be too large for the telegram channel format, so I’ll leave only the essence here.
The full article is available on my updated blog.

The main point is that with minor modifications of the state-of-the-art HNSW algorithm we can cover a variety of filtering cases.
The modifications add edges to the navigation graph to ensure that it stays connected after some of its nodes are filtered out.

Looking at filtering by category, we can see that adding edges within particular small categories solves the connectivity problem for them.
Large categories sustain their connectivity due to percolation theory.
Category filtering can be extended relatively easily to numerical range filtering and a geospatial index.

In the full version of this article I also present a couple of experiments to validate this approach.
It also contains some considerations on how to avoid possible failures.
Take a look at it!
4.4K views · nne_controll_bot, 08:01
2019-11-10 20:56:03 Partially trainable embeddings

Understanding the meaning of natural language requires a huge amount of information to be arranged by a neural network.
And the largest part of this information is usually stored in word embeddings.

Typically, labeled data from a particular task is not enough to train so many parameters. Thus, word embeddings are trained separately on large general-purpose corpora.

But there are some cases when we want to be able to train word embeddings in our custom task, for example:

- We have a specific domain with a non-standard terminology or sentence structure
- We want to use additional markup in our task

In these cases, we need to update a small number of weights responsible for new words and meanings. At the same time, we can't update the pre-trained embeddings, because that would lead to very quick overfitting.

To deal with this problem, partially trainable embeddings were used in this project.
The idea is to concatenate fixed pre-trained embeddings with additional small trainable embeddings. It is also useful to add a linear layer right after the concatenation so the embeddings can interact during training.
Changing the size of the additional embedding gives control over the number of parameters and, as a result, helps prevent overfitting.
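
As a minimal PyTorch sketch of the idea (my own illustration; the class and parameter names are made up): concatenate a frozen pre-trained embedding with a small trainable one and project the result.

import torch
import torch.nn as nn

class PartiallyTrainableEmbedding(nn.Module):
    # Frozen pre-trained embeddings concatenated with a small trainable part,
    # followed by a linear layer that lets the two parts interact.
    def __init__(self, pretrained_weights, extra_dim=20, output_dim=None):
        super().__init__()
        vocab_size, pretrained_dim = pretrained_weights.shape
        self.fixed = nn.Embedding.from_pretrained(pretrained_weights, freeze=True)
        self.trainable = nn.Embedding(vocab_size, extra_dim)
        self.project = nn.Linear(pretrained_dim + extra_dim,
                                 output_dim or pretrained_dim)

    def forward(self, token_ids):
        combined = torch.cat([self.fixed(token_ids), self.trainable(token_ids)], dim=-1)
        return self.project(combined)

# Example: 10k-word vocabulary with 300-dimensional pre-trained vectors
embedder = PartiallyTrainableEmbedding(torch.randn(10_000, 300), extra_dim=20)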

Another good thing is that AllenNLP allows implementing this technique without a single line of code, with just a simple configuration:

{
    "token_embedders": {
        "tokens-ngram": {
            "type": "fasttext-embedder",
            "model_path": "./data/fasttext_embedding.model",
            "trainable": false
        },
        "tokens": {
            "type": "embedding",
            "embedding_dim": 20,
            "trainable": true
        }
    }
}
3.8K views · nne_controll_bot, 17:56