
Big Data Science

Channel address: @bdscience
Categories: Technologies
Language: English
Subscribers: 1.44K
Description from channel

Big Data Science channel gathers together all interesting facts about Data Science.
For cooperation: a.chernobrovov@gmail.com
💼 — https://t.me/bds_job — channel about Data Science jobs and career
💻 — https://t.me/bdscience_ru — Big Data Science [RU]

Ratings & Reviews

1.67 (3 reviews)

Reviews can be left only by registered users. All reviews are moderated by admins.

5 stars: 0
4 stars: 0
3 stars: 1
2 stars: 0
1 star: 2


The latest messages

2022-06-09 12:46:27 Top 10 Data Science and ML conferences around the world in June 2022:
1. Jun 13, Machine Learning Methods in Visualisation for Big Data 2022. Rome, Italy. https://events.tuni.fi/mlvis/mlvis2022/
2. Jun 14-15, Chief Data & Analytics Officers, Spring. San Francisco, CA, USA. https://cdao-spring.coriniumintelligence.com/
3. Jun 15-16, The AI Summit London. London, UK. https://london.theaisummit.com/
4. Jun 20-22, CLA2022: The 16th International Conference on Concept Lattices and Their Applications. Tallinn, Estonia. https://cs.ttu.ee/events/cla2022/
5. Jun 19-24, Machine Learning Week, Predictive Analytics World conferences. Las Vegas, NV, USA. https://www.predictiveanalyticsworld.com/machinelearningweek/
6. Jun 20-21, Deep Learning World Europe. Munich, Germany. https://deeplearningworld.de/
7. Jun 21, Data Engineering Show On The Road. London, UK. https://hi.firebolt.io/lp/the-data-engineering-show-on-the-road-london
8. Jun 22, Data Stack Summit 2022. Virtual. https://datastacksummit.com/
9. Jun 28-29, Future.AI. Virtual. https://events.altair.com/future-ai/
10. Jun 29, Designing Flexibility to Address Uncertainty in the Supply Chain with AI. Chicago, IL, USA. https://www.luc.edu/leadershiphub/centers/aibusinessconsortium/upcomingevents/archive/designing-flexible-supply-chains-with-ai.shtml
219 views · 09:46
2022-06-08 06:53:34 Developing Python apps according to the twelve-factor (SaaS) methodology with the python-dotenv library
ML modelers and data analysts don't always write code like professional programmers. To improve code quality, use the twelve-factor app, a simple methodology for developing web applications delivered as SaaS. Apps that follow it:
• use declarative formats for setup automation, minimizing the time and cost for new developers joining the project;
• have a clean contract with the underlying operating system, offering maximum portability between execution environments;
• are suitable for deployment on modern cloud platforms, obviating the need for server and systems administration;
• minimize divergence between development and production, enabling continuous deployment for maximum agility;
• can scale up without significant changes to tooling, architecture, or development practices.

These ideas are captured in twelve factors:
1. Codebase: one codebase tracked in version control, many deploys
2. Dependencies: explicitly declare and isolate dependencies
3. Config: store config in the environment
4. Backing services: treat backing services as attached resources
5. Build, release, run: strictly separate the build and run stages
6. Processes: execute the app as one or more stateless processes
7. Port binding: export services via port binding
8. Concurrency: scale out via the process model
9. Disposability: maximize robustness with fast startup and graceful shutdown
10. Dev/prod parity: keep development, staging, and production as similar as possible
11. Logs: treat logs as event streams
12. Admin processes: run admin/management tasks as one-off processes

For a Python program, the config-in-the-environment factor is easy to implement with the open-source python-dotenv library. It reads key-value pairs from a .env file and can set them as environment variables. If your application takes its configuration from environment variables, launching it in development is not very practical, because the developer has to set those variables by hand. Adding python-dotenv simplifies this: the library loads the settings from the .env file, while the application remains configurable through the environment.
You can also load the configuration without modifying the environment, parse the configuration as a stream, and load .env files in IPython. The tool also has a CLI that lets you manipulate the .env file without opening it manually.
https://github.com/theskumar/python-dotenv
174 views · 03:53
2022-06-06 08:06:54 AI + quantum computing = quantum memristor
A memristor, or memory resistor, is a kind of building block for electronic circuits; the first physical implementations appeared roughly a decade ago. It is an electrical switch that remembers its state (on or off) after the power is removed. In this it resembles synapses, the connections between neurons in the human brain, whose electrical conductance strengthens or weakens depending on how much charge has passed through them in the past.
Memristors can act as artificial neurons capable of both computing and storing data. Neuromorphic (brain-like) computers built from memristors therefore work well with artificial neural networks, i.e. ML models.
Unlike classical computers, which switch transistors on or off to represent data as 1s and 0s, quantum computers use qubits. Qubits can exist in a superposition, combining 1 and 0 at the same time. The more qubits are entangled together in a quantum computer, the more its computing power can grow, potentially exponentially.
The quantum memristor operates on a flow of single photons in superposition, where each photon can travel along paths laser-written into glass. One branch of this integrated photonic circuit is used to measure the flow of photons, and this data, through an electronic feedback circuit, controls the transmission along the other path. As a result, the device behaves like a memristor.
Normally, memristive behavior and quantum effects do not combine: memristors work by effectively measuring their internal state, while quantum effects are notoriously fragile in the face of external interference such as measurement. The researchers overcame this contradiction by engineering the interactions inside the device so that it is measured strongly enough to be memristive, yet weakly enough to preserve its quantum behavior.
The advantage of using a quantum memristor in quantum ML over conventional quantum circuits is that a memristor, unlike other quantum components, has memory. Next steps include connecting several memristors together, increasing the number of photons in each memristor, and increasing the number of measurements.
https://spectrum.ieee.org/quantum-memristor
172 views · 05:06
2022-06-03 08:52:39
#test
What statistical test is suitable for checking the hypothesis about differences between small matched (dependent) samples?
Anonymous Quiz
35%
Wilcoxon signed-rank test
14%
Student's t-test
24%
Mann–Whitney U test
27%
Fisher's combined probability test
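For small matched samples, the Wilcoxon signed-rank test is the standard non-parametric choice. As an illustration, here is what its W statistic computes, sketched in plain Python on made-up paired data (in practice use scipy.stats.wilcoxon, which also returns a p-value):

```python
# What the Wilcoxon signed-rank statistic computes, sketched in plain Python
# on made-up paired data. In practice use scipy.stats.wilcoxon, which also
# returns a p-value.
def wilcoxon_w(before, after):
    # signed differences; zero differences are discarded by convention
    diffs = [b - a for b, a in zip(before, after) if b != a]
    # rank indices by absolute difference, averaging ranks over ties
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1          # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)             # small W is evidence against H0

before = [12.1, 11.4, 13.0, 12.8, 11.9]     # e.g. a metric before treatment
after  = [11.2, 11.0, 12.1, 12.9, 11.1]     # same subjects after treatment
print(wilcoxon_w(before, after))            # prints: 1.0
```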
49 voters · 163 views · 05:52
2022-06-01 07:21:15 How to evaluate changes in data models quickly? Use Datafold!
You can identify and evaluate changes between different versions of the same data model by writing your own script or by using the data lineage features built into dbt. But for the average business user or a novice data analyst this is too much hassle. In such cases there is Datafold (https://www.datafold.com/), a cloud product with useful features including data diffing, data quality assurance, data monitoring, and alerting. Its column statistics help evaluate changes in real conditions, in particular by comparing datasets column by column and value by value. For large projects there is integration with dbt. Datafold works with a direct connection to the customer's data warehouse and uses GitHub to compare changes and to apply changes to dbt models that improve data quality.
In practice, Datafold can be used by product analysts to set up A/B tests of product features, by data engineers for regression testing of ETL pipelines, and by users of BI systems for reporting.
Use case: https://medium.com/geekculture/what-if-you-could-compare-changes-in-your-data-models-now-you-can-75f039580d08
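Datafold's own diffing runs inside the warehouse; purely to illustrate the idea of comparing two versions of a dataset by key, column by column, here is a toy stand-in in plain Python (none of this is Datafold's API, and the table contents are invented):

```python
# Toy illustration of the "data diff" idea (this is NOT Datafold's API):
# compare two versions of a dataset by primary key, column by column.
def data_diff(old_rows, new_rows, key="id"):
    old = {r[key]: r for r in old_rows}
    new = {r[key]: r for r in new_rows}
    changed = {}
    for k in old.keys() & new.keys():
        cols = {c: (old[k][c], new[k][c]) for c in old[k] if old[k][c] != new[k].get(c)}
        if cols:
            changed[k] = cols
    return {
        "added": sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "changed": changed,
    }

v1 = [{"id": 1, "price": 10}, {"id": 2, "price": 20}]   # old model output
v2 = [{"id": 1, "price": 12}, {"id": 3, "price": 30}]   # new model output
print(data_diff(v1, v2))
# {'added': [3], 'removed': [2], 'changed': {1: {'price': (10, 12)}}}
```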
239 views · 04:21
2022-05-30 16:51:36 Live monitoring of ML and software metrics on one platform
When a machine learning system is deployed to production, it is important to continuously monitor both the data and the models. Even if the ML model itself stays the same, the nature of the data can change, which can significantly affect the user experience. There are many software monitoring platforms on the market that collect various system and business metrics, display the most important monitoring data on dashboards, and generate alerts: Grafana, Datadog, Graphite, etc.
There are also tools for monitoring ML systems, such as Neptune, Amazon SageMaker Model Monitor, Censius, and other MLOps environments. But it is possible to combine monitoring of a machine learning system with classical software monitoring on the same platform. This can be done with New Relic, a telemetry platform for remote monitoring of mobile and web applications that lets you collect, query, and be alerted on telemetry data from any source in one place. Thanks to a large number of open-source integrations, New Relic can work with many data sources and sinks.
Sending data from an ML system to New Relic is implemented with the open-source ml-performance-monitoring Python library, which is available on GitHub (https://github.com/newrelic-experimental/ml-performance-monitoring).
https://towardsdatascience.com/monitor-easy-mlops-model-monitoring-with-new-relic-ef2a9b611bd1
205 views · 13:51
2022-05-27 09:58:10
#test
How do Shuffle operations affect the execution speed of a distributed program?
Anonymous Quiz
33%
increase
43%
decrease
24%
no effect
46 voters · 165 views · 06:58
2022-05-25 06:31:03 MEGAscaling with quantum ML
Theoretically, quantum computers could be more powerful than any conventional computer, especially at finding the prime factors of numbers, the mathematical basis of the modern encryption that protects banking and other sensitive data. The more of the components known as qubits are entangled with each other in a quantum computer (a state in which particles instantly influence one another no matter how far apart they are), the more its processing power can grow, potentially exponentially.
One potential application of quantum ML is simulating quantum systems, such as chemical reactions, to create new drugs. But the average performance of an ML algorithm depends on how much data it has, and the amount of data ultimately limits what machine learning can achieve. To simulate a quantum system, the amount of training data a quantum computer might need could therefore grow exponentially as the system being modeled gets larger, potentially eliminating the advantage of quantum computing over classical computing.
The scientists proposed linking additional qubits to the quantum system that the quantum computer is to model. This additional set of "auxiliary" qubits helps the quantum ML circuit interact simultaneously with many quantum states in the training data, so the scheme can work even with a relatively small number of auxiliary qubits. In practice this idea is still quite difficult to implement, but it could be tested in experiments at CERN, the world's largest particle physics laboratory.
https://spectrum.ieee.org/quantum-machine-learning
195 views · 03:31
2022-05-23 06:10:19 Estimating the information content of data: a new method from MIT
Information and data are different things, and not all data is valuable. How much information can be extracted from a fragment of data? This question was first posed in the 1948 paper "A Mathematical Theory of Communication" by MIT Professor Emeritus Claude Shannon. One breakthrough result is Shannon's notion of entropy, which quantifies the amount of information inherent in any random object, including the random variables that model real-world data. Shannon's results laid the foundation of information theory and modern telecommunications, and the concept of entropy has also found its way into computer science and machine learning.
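Shannon entropy for a discrete random variable, H = -sum over i of p_i * log2(p_i), can be computed directly; a quick sketch in Python:

```python
# Shannon entropy H(X) = -sum_i p_i * log2(p_i), in bits, for a discrete
# distribution given as a list of probabilities.
import math

def shannon_entropy(probs):
    # terms with p == 0 contribute nothing (the limit of p*log p as p -> 0 is 0)
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))      # a fair coin: exactly 1.0 bit
print(shannon_entropy([0.25] * 4))      # 4 equally likely outcomes: 2.0 bits
```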
Computing entropy with Shannon's formula can quickly become intractable: it requires exactly evaluating the probability model of the data over all possible ways the data could have arisen within the probabilistic framework. The problem shows up, for example, in medical diagnosis, where a positive test result may be explained by hundreds of interacting health conditions, all of them unknown. With only 10 unknowns, the data already has over 1,000 possible explanations; with many hundreds, there are more explanations than atoms in the universe, which makes exact entropy computation utterly intractable.
MIT researchers have developed a new method for estimating good approximations to many information quantities, such as Shannon entropy, using probabilistic inference. The work is presented in a paper at AISTATS 2022. The key insight is that, instead of enumerating all explanations, one can use probabilistic inference to first infer which explanations are probable and then use them to construct high-quality entropy estimates. This inference-based approach has been shown to be much faster and more accurate than previous approaches.
Estimating entropy and information in a probabilistic model is fundamentally hard because it often requires solving a high-dimensional integration problem. Past work provided estimates for some special cases, but the new estimators, called EEVI (entropy estimation via inference), are the first approach that can give accurate upper and lower bounds on a wide range of information-theoretic quantities: one number guaranteed to be below the true value and one guaranteed to be above it. The gap between the bounds shows how confident we should be in the estimate, and spending more computation narrows the gap, "squeezing" the true value between the bounds. The method can also assess how informative different variables in a model are about each other, which is particularly useful for probabilistic reasoning in applications such as medical diagnostics.
https://news.mit.edu/2022/estimating-informativeness-data-0425
215 views · 03:10
2022-05-20 18:02:19
#test
The key difference between window and aggregate functions is
Anonymous Quiz
2%
Aggregate functions operate on a set of values to return a range of values
0%
They are the same, but the window functions are more difficult to write
69%
Window functions operate on a set of values to return a range of values
29%
Aggregate functions return single value for each row from the underlying query
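To see the difference concretely, here is a small demo with Python's built-in sqlite3 module: an aggregate collapses each group to one row, while a window function attaches the group result to every row. The table and column names are invented for the demo:

```python
# Demo of aggregate vs. window functions using Python's built-in sqlite3
# (window functions require SQLite >= 3.25). Table/column names are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales(dept TEXT, amount INT);
    INSERT INTO sales VALUES ('a', 10), ('a', 30), ('b', 20);
""")

# Aggregate: collapses each group to a single row.
agg = con.execute(
    "SELECT dept, SUM(amount) FROM sales GROUP BY dept ORDER BY dept"
).fetchall()
print(agg)  # [('a', 40), ('b', 20)]

# Window: the per-group total is attached to every row.
win = con.execute(
    "SELECT dept, amount, SUM(amount) OVER (PARTITION BY dept) "
    "FROM sales ORDER BY dept, amount"
).fetchall()
print(win)  # [('a', 10, 40), ('a', 30, 40), ('b', 20, 20)]
```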
42 voters · 145 views · 15:02