
Artificial Intelligence

Channel address: @artificial_intelligence_in
Categories: Technologies
Language: English
Subscribers: 67.79K
Description from channel

AI will not replace you, but a person using AI will 🚀
I make Artificial Intelligence easy for everyone so you can start with zero effort.
🚀Artificial Intelligence
🚀Machine Learning
🚀Deep Learning
🚀Data Science
🚀Python R
🚀AR and VR
Dm @Aiindian

Ratings & Reviews

Average rating: 3.50 (2 reviews). Reviews can be left only by registered users; all reviews are moderated by admins.

5 stars: 0
4 stars: 1
3 stars: 1
2 stars: 0
1 star: 0


Latest messages

2023-11-18 03:19:14
Now, here is something unexpected:

OpenAI's Sam Altman exits as CEO because the "board no longer has confidence" in his ability to lead.
https://www.bloomberg.com/news/articles/2023-11-17/sam-altman-to-depart-openai-mira-murati-will-be-interim-ceo
13.6K views · edited · 00:19
2023-11-14 17:31:18
Copilot just works

You need zero effort to benefit from it. You install it and forget about it.

Copilot is one example of what I call "magic technology": it's completely invisible until you don't have it. Then you realize how much it helps.

Privacy concerns aside, every developer can increase their productivity by 40-50% by installing Copilot.

At $10 a month, this is the most ridiculously cheap investment one can make.

One reason I love Github Copilot is I rarely spend time purely prompting it and describing my structure/software/goals. It just sort of knows and offers suggestions while I code.

One thing I absolutely hate about ChatGPT is describing, outlining, and giving a history of the problem, maybe even uploading docs... and then waiting for a solution.

This is just about writing code, but IMO it applies to anything you might use ChatGPT/LLMs for. We need fewer prompt-engineering and agent gimmicks, and more deeply integrated tools like Copilot is for writing code.
13.5K views · edited · 14:31
2023-11-10 05:30:53
NVIDIA just made Pandas 150x faster with zero code changes.

All you have to do is:
%load_ext cudf.pandas
import pandas as pd


Their RAPIDS cuDF library automatically detects whether you're running on a GPU or a CPU and accelerates your processing accordingly.

You can try it in their Colab notebook.

GitHub repo: https://github.com/rapidsai/cudf
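The appeal of the accelerator is that ordinary pandas code doesn't change. Here's a minimal sketch with a made-up toy DataFrame: if `%load_ext cudf.pandas` (or `python -m cudf.pandas app.py` outside a notebook) has been run on a GPU machine, these same calls are transparently rerouted to cuDF; otherwise they run on regular CPU pandas.

```python
import pandas as pd

# Toy data for illustration -- on large frames, groupby/aggregate
# is exactly the kind of operation the GPU backend speeds up.
df = pd.DataFrame({
    "city": ["Delhi", "Mumbai", "Delhi", "Pune"],
    "sales": [100, 250, 300, 50],
})

totals = df.groupby("city")["sales"].sum().sort_values(ascending=False)
print(totals.to_dict())  # {'Delhi': 400, 'Mumbai': 250, 'Pune': 50}
```

Nothing in the script mentions the GPU at all, which is the whole point of the "zero code changes" claim.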
13.9K views · edited · 02:30
2023-11-08 16:37:00
It takes $2-3M to fine-tune a custom model using OpenAI!

OpenAI's GPT-4 fine-tuning service starts at "$2-3 million" and requires "billions of tokens at minimum". It sounds terrifying, but it could actually be a good deal for medium-sized companies. Think about how many resources you need to set up the pipeline in-house:

- Pay big salaries to top AI engineers ($300k+/yr). At least 5 of them.
- Pay eye-watering cloud bills or buy GPUs and rent facilities.
- Set up training infrastructure - really good distributed systems engineer required.
- Iterate lots of times on open-source models. You won't get it right in the first few tries.
- Scale up deployment pipelines.
- Monitor reliability.
- Worry about efficient serving.
- And even after all this: your finetuned Llama-2 will still trail far, far behind a finetuned GPT-4.

The amount of work that goes into a reliable LLM in production is mind-boggling.
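The post's claim is easy to sanity-check with back-of-envelope arithmetic. Every number below except the quoted salary and price range is an assumption for illustration, not a figure from the post:

```python
# Rough in-house cost, using the post's own figures where given.
engineers = 5
salary = 300_000                 # $/yr per top AI engineer (from the post)
payroll = engineers * salary     # $1.5M/yr before any compute

gpu_and_cloud = 1_000_000        # ASSUMED annual compute/facilities spend
in_house_annual = payroll + gpu_and_cloud

openai_finetune = 2_500_000      # midpoint of the quoted $2-3M range

print(f"in-house: ${in_house_annual:,}/yr vs one-off ${openai_finetune:,}")
```

Under these assumptions, a single year of an in-house team already lands in the same $2-3M range, before accounting for iteration risk or the quality gap versus a fine-tuned GPT-4.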
12.8K views · edited · 13:37
2023-11-06 10:12:21
Early access for grok.x.ai is out!

Elon Musk has introduced Grok, a direct rival to ChatGPT.

Grok is an AI model that is "intended to answer almost anything, and far harder, even suggest what questions to ask. Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don't use it if you hate humour".

It is already beating GPT-3.5 after less than two months of training.

Gonna be interesting.

Go sign up and join the waitlist: https://grok.x.ai/
13.6K views · edited · 07:12
2023-11-04 17:31:56
Generative AI for Beginners
Another great effort by Microsoft on AI education. This one contains a series of lessons on generative AI, including an introduction to LLMs, prompt engineering fundamentals, building text generation/chat applications, and more.
13.3K views · 14:31
2023-11-01 16:56:01
PyTorch or TensorFlow?

You talk to people online, and everyone is a die-hard PyTorch fan. You talk to people offline, and everyone is a die-hard TensorFlow fan.

Many people ask me the same question: which of these frameworks should you learn?

The research community rallied around PyTorch for a good reason: it's much easier to understand than TensorFlow. PyTorch code feels and looks like Python code. TensorFlow 2 is much better, but the first version left a bad aftertaste.

TensorFlow, however, built a more extensive ecosystem for productizing Machine Learning systems. What they lacked in clarity, they made up with tooling. They also added Keras, which improved the developer experience by 100x.

If you are starting today, which one should you learn?

I usually recommend that people pick the one everyone else around them uses. If you start working for a company that uses TensorFlow, learn TensorFlow. If you join a research lab that uses PyTorch, learn PyTorch.

You can also decide based on the material you are using. For example, these are my three favorite technical books (in no particular order):

1. Deep Learning with Python by François Chollet

2. Hands-on Machine Learning with Scikit-Learn, Keras, and Tensorflow by Aurélien Géron

3. Machine Learning with PyTorch and Scikit-Learn by Sebastian Raschka, PhD

The first two use TensorFlow and Keras. The last one uses PyTorch. If you have any of these books, learn the framework they use.

Ultimately, you may need both, and switching is easier than you think. The fundamental principles of building machine learning don't change. Everything else is a stylistic choice.

By the way, Keras 3.0 is coming out in November, and it's a big deal. You can write your code in Keras and swap the backend between TensorFlow, PyTorch, and JAX without any changes. You can also combine different frameworks in the same codebase.

Keras 3.0 will be a standalone library, and you won't need TensorFlow anymore. I'm a huge fan, and I can't wait for everyone to try it.
13.6K views · 13:56
2023-10-21 16:46:01
An hour spent improving your Software Engineering skills is more productive than an hour spent improving your Machine Learning skills.

I’m not saying ML is unimportant. Obviously, it matters.

I’m just saying I've seen more Data Scientists held back by their ability to deploy working software than by their ability to make proper modeling decisions.

Any Data Scientist can build a model in a Jupyter Notebook.

Fewer can take that model and deploy it in a production setting.

Even fewer can do this in a way that's fault-tolerant, scales well, and allows for easy iteration.

A strong understanding of SWE principles lets you build and deploy your models more efficiently and autonomously, which will better differentiate you from other Data Scientists.

Here's a shortlist of software engineering concepts I've found most relevant to DS and ML:
1) REST and Micro-service architecture
2) Version control & CI/CD
3) Dependency injection
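Of the three, dependency injection is the least familiar to most Data Scientists, so here is a minimal sketch. The class names (`PredictionService`, `MeanModel`) and the trivial "model" are invented for illustration; the point is that the serving code depends only on an interface, so a real model, a new version, or a test stub can be swapped in without touching the endpoint logic.

```python
from typing import Protocol

class Model(Protocol):
    """Anything with a predict() method satisfies this interface."""
    def predict(self, features: list[float]) -> float: ...

class MeanModel:
    """Stand-in 'model' so the sketch runs without any ML libraries."""
    def predict(self, features: list[float]) -> float:
        return sum(features) / len(features)

class PredictionService:
    def __init__(self, model: Model) -> None:
        # The dependency is injected here instead of constructed inside,
        # so the service never hard-codes a specific model class.
        self.model = model

    def handle(self, features: list[float]) -> dict:
        return {"prediction": self.model.predict(features)}

service = PredictionService(MeanModel())  # swap in any Model at wiring time
print(service.handle([1.0, 2.0, 3.0]))   # {'prediction': 2.0}
```

In production you would inject the real model loaded from a registry; in unit tests you inject a stub, and the endpoint logic is tested without GPUs or model files.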
14.2K views · 13:46
2023-10-16 16:13:01
If I had to start learning AI / ML all over again, this is what I would do differently:

𝐈’𝐝 𝐥𝐞𝐚𝐫𝐧 𝐏𝐲𝐓𝐨𝐫𝐜𝐡 𝐨𝐯𝐞𝐫 𝐓𝐞𝐧𝐬𝐨𝐫𝐅𝐥𝐨𝐰

TensorFlow is written in C++ and wrapped in Python, while PyTorch's API is Pythonic through and through, so it feels more natural. Its dynamic computational graph also has a gentler learning curve. Stick to PyTorch to get started; you can learn TF later if you need to.

𝐈'𝐝 𝐥𝐞𝐚𝐫𝐧: 𝐭𝐨𝐨𝐥𝐬 𝐟𝐢𝐫𝐬𝐭, 𝐦𝐚𝐭𝐡 𝐚𝐧𝐝 𝐦𝐨𝐝𝐞𝐥 𝐭𝐡𝐞𝐨𝐫𝐲 𝐬𝐞𝐜𝐨𝐧𝐝

This contradicts almost every course I've ever come across. But I learn by DOING. Abstract concepts don't make sense to me unless I can apply them to real-world problems. So if I had to do it all over again, I'd start by learning how to use the most basic model possible (decision trees) to predict labels in the iris dataset. Then I'd practice on other datasets until I got comfortable, and then advance to logistic regression. Then I'd learn the math and theory behind logistic regression. Rinse and repeat.
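That "most basic model on the iris dataset" starting point fits in a dozen lines with scikit-learn. A minimal sketch (the `max_depth` and split parameters are arbitrary choices, not recommendations):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load the classic 150-sample, 3-class iris dataset.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

# The simplest possible starting model: a shallow decision tree.
clf = DecisionTreeClassifier(max_depth=3, random_state=42)
clf.fit(X_train, y_train)

accuracy = clf.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

Run it, inspect the predictions, try other datasets, and only then go read how the splitting criterion actually works.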


𝐈’𝐝 𝐟𝐨𝐜𝐮𝐬 𝐨𝐧 𝐝𝐞𝐩𝐥𝐨𝐲𝐦𝐞𝐧𝐭 𝐦𝐮𝐜𝐡 𝐬𝐨𝐨𝐧𝐞𝐫
Success in ML is all about thinking through how your models can add value and then working backward. Traditional teaching leads to ML practitioners building models without a clear goal, which fails 95% of the time. By getting to deployment sooner, you train your mind to think closer to the end goal.

𝐈’𝐝 𝐨𝐧𝐥𝐲 𝐮𝐬𝐞 𝐫𝐞𝐦𝐨𝐭𝐞 𝐝𝐞𝐯 𝐞𝐧𝐯𝐢𝐫𝐨𝐧𝐦𝐞𝐧𝐭𝐬

Having a remote dev environment is SO much easier since you don't have to worry about hardware or local package environments. Examples: Google Colab, Amazon SageMaker, PyCharm Remote Development.

I'd start using MLflow immediately

Building a model that will live in a Jupyter Notebook forever is very different from building a model that needs to be deployed and maintained in production. By using a framework like MLflow early on, you'll instill good habits from the start.

I'd wait longer to learn advanced NLP

It's good to start with basic NLP concepts like topic modeling and word2vec, but I'd avoid generative-based NLP until I had a solid foundation in the basics and at least 1-2 years of experience in the field.
14.2K views · edited · 13:13