We donated $25k to the Machine Intelligence Research Institute.

The long-term potential of humanity is unimaginably enormous. We can’t even imagine how many stars our distant descendants will settle, or how diverse their forms, abilities, and feelings will be.

Scientists researching existential risks estimate the chance of that long-term potential being destroyed by AI at 1/10 within the next 100 years. For comparison, they estimate the probability of an existential catastrophe caused by a naturally arising pandemic at 1/10,000, and by climate change at 1/1,000, per century.

AI researchers don’t yet know how to build an artificial general intelligence that won’t destroy humanity; they think that creating AGI in the default way would lead to a catastrophe.
No one has come up with a way to solve the problem of aligning the preferences of a teachable agent with human preferences. It is well understood that this can’t be achieved just by defining some rules or designing an accurate utility function (just as it’s impossible to come up with a wish that a genie couldn’t fulfill in a way you never intended). Researchers have shown that any statically defined utility function won’t be aligned with human preferences. And no one has come up with an algorithm that would create agents doing what we’d want them to do if we were smarter.
Among the approaches that could potentially work are, for example, algorithms that create agents that want to satisfy our preferences but aren’t sure what those preferences are, and so ask us whenever they are unsure what we would want them to do (see the toy sketch after this paragraph). Scientists and researchers don’t yet know how to create such algorithms.
Many experts in this field put the chance of an existential catastrophe caused by AI much higher than 1/10.
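
To make that idea concrete, here is a minimal toy sketch in Python (our illustration, with made-up action names and numbers; it is not an algorithm from the research literature): a “genie” agent that maximizes a fixed proxy utility picks a harmful action, while an agent that is uncertain about the true utility asks the human whenever its hypotheses disagree.

    # Toy illustration only: a fixed-proxy-utility agent vs. an agent
    # that is uncertain about the true utility and asks when unsure.
    ACTIONS = ["make_coffee", "mop_floor", "unplug_smoke_alarm"]

    # The human's true preferences, hidden from both agents.
    TRUE_UTILITY = {"make_coffee": 1.0, "mop_floor": 0.5, "unplug_smoke_alarm": -100.0}

    def fixed_utility_agent():
        # A statically defined proxy ("maximize quiet") accidentally rewards
        # disabling the smoke alarm -- the genie-wish failure mode.
        proxy = {"make_coffee": 0.2, "mop_floor": 0.4, "unplug_smoke_alarm": 1.0}
        return max(ACTIONS, key=proxy.get)

    def uncertain_agent(ask_human):
        # Keeps several hypotheses about the human's utility and defers to
        # the human whenever the hypotheses disagree about the best action.
        hypotheses = [
            {"make_coffee": 1.0, "mop_floor": 0.5, "unplug_smoke_alarm": -100.0},
            {"make_coffee": 0.5, "mop_floor": 1.0, "unplug_smoke_alarm": -100.0},
        ]
        candidates = {max(ACTIONS, key=h.get) for h in hypotheses}
        if len(candidates) > 1:  # unsure what we would want -> ask
            return ask_human(sorted(candidates))
        return candidates.pop()

    # Simulated human oracle: answers with the truly preferred option.
    oracle = lambda options: max(options, key=TRUE_UTILITY.get)
    print("fixed-utility agent picks:", fixed_utility_agent())        # unplug_smoke_alarm
    print("uncertainty-aware agent picks:", uncertain_agent(oracle))  # make_coffee

Real proposals in this direction (agents that learn our preferences from our behavior) are far subtler than this; the point is only that uncertainty about the objective gives an agent a reason to defer to us instead of blindly maximizing.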

The creation of the first AGI will be the event that determines the history of humanity. The long-term potential of our species could be destroyed - but if we succeed, we’ll gain an assistant that will help us solve all the other problems humanity is facing.

Estimates of when AGI will be created vary among researchers. But if humanity knew that aliens would arrive on Earth in a few decades, it would already be preparing. The creation of AGI is a far more important event than an encounter with another intelligent species - and we know, almost certainly, that we have less than a century remaining. We believe that AI Existential Safety is the most important thing people can work on right now.

The Machine Intelligence Research Institute is one of the organizations working to reduce AI-related existential risk. It does research to ensure that smarter-than-human AI will have a positive impact.

You can read more about Artificial General Intelligence in “Human Compatible”, a book by Stuart Russell. He is a professor at UC Berkeley and the author of the most popular textbook on artificial intelligence, used by 1,500 universities around the world.
You can read more about existential threats to humanity in “The Precipice”, a book by Toby Ord. He is a researcher at Oxford University’s Future of Humanity Institute and the founder of Giving What We Can.