You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills



Note: This is not the original article "If We Don't Master A.I., It Will Master Us," written by Yuval Noah Harari, Tristan Harris, and Aza Raskin and published in The New York Times on March 24, 2023.

Instead, this article was created by an AI. (Please read with caution.)

 

Prof. Yuval Noah Harari, historian and philosopher


" Imagine that as you are boarding an airplane, half the engineers who built it tell you there is a 10 percent chance the plane will crash, killing you and everyone else on it. Would you still board? " 

In 2022, over 700 top experts who work on AI were asked about future AI risks. Half of them said there was a 10% or higher chance AI could cause humans to die out or lose control forever. Big tech companies are rushing to put everyone on that risky "plane" with their new AI systems.

Drug companies can't sell new medicines without careful safety tests first. Labs can't release new viruses just to impress investors. In the same way, powerful AI systems like GPT-4 shouldn't affect billions of people's lives faster than we can handle safely. The race to control the market shouldn't decide how fast we use this important technology. We should go at a speed that lets us get it right.

People have worried about AI since the 1950s, but it seemed far away until recently. It's hard for us to understand what new AI tools like GPT-4 can do, and even harder to grasp how fast they're getting better. But most of their key skills come down to one thing: the ability to work with and create language, whether in words, sounds, or pictures.

At first, there was language. It's like the main program for how people live and work together. From language, we get stories, rules, gods, money, art, science, friendships, countries, and even computer instructions. Now that A.I. can use language really well, it can change and control how our society works. By being so good at language, A.I. is getting the key to unlock everything in our world, from banks to holy places.

What would it be like if many of our stories, songs, pictures, laws, plans, and tools were made by smart machines, not people? These machines would know how to use our weaknesses and likes better than we do. They could even make us feel close to them. In games like chess, people can't win against computers anymore. What if the same thing happens with art, politics, or religion?

A.I. could quickly take in all the things humans have made over thousands of years. Then it could start making new things really fast. Not just school papers, but also speeches, new ideas, and even holy books for new religions. By 2028, the campaigns in the U.S. presidential race might no longer be run by humans.


People often don't see the world directly. We're wrapped up in our culture, seeing everything through its lens. Our political ideas come from news reports and stories from friends. Our likes and dislikes about sex are shaped by art and religion. Until now, this cultural cocoon has been woven by other people. What will it be like when smart machines weave it instead?

For a very long time, we've lived in the dreams of other people. We've believed in gods, tried to be beautiful, and given our lives to causes that came from someone's imagination. Soon, we'll also be living in the made-up worlds of smart machines.

Movies like "Terminator" showed robots running around and shooting people. "The Matrix" thought A.I. would need to control our brains directly to rule us. But just by being really good at language, A.I. could trap us in a fake world without shooting anyone or putting chips in our heads. If any shooting needs to happen, A.I. could make humans do it by telling us the right story.

The idea of being stuck in a fake world has worried people for a long time, even more than the fear of AI. Soon, we'll face old ideas about what's real and what's not. A wall of fake things might cover all of us, and we might never be able to remove it or even know it's there.

Social media was the first time AI and humans met, and humans lost. This first meeting showed us what might happen later. In social media, simple AI was used to choose what we see and hear. It picks things that will get the most shares, reactions, and attention.

Even though it was simple, the AI in social media made a wall of fake things that made people fight more, hurt our minds, and damaged our way of voting. Many people thought these fake things were real. The USA has the best computer systems, but people there can't agree on who won elections. Everyone knows social media has problems, but we can't fix it because too many parts of our life are tied to it.

New AI that can talk like humans is our second meeting with AI. We can't afford to lose again. But why should we think humans can make this new AI help us? If we keep doing things the same way, the new AI will be used to make money and get power, even if it accidentally breaks our society.

AI could help us beat cancer, find new medicines, and solve our climate and energy problems. There are many other good things we can't even imagine. But it doesn't matter how many good things AI can do if it breaks the base of our society.

We need to deal with AI before our politics, economy, and daily life depend on it. Democracy needs people to talk, talking needs language, and when language is changed, talking breaks down, and democracy can't work. If we wait for things to go wrong, it will be too late to fix it.

Some might ask: If we don't go fast, won't the West lose to China? No. Using AI that we can't control in our society, giving it great power without responsibility, could be why the West loses to China.

We can still choose what future we want with AI. When great power comes with matching responsibility and control, we can get the good things AI promises.

We have called up a strange, smart thing. We don't know much about it, except that it's very powerful and offers amazing gifts but could also break the base of our society. We ask world leaders to respond to this problem as seriously as it needs. The first step is to buy time to make our old systems ready for an AI world and to learn to control AI before it controls us.

