How to Legislate AI
(bright music) – Hey, ChatGPT, a quick question. – [ChatGPT] Sure, Johnny, what's up? – A while back I was talking to a friend who told me that AI has the
potential to destroy our society and potentially end humanity.
– Ah! The humanity! – That’s not true, right? – [ChatGPT] Well, it’s not
entirely out of the question. (tense music) – Warning of the dangers of AI. – About AI. – Artificial intelligence. – The threats AI poses
to the social order. – There’s been so much talk
about the danger of AI. – The risk that could lead
to the extinction of humans. – Does it worry you, AI?
– Somewhat. – And I always find this
pretty unsatisfying, because they talk about it in vague terms. A threat to democracy. – Enormous threats to democracy. – Losing control. – So could we lose
control of civilization? – 100%. – A danger to our society. So today I wanna show you
what those dangers and threats actually look like. I've been super deep
in all of the new laws that are coming out and being proposed. And they really give you a
solid idea of what lawmakers and regulators are worried about. How they think this new technology that is rapidly developing
could affect our societies. And I’m gonna lay it out in
the plainest terms possible. But first, real quick,
a 60-second explanation on what AI actually is. For decades, we’ve used
computers to do things that our human brains can’t do super well. We’ve developed entire languages
to talk to computers, to give them very specific instructions on how to execute a task. It's called code. And we've
been doing this for decades. It’s gotten really sophisticated. The difference now is that, instead of really specific
instructions from a human, we’ve built software that
teaches itself how to do stuff. The humans now just need
to gather tons of data from the world and feed
it into this software. And the whole point is
that we don't really know what's happening inside this black box. Being able to make accurate predictions and solve problems based
on a bunch of raw input from the world is exactly
what our brains do. It is called intelligence. In this case, an artificial version. And it seemingly has the potential to change everything we do, which is exactly what makes it dangerous. So now let’s get specific. What do we mean by danger to humanity? How could this fancy
algorithmic computer software actually do something bad?
– Destroy. – And to get started there,
I wanna show you something. Today we’re gonna be
rolling the dice on AI, and look at six scenarios of how AI could negatively affect humans, and what we can do to prevent it. And as we do this, it’ll be helpful if you remember this
graphic, this black box, where humans tell the AI what to solve, give it a bunch of data, and
let it figure it out by itself. In this black box, lies the
potential promise and peril of this new technology. All right, here we go.
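To make that black box a little more concrete, here's a tiny, purely illustrative sketch (the data and model are made up): the old way, a human writes the rule; the new way, the human only supplies examples and the software fits the rule itself.

```python
# Purely illustrative: a hand-written rule vs. a model that learns from examples.
from sklearn.linear_model import LogisticRegression

# The old way: a human spells out the instruction in code.
def is_spam_rule(email: str) -> bool:
    return "free money" in email.lower()

# The new way: the human only gathers examples; the "black box" finds the rule.
emails = ["free money now!!!", "team meeting at 3", "claim your free money", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam
features = [[e.lower().count("free"), e.lower().count("money")] for e in emails]

model = LogisticRegression().fit(features, labels)
print(model.predict([[1, 1]]))  # we see the answer, not the reasoning
```

The fitted model is the black box: the rule it learned is spread across numbers nobody wrote by hand and, in a real system, almost nobody inspects.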
(tense music) First up, predictive policing. – Police should use all
the technology available to prosecute crime, but
not in a predictive way. So you are innocent until
you are discovered guilty, not the other way around. – [John] This is Carme Artigas. She’s an expert who spent 30
years in machine learning. She was the first Secretary of State of Artificial Intelligence in Spain, and now she works on the
UN Advisory Board on AI. – We will call for the need for
the global governance of AI. – When it comes to understanding
artificial intelligence and its effect on society, she's really the best person to talk to. So AI, you give it a lot of data, you ask it to solve a problem, and then it predicts the
answer to your problem. The more data you give it or train it on, the more accurate its results are. So like for those monitoring hurricanes, the more data it has on
sea surface temperature and air pressure and wind
speed and humidity levels and ocean heat content
and historical storm data, the more accurate it will be at predicting where the next hurricane is and what it will look like. – What is that?
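As a toy version of that hurricane model (every number here is made up), more relevant measurements per storm simply means more signal for the model to fit:

```python
# Toy sketch with fabricated data: predicting a storm's peak intensity from
# a handful of measurements. More (and better) measurements, better forecasts.
from sklearn.linear_model import LinearRegression

# [sea_surface_temp_C, pressure_hPa, wind_kph, humidity_pct] for past storms
X = [[29.1, 1002, 120, 80],
     [27.5, 1010,  85, 70],
     [30.2,  995, 160, 85],
     [26.8, 1014,  60, 65]]
y = [165, 110, 205, 75]  # made-up peak intensities (kph)

model = LinearRegression().fit(X, y)
print(model.predict([[28.9, 1000, 130, 82]]))  # a forecast, only as good as its data
```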
– What if we applied the same approach to crime? Imagine a world where the
local police department has access to data of all
kinds, which they already do. – We’re talking about
biometric data in general. That means face, that
means voice, even movement, and these types of records. If I'm in a park, what I don't want is my government, in real time, recognizing who I am, who I'm with, at what time, and doing what. Because we are innocent people.
We have done nothing wrong. – The police department who’s
in charge of fighting crime would have an incentive to use
a machine-learning algorithm and AI to take all of this data and use it to try to predict
who is going to commit a crime. I mean, it's a tempting idea. Imagine if we could actually prevent crime before it happens. – Let's not kid ourselves. We're arresting individuals
who have broken no law. – [Announcer] Minority Report. – We don't believe in legal systems that predict the rate of crime. The concerning element here is that this could be used by
governments or private actors to track individuals without consent, infringe their privacy rights, establish a mass surveillance system, and there’s also a risk of
wrongful identification. – So this is already
kind of happening here in the United States.
(sirens wailing) Recently, police in Detroit
were looking for a thief, and they had security camera footage. They ended up using an AI algorithm to search driver's license records, and they found what looked like a match: this man, whom they arrested. He spent a night in jail before they realized they had the wrong person; the algorithm had made an inaccurate match. The police of the future
will definitely use AI to do their job better. But the nightmare scenario is
that the police departments get so thirsty for new data that they start tracking
everyone and everything in the name of getting ahead of crimes. So in this new AI bill in the EU, they’ve made it illegal to
collect all of this data and try to train AI
systems to predict crime. Because in their words,
people should always be judged based on their actual behavior. Okay, let’s see what else
the future holds here. (tense music)
(timer ticking) Yes, elections. We just
had one of those here. Experts are worried that
AI will affect elections, our democracy, because a huge part of making elections and democracy work is a sense of trust: trust in the system itself, that your votes actually count, and trust in the information you receive about the candidates and about what happened in the election. So one of the nightmare scenarios with AI is something that's already happening, but it's in its early phases. Deepfakes, something we
made a whole video about. And as we covered in our
previous video on deepfakes, it’s becoming easier and
easier to make a deepfake that looks like a politician or a leader, saying something they didn’t say. Lucky for us though, we humans are pretty good
at deciphering these fakes, partly because hundreds of
thousands of years of evolution have trained our brains to be really discerning of human faces. It's how we read other people. So for now, the effectiveness of deepfakes in swaying elections or
spreading a lot of misinformation has actually not been super big. But we’re just in the early
phases of all of this. Deepfakes, or all kinds
of synthetic media, meaning fake video, are going to get way better really quickly. You can imagine an election in four years, where in Arizona, a series of robocalls are
placed using an AI system that has really authentic
sounding, deepfaked voices, alerting residents that
their local polling station has been taken over by a militia, and that for their own
safety, they should stay home. Or in Miami, a synthetic video goes viral of poll workers burning paper ballots, or tampering with voting machines. And people see this stuff,
and they believe it, and they stop believing in our delicate system called democracy. But you know what? This is actually not the thing that people are most worried about. Over the years, there’s been all kinds of image manipulation technologies, like when Photoshop came
out, people freaked out. We can manipulate images? – Technology makes it difficult, maybe even impossible to tell
what’s real, and what’s not. – But we all got really savvy
about that really quickly. We now know to be suspicious of images. The scarier result of this is actually that we start to doubt everything we see. – It's to try to make people believe nothing, to lose trust in our institutions. – So what do you do about this? Well, in California, lawmakers are requiring online platforms like YouTube and Facebook
to find synthetic media and label it, or take it down. Some of these bills even prohibit people from posting election-related content that has been generated
or modified using AI, at least within a certain
timeframe of the election. The AI bill in Europe actually requires anyone that makes deepfakes or synthetic media to code in an invisible watermark, something we can't see, but that a piece of software could detect and know is fake. – But in the AI Act, what we make compulsory by law is that you must identify whether something has been generated by a human or by an AI.
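As a toy example of the idea (not the AI Act's actual mechanism), here's a least-significant-bit watermark: a mark the eye can't see but a few lines of software can check. Real provenance watermarks are statistical and much harder to strip, but the principle is the same.

```python
# Toy illustration only: hide a tiny "AI-made" tag in the lowest bits of an image.
import numpy as np

TAG = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical tag

def embed_tag(image: np.ndarray) -> np.ndarray:
    marked = image.copy()
    flat = marked.reshape(-1)
    flat[:8] = (flat[:8] & 0xFE) | TAG  # invisible to the eye
    return marked

def looks_ai_generated(image: np.ndarray) -> bool:
    return bool(np.array_equal(image.reshape(-1)[:8] & 1, TAG))

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in "generated" image
print(looks_ai_generated(embed_tag(img)))  # True: software sees the mark
print(looks_ai_generated(img))             # almost certainly False
```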
– Okay, that's the future of democracy. Let's roll the dice again.
(tense music) Okay, this one’s
interesting. Social scoring. – So for us, social scoring is a way that the governments
can control the population, and it can lead to unfair treatment or discrimination. – Imagine a world where your behavior, both online and off, was
tracked and tabulated to create a personal score. – For example, in a hypothetical country, the government deploys an
AI-driven social scoring system that could monitor
citizens’ online behavior. – [John] Where you live. – [Carme] Financial transactions. – [John] How good you are at
paying your loans on time. – [Carme] Whether they ever complain about their government or not. – And this all contributes to a score. – And that ranking allows you access, or denies you access, to public services, to housing loans. So for us, it's a totally
unacceptable risk. – Okay, that all sounds really scary, but what’s crazy is this
kind of already exists here in the United States. – Credit scoring. I mean, the US credit scoring is also a way to discriminate against people. – Here in the US, we let corporations collect
a bunch of data about us, mostly our finances, and then use an algorithm
to assign everyone a score that affects our ability to
get loans, housing, jobs, and even how much we end
up paying for insurance. – [Announcer 1] Get your
FICO score for free today. – The credit score is totally normalized. We're very okay with this, maybe because it's not that invasive. And yet even this kind of benign social scoring system is already discriminatory
against certain groups. Imagine a world where way
more information is scraped and used for your social score. Your employer could buy that data and track even more data about you on
the job while you’re working. And they could use all of that to evaluate whether or not you’re fit for a promotion, whether or not to hire
you in the first place. They could even track
your movement at work, analyze your face, your
enthusiasm towards the work, your conversations with colleagues, and then they could let the machine decide whether or not to keep you, or fire you. If you’re applying to university, you would send in your photo, your essays, your applications, your
social media handle, all to an AI-powered admission system that analyzes it in much more
detail than a human could, and decides who it lets in. Honestly, this sounds dystopian, but it’s also really efficient, and actually could be more
accurate if we got it right, theoretically taking out the human bias of who gets into the university and who doesn't. But it turns out that an AI is actually biased, too. It's biased toward whatever information it has been trained on. And so without some oversight, these AI social scoring systems could start to create major discrimination against certain groups, and we would never know it, because all of the discrimination is happening inside of that black box. All we see is the output, what comes out on the other side. And we're kind of primed to think that it's accurate because the big fancy machine did it.
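Here's a deliberately simple, entirely made-up illustration of how that happens: train a scoring model on historical decisions that were already biased, and the model quietly learns the bias, while all anyone ever sees is the score.

```python
# Made-up example: a loan-scoring model trained on biased historical decisions.
from sklearn.linear_model import LogisticRegression

# Fabricated history: [income, neighborhood], where neighborhood 1 was
# denied in the past regardless of income. That history is the hidden bias.
X = [[60, 0], [30, 0], [55, 1], [80, 1], [40, 0], [70, 1]]
y = [1, 1, 0, 0, 1, 0]  # 1 = approved back then, 0 = denied

model = LogisticRegression(max_iter=1000).fit(X, y)

# Two applicants, identical income, different neighborhoods:
print(model.predict_proba([[60, 0]])[0][1])  # higher "score"
print(model.predict_proba([[60, 1]])[0][1])  # lower score, inherited from the past
```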
So many of you are probably thinking that this already happens in China. There's a social credit system. – If you thought the way Facebook tracks you was scary, it's got nothing on
the Chinese government. – It gives you a score
between 600 and 1,300. And depending on where you live, it can determine what kind
of school your kid can go to, what parts of the country
you can travel to, whether or not you can
use the high speed trains, what job you can get. And in some parts of China,
they’re experimenting with punishing low
credit score individuals with slower Internet speeds. Meanwhile, people with high credit scores get all kinds of perks: better schools, high-speed trains, and even expedited government applications. Again, this isn't all of China, and it's not all centralized
into one giant database. But with the advent of more and more powerful artificial
intelligence, no doubt, China’s social credit scoring system will become more robust and more invasive. And that is what lawmakers in
Europe and the United States are scared about. Europe classifies any kind of social scoring using AI as an unacceptable risk. You can't use AI systems
to rank or classify people based on how they act in society. – So this is something that, according to our values, is not acceptable. It's a prohibited use of AI. – One of the tricky things about reporting this story... (tense music) Yes, nuclear weapons. It seems like every one of these scenarios has a movie to go along with it. It turns out we create a lot of sci-fi about the scenario of
machines taking over. (tense music) But one big thing I’m
learning in this story is that the reality of these risks is much different than how
they're portrayed in the movies. It's often more boring,
but more dangerous. Like we saw with predictive policing, or with social scoring. But I would say for this
one, nuclear weapons, it’s actually kinda like the movies. In the movie “Terminator,” an AI-powered missile defense system, known as,
– [Announcer 2] Skynet. – Becomes self-aware, and launches an all-out nuclear
assault against humanity. The real fear here isn’t as extreme, but it’s a similar situation, where we fear giving the
machine too much autonomy to make these high-stakes
decisions about war, the ultimate of which is
launching a nuclear weapon. What’s tricky here is an
AI is often a lot better than a human at synthesizing
lots of information to make decisions that are
more likely to achieve the desired results. It can take into
account so much more data than a human brain can hold at once. And as the AI becomes better
and better at reasoning, it will become better than
us at making decisions that get the desired result, which in war, is a very difficult thing. So in a future where a lot
of our military systems are run by an AI with
sensors all over the place, drones and ships and
cameras and satellites, monitoring our enemy, mark my words, AI is going to become a
bigger and bigger part of our defense strategy. And yet you can imagine a
world where an AI system is in charge of making
real-time decisions, and one day it sees an adversary
conducting military tests with big rockets and missiles. These are just tests, but
the AI doesn’t know that. This also correlates
with some troop movement in the adversary's country, and some unusual communication traffic. The system sounds the alarm bells, the President and the
Congress are led into bunkers, and the AI system sends the command to an American submarine, telling them to launch
a nuclear weapon now. (tense music)
Okay, real quick, this scenario is not likely. It is not likely at all. Even if we gave a lot of
autonomy to AI systems, it’s very unlikely that the AI would be able to do
all of this on its own. But there’s still a chance that it could. And that somewhere within this black box, something would happen that
would lead to a nuclear launch that would be really catastrophic. So because of that, lawmakers have all moved
very quickly on this one. And there’s currently a bill
floating around the Senate called the Block Nuclear
Launch by Autonomous AI Act. The US is hoping that other
countries do this too, so we can all just agree, “Hey, the machine shouldn’t
be launching nukes. Like, can we all agree on that?” Okay, so nukes will be off the table soon. But there’s a bunch of other
very powerful weapon systems that are not nukes. In fact, AI is already being used in Ukraine and in Israel, where the military gets recommendations from its AI system on strike targets. That threatens to make war
more frictionless, easier, and less transparent as
to how decisions are made, and who should be held responsible. We’re doing a whole video
on how AI is affecting war, so stay tuned for that
coming in future weeks. For now, let’s roll the dice. (tense music) Okay, we’ve got critical sectors.
Well, that sounds boring. That’s because it is. Until it’s broken. Critical sectors are things
that you and I take for granted, but that are important
for our very survival. Pipelines, water,
electricity, transportation, food, communication systems. Most of us wouldn’t be able
to stay alive for very long if these systems went down. Okay, but there’s a
world where these systems are relying on artificial intelligence, machine learning algorithms, to help run them more efficiently. I mean, this is stuff
that AI is really good at and humans just aren't. Imagine a water treatment plant, which takes your sewage,
and turns it back into water that can keep you alive. Soon, most of the decisions
at this treatment plant will be run by an AI
system that makes millions and millions of small
decisions every minute, recognizing patterns and problems, optimizing water levels and chemical usage in ways that humans could never. Or think about traffic. An AI will run your traffic lights, your public transportation. It will adjust the traffic
flows in optimal ways, responding to real-time
information of traffic patterns and accidents, weather conditions. This will make your life better. It will reduce congestion and improve overall
transportation efficiency. So now just apply this
to so many of the systems that you interact with every day, that you don’t really think about. Humans won’t be necessary
at the water treatment plant or at the company that
runs the electrical grid, except to come in and repair stuff or do maintenance after the
AI tells them that it’s time. For the most part, the AI
will fix its own problems and learn from its mistakes, getting better and better
at an exponential rate. This sounds awesome, right? Yes, until we see what happens
when lightning strikes, and the power grid goes down. There’s limited backup power, and the AI has to start making decisions. This AI system is programmed
to reduce inefficiencies and maximize profit for
the company running it. So it analyzes everything it knows, and it decides to keep the limited power going into only the rich neighborhoods, the ones that consume more electricity and that pay their bills on time. – So these concerns are regarding biases in this critical infrastructure management. Vulnerable populations, especially elderly or sick people or low-income areas, could suffer or be cut off from very basic essential services like electricity, because the system could exacerbate the inequality, putting the needs of wealthy individuals over the general welfare of all citizens. – So once again, we see that the AI could be discriminatory in a way that is not fair. But the other concern
here is the black box, that we don’t really know how the AI is making these decisions. Like one night, the water
treatment plant is humming along, but two of the bacteria sensors get bumped by something in the
water and damages them. They're still running, but they're not accurately
recording the bacteria levels in the water. But the AI system doesn’t know this. And it starts to inaccurately
balance the water. And soon contaminated water is being piped into every house in town. People are getting sick, and they’re flooding to the hospital. And it’s days before anyone realizes that it’s because of the water. Because again, no one was there on-site. Apply the same problem to traffic. You’ve got this great algorithmic software that is running your traffic lights. It uses GPS information
from everyone driving, to synthesize the best traffic
pattern to reduce congestion. But then one night, it runs a software update that slightly changes the format required to read the GPS coordinates. As the GPS data starts flowing in the next morning during rush hour, all this coordinate data is now being interpreted completely wrong.
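Here's a hypothetical sketch of how small and silent that kind of bug can be. Say the feed switched from "latitude,longitude" to "longitude,latitude" and nothing in the pipeline checks:

```python
# Hypothetical sketch, not any real system: the parser still assumes latitude
# comes first, so the new "lon,lat" feed is read silently wrong.
def parse_position(line: str) -> tuple[float, float]:
    lat, lon = map(float, line.split(","))  # assumes "lat,lon"
    return lat, lon

feed_line = "-115.17,36.11"            # new feed format: longitude first
lat, lon = parse_position(feed_line)   # no crash, no error, just wrong
print(lat, lon)

# A one-line sanity check would have surfaced it immediately:
if not -90.0 <= lat <= 90.0:
    print("latitude out of range: probable format mismatch")
```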
Low-traffic areas are suddenly highly congested. Tons of cars are suddenly
on these small roads. Commuters are stuck in traffic for hours, ambulances and fire trucks encounter unexpected traffic jams. The city descends into chaos. And again, no one really knows why. The humans have become so
out of touch with the system because the system is so smart, it takes five days before
the traffic technicians and engineers find out what’s going on. They fix the bug, and
things return to normal, but damage and even death have
occurred because of this bug. Critical sectors and infrastructure are so important, not only to keep our lives flowing smoothly, but to keep us all alive. So we can't mess with them. We can't offload the responsibility to a machine learning algorithm that could potentially lead us astray, and we wouldn't know why. So what do we do to prevent this? What can lawmakers do to protect us? And the answer is: open up the black box. – So you are running a
critical infrastructure. Show me that you have trained it with representative data sets. Show me that you're not biasing. Show me there's no discrimination. And then I give you this good-quality certification for your product and you can run, as we have done with all industries in the past.
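What might "show me you're not biasing" look like in practice? Possibly something as simple as this hypothetical audit check, comparing a system's error rates across groups before anyone signs off on it:

```python
# Hypothetical audit check: compare error rates across groups and flag big gaps.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, model_decision, correct_decision)."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, decision, truth in records:
        totals[group] += 1
        errors[group] += int(decision != truth)
    return {g: errors[g] / totals[g] for g in totals}

# Made-up audit log for illustration:
audit_log = [("A", 1, 1), ("A", 0, 0), ("A", 1, 1),
             ("B", 0, 1), ("B", 0, 1), ("B", 1, 1)]

rates = error_rates_by_group(audit_log)
print(rates)  # group B's error rate is far higher than group A's
if max(rates.values()) - min(rates.values()) > 0.2:  # threshold is arbitrary here
    print("Disparity too large: investigate before certifying")
```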
– So like a lot of life-or-death products in our life, like medicine and whatever, companies that use AI to
run these critical systems will need to show the government that they are assessing these risks and making sure they don’t happen. They’re gonna have to
be totally buttoned up with their cybersecurity so that these things don’t get hacked. So we’re still gonna be able to leverage the immense benefits of
artificial intelligence in running these systems, but we’re gonna do it with
responsibility and safety and a little bit of caution. And with that, let’s get to our last one. (tense music) Okay, we’re gonna end on a high note here. The fact is, if we do this right, advances in AI could
dramatically change our world for the better. In a world where AI runs our hospitals and our medical research,
we could save lives, find new drugs that treat diseases that once were untreatable. These systems will allow us to predict and prepare for extreme weather events. They’ll allow us to optimize
water use in our agriculture, monitor soil health, and even predict pest
outbreaks before they occur, dramatically reducing the
need for harmful pesticides and fertilizers. This is all very possible and it’s coming. So I am excited and optimistic
about the future of AI, especially when there are
smart people like Carme who are working on
legislation to keep guardrails around this technology, so that we can develop it responsibly and reap the benefits
while mitigating the risks. (upbeat music) – [ChatGPT] Well, you made
it to the end of the video. Congratulations. I always knew you would. (upbeat music)