Inside the Rise of AI-Driven Combat

– [Johnny] This is a battlefield where both armies have the
same weapons capabilities, but one of them has AI
and the other does not. Let’s see how a battle
between these two plays out. In the process, we’ll
learn why Vladimir Putin once told a group of young students that whoever leads
in AI will rule the world, and why the White House recently issued its first memo on AI, calling it an era-defining
technology with significant and growing relevance
to national security. To understand this, I’ve
teamed up with Shashank Joshi. – Hello. Good afternoon, Johnny. – [Johnny] The Defense
Editor at “The Economist” who’s likely one of the
most knowledgeable people on the planet when it comes to this topic. – [Johnny] We’re beginning to
see artificial intelligence pretty much in every single
decision made in war. So let me show you how
artificial intelligence is transforming war. – [Shashank] Drones are often the munition, the weapon system, the thing that does the killing, and this is the killer
drone, a killer robot, but what I think is that
often the key decisions in war are not the ones at the very end, they’re not the ones that happen just before a bomb goes bang. And I think that’s where
AI is really affecting the battlefield the most. – Okay, so we’ve made these
two hypothetical armies. Here on the battlefield, both of these armies are effectively trying to do the same thing. They’re both trying to somehow get eyes into enemy territory, to find the enemy’s most
valuable military targets, to fix their exact location, to determine what weapons to use and how to properly
target and deploy them, and then to actually
fire on those targets, to engage so that it
actually hits the target. And then finally, to
assess to see if it worked. This is called the kill chain,
and it is how war is fought. But executing this kill
chain is not straightforward. It’s the job of commanders on both sides to make countless decisions, often with incomplete or
unreliable information, and victory tends to favor those who can gather the best information and make accurate decisions the fastest. So let’s see how both of these sides approach their kill chain, one with AI and one without. (dramatic music) – Well, militaries will
use what they call sensors to go find things, that could be anything
that looks at the world. – [Johnny] Both have all sorts of sensors that are looking into
their enemy’s territory. – [Shashank] The classic example is an electro-optical sensor
on a satellite, right? And that’s fancy words for a camera that is just looking at stuff. – [Johnny] They both
have drones in the air scanning for targets. – [Shashank] A reconnaissance
team on the ground. – [Johnny] They’ve got radar sensing what’s flying through the air. The data from the
sensors is being recorded and eventually is sent back
to a central command center. This is where the decisions are made on what the army will do next using all of this data from their sensors. – So someone has to be sifting through all the satellite images. – And this is where we see
the first big advantage for the side with AI, computer vision. – Object recognition has just
exploded in how good it is. If you look back at the way that AI broke through and advanced around 2012, it was picking out images, in particular recognizing images of cats, and if you can recognize a
cat, you can recognize a tank. – [Johnny] The AI command center is armed with computer vision
software that is processing all of this incoming visual data and tagging it with what
they think they’re looking at and its precise location. – [Shashank] Now you have
machines automatically looking over those images,
picking out enemy planes, enemy formations and saying, “Hey, I think with 90% probability, that I’m pretty sure is a tank, and I’ve worked this
out on the drone itself, the exact coordinates it’s at.” – The other military that doesn’t have AI is doing the same thing, but without computer vision algorithms. – [Shashank] There’s a lot
of friction in this process. You had humans poring
over every single image. – [Johnny] They’re slowly
identifying their targets. A tank here, an airfield
there, plotting them on a map, doing it manually. Oh, and by the way, this map only exists here in this command center. They haven’t even started
communicating all of these targets and intelligence to other command centers. Meanwhile, the computers over
in this other command center are automatically adding all
of their findings to a database that’s in the cloud that
other command centers have access to as well. – An AI-enabled army is
fusing all kinds of data, mobile phone records, data
from ship tracking websites, aircraft tracking websites,
consumer retail activity, who is buying products
near this naval base or near that airfield that
might indicate activity. So you’re fusing these
vast torrents of data, spitting out anomalies. – So their maps are
filling up much quicker, filling up with potential targets, and even context on each of these targets. – A commander who sees the targets then has to decide a lot of things. Are they gonna hit them or
not? Is it valuable enough? What am I gonna use? Do I use a plane, an artillery
battery, something else? These are all basically
the bread and butter of any command decision. – But this is still just the
beginning of the kill chain. These are hard decisions, but the AI-enabled
commander has some help. A software platform has fused
all of this data together and is starting to make
predictions and inferences, giving recommendations to the commander, taking into account not only the target they’re trying to hit, but also many other important factors. – I know that you have this many planes. I know that you have this
much fuel on the airfield, this much ammunition. Here is the optimal way
to destroy that target using these aircraft. Coolant, fuel, oil, repairs, rest for the
crew, food for the crew. These are all so fundamental. So a lot of the stuff that was being done in your headquarters by dozens of mid-level officers with pens, papers, spreadsheets
can be done by software. So a human might be able
to do all of these things, but it might take them like
a couple of days to do it. The code can do it in
like a 10th of the time, way, way more quickly. – So while the traditional army is still manually plotting
potential targets, the AI army’s commander
is looking at a list, a list generated by his
software, a list of targets. The software is proposing that they could hit a group of tanks that they found near the front line. They could take out a bridge that serves as an important supply line. They could target an ammunition
depot where the AI believes most of the enemy’s
ammunition reserves are being held. The commander evaluates
these recommendations from the software and decides to target the ammunition depot using some artillery cannons that the software has confirmed
have recently been serviced and have plenty of ammunition
for this operation. He taps a few buttons on a screen, and a mix of humans and software
start preparing to engage. This all happened very quickly. And over on the other side, they’re just starting to assemble an initial
list of valuable targets. The AI-enabled commander now has time to think about other potential targets. His software is recommending
that he send a swarm of drones with explosives attached to them to destroy this fleet of enemy tanks. And here’s where we see yet another massive advantage of AI. These drones are piloted by humans, but as they get closer and
closer to their target, the enemy sends out jamming
signals to interfere with the connection between
the pilot and the drone. – Suddenly, I’m not able to pilot my drone because there’s all this radio noise overwhelming the signal. – [Johnny] The pilot can no
longer control the drone. – What if the drone didn’t
need to have a pilot? – [Johnny] Thanks to
recent advances in AI, this drone can start flying itself. – It is looking at the object, it’s comparing it to a library of images stored on the platform, and it’s saying, “I know I am supposed to hit this thing.” So it locks onto the target and it says, “I am gonna just keep going, I can maneuver to that target
in the final 100 meters, 200 meters, 300 meters of flight.” That’s a real shift in
the character of warfare when you have that precision guidance, not just on exquisite high-end systems, but on so many tiny little systems throughout the battlefield. – You can see where this is going. The AI-enabled army is able
to pull off these attacks much quicker with way more information, all enabled by algorithms that are crunching huge amounts of data. So even though these two
armies have the same hardware, what matters more and
more is what’s inside. – What really matters is the software. What really matters is
the algorithm inside. – Okay, so this was a hypothetical
and simplified scenario, but in the real world, this is effectively the
direction that warfare is going. Relying more and more on software. Let’s go through a few examples. Like in Gaza, where the Israeli military uses data from tapped phones and people’s locations to algorithmically
choose targets to strike. – There’s a kind of AI target system generating huge numbers of targets per day of suspected Hamas militants. – [Johnny] According
to six Israeli soldiers speaking to an Israeli publication, the IDF has based a lot of its airstrikes off
of AI-generated lists, treating these target
lists, in their words, “as if it were a human decision.”
this characterization, but the point is AI algorithms are already being used on the
battlefield to choose targets. Then there’s Ukraine. – Ukraine above all is
going to be the key test bed because that’s where we’re seeing some of the most advanced
military AI play out on both sides, tested and experimented with. – The major innovation
here has been in drones, something we made a whole video about. More and more software and computer vision is allowing these drones to complete the final leg of their flight, to hit their target even when
the signal has been jammed and the pilot has lost
control of the drone. Here in the United States, we are seeing more and more
tech startups who are preparing for this new form of warfare. One that relies so much
more on sensors and data and algorithms that can
make sense of it all to make wartime decisions. – This is all about autonomy, and that will be delivered more than anything else by software. – The US military has been developing an AI system that can
take a massive amount of aerial and satellite imagery and pick out potential enemy targets, sometimes more accurately than humans can. The UK has a similar program, and because data from
sensors is such a huge part of this capability, you see more cloud computing companies like Amazon and Microsoft
getting involved in defense. One data analytics company, Palantir, works a lot in defense now, and says that their software
allows the US military to conduct an operation that
used to require 2,000 people with just 20, because software can now
do a lot of that work of analyzing, synthesizing,
and helping make decisions that used to be the
work of lots of people. The holy grail for any military trying to get ahead on this is to aggregate all of the data from all
of their many sensors into one integrated platform, one central command software that can help commanders make decisions. The US Army is working
on one ambitious project that does just that. – [Reporter] Project Convergence
uses high-tech equipment the Army says is more efficient
and faster than humans. – So these are just a few examples of real world applications of
AI being integrated into war. There are so many more, and there will continue to be
new programs and initiatives as this technology
changes and as we learn. – If it really does all the things that proponents say it can do, then there are some people
who will draw a comparison to things like the advent of gunpowder. – Okay, but wait, shouldn’t
we slow down and ask some questions? Yes, we can do this. This is something that
technology allows us to do now. But is it ethical? Is it legal? Is it safe? Could it get out of control if we give more and more information and decision-making
abilities to a computer? Will AI make war easier and quicker? Will taking humans out of the
process make it less humane? – A lot of people who are against what they call killer robots will say a human must always retain control over the decision to use force. You know, do I kill them or not? I think that’s the wrong
way to think about it. The bit that will always
be most controversial is the final lethal decision. You could argue the really
consequential step was taken when you identified something as a target in the first place. Would we rather have a commander who is pressing a red
button to approve a target that’s offered to him by a computer, but is just mashing the
red button on a loop saying yep, yep, yep?
human control to me, even though from a
technical perspective, yeah, they’re signing off on each target, but they’re doing so
without any real engagement, moral or intellectual
with what they are doing. – Like any technology, it’s
not one way or the other. It comes down to how well we understand it and how we use it. Shashank argues that what’s most important isn’t whether or not
we give decision-making power and autonomy to these machines, but whether the
human who is in control, and who is actually making decisions based on what the machine is telling them, understands the software: what it’s good at and what it’s not. – Whereas if you have a computer system that is offering a bank of targets and the commander is scrutinizing those and knows the system, he knows what it’s been trained on, and therefore he knows when
it might work effectively and when it might not, and he
knows when not to rely on it. He knows when to question it,
when to mistrust its inputs. That seems to me much
more like human control, even if he is ceding quite a lot of control in some circumstances, but holding it back in others. For me, that is the debate. It’s way more complex than, you know, hand over to the killer robots or not. It’s when are you trusting the systems? What do you know about them? Have you interrogated them to understand what their weak spots are? That to me is really where
the debate should be. – As we saw earlier, in war, whoever can get the most information and make the most accurate, fastest decisions is usually the victor. Humans tend to assume that decisions made by a machine are correct, which makes us particularly prone to being swayed by AI-generated information. I mean, I feel this,
when I talk to ChatGPT, I often feel a bias to believe it because it sounds right. The machine just spit it
out in this digestible, authoritative, natural language. So there will be a lot of incentive and momentum to let the
machine do a lot of the work to make decisions quickly and deal with the consequences later, something we may not be able
to catch until it’s too late. – Machines are gonna make mistakes. They’ll do some horrible things. They will encourage you to bomb a building that you thought was a
military target that turns out to be a shelter for civilians,
but hey, guess what? Humans do that as well. The question is, are machines gonna reduce the
incidence of those things? – The reality is there’s no going back. No one’s gonna pump the brakes on integrating AI technology into warfare. No one’s gonna wait for the ethicists and the lawyers to decide if this is ethical or legal or safe. Geopolitics will force this forward. All we can hope for is wisdom and foresight from those
making the decisions on how we use these systems so that we approach it
humanely and responsibly. – The future is going to be humans working with AI to wage war.
