Deepfake tech takes on satellite maps
Such AI-generated imagery poses serious military and political challenges

Europost
While the concept of "deepfakes," also known as AI-generated synthetic imagery, has been decried primarily in connection with nonconsensual depictions of people, the technology is dangerous (and interesting) in other ways as well. Most recently, for instance, researchers have shown that it can also be used to manipulate satellite imagery, producing realistic-looking - but entirely fake - overhead maps of cities.
The team, led by Bo Zhao from the University of Washington, trained an AI algorithm to generate deepfakes by imposing the visual traits of satellite imagery it had learned onto different base maps. They could use Tacoma's roads and building locations, for example, but superimpose Beijing's taller buildings or Seattle's low-rises. Greenery can be applied, too. While the execution isn't flawless and a close inspection shows that the fake maps aren't as sharp as the real ones, they are close enough that the researchers believe an observer might attribute any oddities to low image quality.
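The study itself relied on GAN-based image-to-image translation to map one city's visual style onto another's layout. As a much simpler stand-in for that idea - not the team's actual method - the toy sketch below transfers per-channel color statistics (Reinhard-style) from a "donor" city's imagery onto a base-map layout. All variable names and the random test images are hypothetical illustrations.

```python
import numpy as np

def style_stats_transfer(content, style):
    """Toy illustration of imposing one image's visual 'traits' on
    another's layout: match each channel of `content` to the mean and
    standard deviation of `style`. The actual study used a GAN-based
    image-to-image translation model; this is only a minimal stand-in."""
    content = content.astype(np.float64)
    style = style.astype(np.float64)
    out = np.empty_like(content)
    for c in range(content.shape[2]):
        mu_c, sd_c = content[..., c].mean(), content[..., c].std()
        mu_s, sd_s = style[..., c].mean(), style[..., c].std()
        sd_c = sd_c or 1.0  # guard against flat (zero-variance) channels
        # Shift and rescale the content channel to the style's statistics.
        out[..., c] = (content[..., c] - mu_c) / sd_c * sd_s + mu_s
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical inputs: a base-map layout and a donor city's look.
rng = np.random.default_rng(0)
base_map = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)  # e.g. Tacoma layout
donor = rng.integers(100, 200, (64, 64, 3), dtype=np.uint8)   # e.g. Beijing imagery
fake = style_stats_transfer(base_map, donor)
```

The result keeps the base map's spatial structure while adopting the donor's overall tone, which is the essence of the trait-transfer idea, though real GAN output also resynthesizes textures such as rooftops and vegetation.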
The study was not intended to alarm anyone but rather to show the risks and opportunities involved in applying this rather infamous technology to cartography. For instance, among the positive uses for deepfaked satellite snapshots, simulating locations from the past can help us understand climate change, study urban sprawl or predict how a region will evolve by filling in blanks. It could conceivably even be used to make realistic-looking modern maps from ancient hand-drawn ones.
However, AI-created fakes could also be used for misinformation. With the prevalence of geographic information systems, Google Earth and other satellite imaging systems, location spoofing now involves far greater sophistication, the researchers say, and carries with it more risks. And as the director of the National Geospatial-Intelligence Agency, the organisation charged with supplying maps and analysing satellite images for the US Department of Defense, implied in 2019, AI-manipulated satellite images can pose a severe national security threat.
A hostile country could send falsified images to mislead military strategists - they might not notice a missing building or bridge that could be a valuable target. Fakes could also be used for political aims, such as hiding evidence of atrocities or suppressing climate science.
Therefore, the researchers hope their work will help develop a system to catch satellite deepfakes, just as early work already exists to spot fakes depicting people.
Notably, it might be a race against time - it didn't take long for early deepfake tech to escape from academia into the real world, and that might well happen again.