15/05/2021


Report reveals threats of Deepfake Satellite Images

Deepfake, AI-generated satellite images can pose a threat to nations and agencies worldwide, a team of researchers warns. These bogus images might be used to create hoaxes ranging from natural disasters to other fabricated news, and may even mislead governments into international conflicts.

A deepfake, a combination of “deep learning” and “fake,” is synthetic media: AI-generated photo or video content, often created to fool the content consumer. Although such content is sometimes presented as a lighthearted joke, for instance when a TikTok user impersonated Tom Cruise, deepfakes can also cause problems of varying severity when used maliciously.

The Guardian reports that this sort of false visual content is predominantly used for adult content, for instance to map a female celebrity’s face onto an adult actor. It is also used to spread false news or to scam individuals and businesses. In addition to falsifying existing material, deepfakes can create a non-existent person’s profile from scratch, which can then be used for spying or other deceitful or illegal purposes.


In August 2020, PetaPixel reported on the negative impact this sort of manipulated media can have on both the celebrities and the businesses being impersonated. It has also been acknowledged that detecting and keeping up with deepfake technology is a costly and difficult process for any research group prepared to tackle it.

However, deepfakes now also present a threat to nations and security agencies in the form of false and misleading satellite imagery, as first reported by The Verge. Bogus satellite images might be used to create hoaxes about natural disasters or to back up false news. They could also “be a national security issue, as geopolitical adversaries use fake satellite imagery to mislead foes.”

A recent study led by University of Washington researchers examined this concern and “its potentials in transforming the human perception of the geographic world.” The study points out that while deepfake detection in general has advanced to some extent, there are no methods designed specifically for detecting false satellite images.

The team simulated their own deepfake using Tacoma, Washington as the base map and placed onto it features extracted from Seattle, Washington, and Beijing, China. The high-rises transferred from Beijing cast shadows in the fake satellite image, while low-rise buildings and greenery were superimposed from the urban landscape of Seattle.
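The published write-ups suggest the researchers relied on deep-learning-based image-to-image translation to render the visual characteristics of Seattle and Beijing onto the Tacoma base map. As a rough illustration of that “base map plus transferred features” idea only, the Python snippet below composites hypothetical patches onto a base tile with Pillow; the file names and coordinates are invented, and simple compositing is not the study’s actual method.

```python
# Crude illustration only: the "base map plus transferred features" idea,
# sketched as simple image compositing with Pillow. The researchers' own
# pipeline reportedly uses GAN-based image-to-image translation, not
# cut-and-paste; every file name and coordinate below is hypothetical.
from PIL import Image

# Hypothetical local tiles of the three cities involved.
base = Image.open("tacoma_basemap.png").convert("RGB")
beijing = Image.open("beijing_highrises.png").convert("RGBA")
seattle = Image.open("seattle_lowrise_greenery.png").convert("RGBA")

# Paste a patch of Beijing high-rises (with their shadows) onto the base map.
# The patch's alpha channel decides which pixels overwrite the base.
base.paste(beijing, (120, 80), mask=beijing)

# Paste Seattle-style low-rise buildings and greenery elsewhere on the tile.
base.paste(seattle, (400, 260), mask=seattle)

base.save("tacoma_fake_composite.png")
```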

The team explains that anyone unacquainted with this sort of technology would struggle to differentiate between real and fake results, especially because odd details or colors are easily attributed to the poor image quality often found in satellite imagery. Instead, the researchers note that to spot fakes, the images can be examined in terms of their color histograms and their spatial- and frequency-domain characteristics.
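The study describes these checks only at a high level. As a minimal sketch of how such an examination might look in practice, the Python snippet below compares a suspect tile against a trusted reference tile using per-channel color histograms and a radially averaged frequency spectrum; the file names, bin counts, and the single-reference setup are assumptions for illustration, not the researchers’ pipeline.

```python
# Minimal sketch: compare a suspect satellite tile against a trusted reference
# tile using color histograms and a frequency-domain (FFT) summary.
# Assumes Pillow and NumPy are installed; file names are purely illustrative.
import numpy as np
from PIL import Image

def color_histogram(img, bins=32):
    """Per-channel RGB histogram, normalized so tiles of different sizes compare."""
    arr = np.asarray(img.convert("RGB"), dtype=np.float64)
    hists = [np.histogram(arr[..., c], bins=bins, range=(0, 255), density=True)[0]
             for c in range(3)]
    return np.concatenate(hists)

def frequency_profile(img, bins=32):
    """Radially averaged magnitude spectrum; generated imagery often shows
    unusual high-frequency energy compared with real captures."""
    gray = np.asarray(img.convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    yy, xx = np.indices((h, w))
    radius = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.linspace(0, radius.max(), bins + 1)
    profile = [spectrum[(radius >= lo) & (radius < hi)].mean()
               for lo, hi in zip(edges[:-1], edges[1:])]
    return np.log1p(np.array(profile))

def compare(reference_path, suspect_path):
    ref, sus = Image.open(reference_path), Image.open(suspect_path)
    hist_dist = np.linalg.norm(color_histogram(ref) - color_histogram(sus))
    freq_dist = np.linalg.norm(frequency_profile(ref) - frequency_profile(sus))
    return hist_dist, freq_dist

if __name__ == "__main__":
    # Hypothetical file names; large distances only flag a tile for closer review.
    h, f = compare("tacoma_reference.png", "tacoma_suspect.png")
    print(f"color-histogram distance: {h:.3f}, frequency-profile distance: {f:.3f}")
```

In practice, distances like these would only flag a tile for closer human review or feed a trained classifier, rather than deliver a verdict on their own.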

The lead author of the study, Bo Zhao, explains that the goal was to raise public awareness of technology that can be used to misinform and to encourage precautions, in the hope that the study will spur the development of systems that can identify fake satellite images among real ones.

“As technology continues to evolve, this study aims to encourage a more holistic understanding of geographic data and information, so that we can demystify the question of absolute reliability of satellite images or other geospatial data,” Zhao explained to UW News.

AI-generated images could create chaos and losses for security agencies and strategists, but the researchers also point out that AI-generated satellite images are used for positive purposes, too. For instance, the technology can help simulate locations from the past to study climate change or unrestricted growth in urban areas, known as conurbation, or to project how an area may develop in the future.