Australian Outlook

AI Erasure and Its Implications for Australian National Security  

21 Mar 2025
Guido Melo

What if an unpredictable leader chose to block Australian access to ChatGPT? Or if the owner of a major social media platform decided to rewrite historical narratives to align with corporate interests? Without robust frameworks to safeguard digital sovereignty, Australia could find itself exposed to manipulation, disinformation, and significant national security risks.

The concept of digital sovereignty, which refers to a nation’s control over its digital infrastructure, has emerged as a pivotal issue in global security discourse. For Australia, navigating this challenge requires, among other things, addressing the phenomenon of artificial intelligence (AI) Erasure. AI Erasure, a concept that arose from my research thesis, highlights the risks posed by the use or misuse of AI technology to alter, reshape, misrepresent, or selectively share human history and records. Individuals, governments, and institutions, both private and public, can do this to suppress a historical fact or change how it is perceived.

It is undeniable that AI has emerged as both a transformative and a disruptive force in an era of rapid technological advancement. AI tools allow virtually anyone to create, modify, and manipulate digital content. The risks of AI Erasure arise from the ease with which this can be done.

The practice of erasure for political domination is not new; it is deeply rooted in history. From the Romans enforcing Damnatio Memoriae, a decree that obliterated individuals from monuments, to Stalinist Russia’s systematic removal of political rivals from photographs, regimes have long sought to control narratives. More recently, China’s censorship of the 1989 Tiananmen Square Massacre exemplifies efforts to reshape public perception and collective memory.

AI Erasure can reshape public perception in ways that could destabilise Australia’s relations both domestically and internationally, as digital manipulation threatens historical integrity, national security, and public trust. AI-driven misinformation could distort political discourse, marginalise Indigenous and minority histories, and weaken democratic institutions. Without robust policy safeguards, Australia is vulnerable to AI-generated disinformation campaigns, undermining both its international credibility and social cohesion.

So, what is AI Erasure, and how can this modern phenomenon impact Australian national security? 

What if our biggest worry were not how AI might shape our future, but how it could rewrite our past?

Large Language Models (LLMs) learn by processing vast datasets, predicting patterns, and refining their parameters based on probabilities. When an AI model generates text, it calculates the most likely sequence of words based on its training data. Therefore, if the data used to train an LLM is incomplete, biased, or intentionally altered, the model reflects that distortion. AI Erasure occurs when historical records, cultural knowledge, or contested narratives are systematically excluded, altered, or removed from that data. This can happen by accident or by design. In democratic societies, it can happen through gaps in data collection, biased algorithms, or the influence of powerful stakeholders; in authoritarian contexts, it may additionally be used as a deliberate tactic.
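To make the mechanism concrete, here is a toy sketch, purely illustrative and not drawn from any real system, of how next-word statistics inherit whatever their training data contains or omits. The corpora and function names are invented for this example; real LLMs train on billions of documents, but the principle is the same.

```python
from collections import Counter, defaultdict

# Two invented toy corpora: one intact, one with an event rewritten.
complete_corpus = "the massacre occurred in 1989 . the protest occurred in 1989 ."
redacted_corpus = "the gathering occurred peacefully . the gathering occurred peacefully ."

def train_bigram(text):
    """Count word-pair frequencies: the crudest form of next-token statistics."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(model, word):
    """Return the highest-probability continuation, as a language model would."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

for name, corpus in [("complete", complete_corpus), ("redacted", redacted_corpus)]:
    model = train_bigram(corpus)
    print(name, "->", most_likely_next(model, "the"))
# Prints: complete -> massacre, redacted -> gathering.
# A model trained on redacted data cannot recall what was never there.
```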

Technology developers address AI Erasure challenges by fine-tuning models on targeted data to overwrite unwanted knowledge, employing gradient ascent techniques to reverse learning on specific data points, and using representation misdirection to disrupt the neural activations associated with the information to be erased.
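As a rough illustration of the gradient ascent approach, the sketch below trains a tiny stand-in network while pushing its loss upward on a "forget" set. Everything here, the model, the random data, and the 0.5 weighting, is a hypothetical choice for the sketch; production unlearning pipelines are far more involved.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny stand-in for a large model; architecture and sizes are arbitrary.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.05)

retain_x, retain_y = torch.randn(32, 4), torch.randint(0, 2, (32,))  # knowledge to keep
forget_x, forget_y = torch.randn(8, 4), torch.randint(0, 2, (8,))    # knowledge to erase

for _ in range(100):
    opt.zero_grad()
    keep_loss = loss_fn(model(retain_x), retain_y)    # descend: preserve this data
    forget_loss = loss_fn(model(forget_x), forget_y)  # ascend (the minus sign): unlearn this data
    (keep_loss - 0.5 * forget_loss).backward()
    opt.step()

print(f"retain loss: {loss_fn(model(retain_x), retain_y).item():.3f}")  # should stay low
print(f"forget loss: {loss_fn(model(forget_x), forget_y).item():.3f}")  # should climb
```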

Australia, like much of the world, is not fully prepared to address this challenge. Domestically, the conversation around AI Erasure is still emerging. While the country is developing regulatory frameworks for AI governance, there is little discourse on how AI might be reshaping public memory, marginalising Indigenous knowledge, or distorting political history.

AI Erasure could have direct implications for Australia’s national security, public policy, and international diplomacy. AI’s ability to alter, manipulate, or systematically erase historical records and narratives poses an existential challenge to truth, accountability, and governance. The deliberate manipulation of digital records, whether through AI-generated misinformation, deepfakes, or selective erasure, can, for example, destabilise public trust in institutions.

Consider an example from the Slovak elections. In late 2023, deepfake audio of the progressive, pro-Western candidate Michal Šimečka spread rapidly online. The synthetic voice convincingly claimed he intended to raise alcohol prices and rig the election. The disinformation was so effective that even fact-checkers were dismissed as biased, further eroding trust in democratic processes. The incident exposed a major weakness in digital information governance: traditional fact-checking mechanisms on platforms like Facebook struggle to contain misinformation, particularly during high-stakes events such as elections. The example shows how AI-generated deepfakes and misinformation add complexity to security concerns, offering foreign actors the means to undermine sovereignty and security by targeting vulnerable communities, fostering distrust, and weakening institutional integrity. Australia has not been spared: deepfake technology has recently targeted Australian politicians, with fabricated videos of Foreign Minister Penny Wong and Finance Minister Katy Gallagher circulating in online investment scams.

For Australia, the stakes are clear: failing to address AI Erasure could mean losing control over its own historical record, weakening its policy integrity, and becoming vulnerable to foreign disinformation campaigns. A robust national strategy for AI governance must be a priority if Australia is to safeguard its democratic institutions.

Uncertain Future 

As the geopolitical landscape grows increasingly uncertain, marked by erratic US foreign policy decisions and European concerns over strategic autonomy, intentional policies that build greater trust, even among adversaries such as China, are imperative. A multilateral approach to cooperation could provide stability in this evolving order. However, as of early 2025, Australia and China have yet to establish any formal agreement on AI, highlighting a critical gap in international governance.

In early 2024, the Australian Senate established the Select Committee on Adopting Artificial Intelligence (AI) to examine the opportunities and impacts arising from AI adoption in Australia. One of its main focuses is combating potential threats posed by AI-generated misinformation and disinformation, particularly concerning electoral integrity. This must remain a priority if Australia is to develop policies that counter the risks associated with AI Erasure, especially when leaders can revoke years of AI safety policy at a stroke, as President Trump did with an executive order.

The human tendency to alter and erase the past has intensified in the digital age, with AI making this easier and more pervasive than ever. AI Erasure is not just a national security concern for Australia but a global challenge. Without strong safeguards and forward-thinking policies, even the most resilient democracies, including ours, risk being gradually undermined and erased.

Guido Melo is a researcher, writer, and published author. He is a student at the Institute of Sustainable Industries & Liveable Cities (ISILC) at Victoria University in Melbourne. His work spans Australia, Brazil, the United States, Colombia, and Africa, focusing on AI Erasure and biases in artificial intelligence.