
Australian government bans DeepSeek: National security déjà vu

14 February 2025
Antoine Pace, Partner, Melbourne

After the explosive debut of generative AI models, particularly in the United States, a new contender has recently entered the AI landscape.

Despite only being founded in 2023, DeepSeek, a Chinese tech company, has developed an artificial intelligence (AI) model that has drawn significant attention from users, governments and financial markets alike. DeepSeek has been seen as a disrupter in a market of disrupters, with a powerful large language model that rivals ‘traditional’ competitors despite being developed at a fraction of the cost. However, this innovative tech has caused commotion globally, with concerns about data security, the hosting of personal information in China, compliance with relevant privacy laws, and censorship.

Following crackdowns by governments in Italy and Taiwan, and by U.S. agencies such as NASA and the Pentagon, a number of Australian governments at both State and Federal level have followed suit with bans of their own. For instance, on 4 February 2025, the Commonwealth Department of Home Affairs ordered all non-corporate Commonwealth entities to delete DeepSeek from government devices and to stop using it altogether.

Same story, different app

As with the TikTok ban that dates back to 2023, the Australian government is citing national security risks as the reason for DeepSeek’s removal. In a directive from the Department of Home Affairs, Secretary Stephanie Foster pointed to an ‘unacceptable level of security risks’ to the Australian government.[1]

We’ve taken a look at DeepSeek’s Terms of Use and Privacy Policy, and we can see why! Here are some of the particularly risky terms we’ve identified:

  1. References to ‘automatic’ collection of extensive user data, including IP addresses, unique device identifiers, keystroke patterns and rhythms, system language, payment data, and information from third-party sources, all of which would be stored in China, raising concerns about cross-border disclosure and compliance with Australian privacy laws.[2]
  2. Broad rights over user data, including tracking usage across devices, training AI models with user data, and sharing it with third parties (including public authorities and law enforcement, presumably based in China).[3]
  3. Broad references to services being provided by the ‘corporate group’, which will process information provided by users and automatically collect further information.
  4. A requirement that the Terms of Use be governed by the laws of the People’s Republic of China and that any disputes be litigated in China, posing major challenges for Australian governments, businesses and individuals, as enforcement options there are weaker and difficult to navigate.
  5. Broad liability disclaimers by DeepSeek, and minimal contractual assurances regarding privacy or security.

Suffice to say that the authors will not be installing the DeepSeek app on their devices any time soon!

What does all this mean for local businesses and individuals in Australia?

The fact that the Department of Home Affairs and other governments have taken such swift and definitive action should also raise alarm bells for private users. We recommend that all businesses and other users in Australia tread carefully.

Of course, what we say about DeepSeek can also apply to other AI models. Recently, Google removed its long-standing prohibition against using AI for weapons and surveillance systems, marking a significant shift in the company’s ethical stance on AI development, and OpenAI moved from not-for-profit to for-profit status, changing the playing field for its AI model.

We would suggest that users take care and take steps to reduce or manage their risk profile. For instance:

  1. Do your due diligence on AI tools: review terms of service, privacy policies, and data security practices before adopting any AI solution. Conduct a review to make sure you understand how the technology functions and what security measures are in place.
  2. Assess data privacy risks: consider how user data are collected, stored, and shared, particularly with parties or governments in foreign jurisdictions.
  3. Be careful with sensitive data: avoid inputting confidential or personal information into AI platforms if there is a chance that those AI platforms retain or reuse data. Consider AI usage policies and restrictions for your employees, contractors and suppliers.
  4. Monitor regulatory changes: governments are tightening AI-related security and privacy regulations, and probably for good reason.
  5. Repeat steps 1 to 4 regularly, and stay informed of changes in the AI landscape, or changes in your AI operator.

The DeepSeek ban is another reminder of the balance between the benefits and dangers of emerging technology. As AI evolves, so do the risks. The AI industry is changing rapidly: what seemed implausible only a month ago is now reality, and ethical or legal guardrails imposed by AI operators, and even by governments such as the United States, can seemingly be changed at the stroke of a pen.

If you found this insight article useful and you would like to subscribe to Gadens’ updates, click here.


Authored by:
Antoine Pace, Partner
Eve Lillas, Senior Associate
Maria Wu, Seasonal Clerk


[1] Direction 001-2025 DeepSeek Products, Applications and Web Services

[2] DeepSeek Privacy Policy

[3] DeepSeek Privacy Policy

This update does not constitute legal advice and should not be relied upon as such. It is intended only to provide a summary and general overview on matters of interest and it is not intended to be comprehensive. You should seek legal or other professional advice before acting or relying on any of the content.
