How to Back Up Kubernetes Cluster Resources as YAML Files

Kubernetes cluster administrators are sometimes tasked with saving the resource configuration of a namespace and transferring it to another cluster, or with backing up an unstable test environment. A one-line kubectl script typed in the terminal handles this without any problems, but what if you are tired of spending a couple of minutes rewriting that script every time? That is how the kube-dump utility appeared: a utility that does exactly one thing, dump cluster resources.

An easy way to save Kubernetes deployment YAML without metadata

With this utility, you can save Kubernetes cluster resources as clean YAML manifests, stripped of unnecessary metadata.
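To illustrate what "unnecessary metadata" means, here is a minimal sketch in plain shell: it strips a few volatile fields (uid, resourceVersion, creationTimestamp) from a manifest with grep. This is only an illustration; kube-dump itself does this cleanup more robustly with yq, and its exact field list may differ.

```shell
# Write a sample manifest as it would come back from `kubectl get -o yaml`.
cat > /tmp/kube-dump-demo.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo
  namespace: dev
  uid: 9bf1e2c1-0000-0000-0000-000000000000
  resourceVersion: "12345"
  creationTimestamp: "2021-01-01T00:00:00Z"
data:
  greeting: hello
EOF

# Drop the fields that change on every cluster import
# (illustrative list; kube-dump uses yq with a fuller one).
grep -vE '^[[:space:]]*(uid|resourceVersion|creationTimestamp):' \
  /tmp/kube-dump-demo.yaml > /tmp/kube-dump-demo.clean.yaml

cat /tmp/kube-dump-demo.clean.yaml
```

The cleaned manifest keeps the name, namespace, and data, and can be applied to another cluster without conflicts over server-assigned fields.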

Key features:

  • Only resources to which you have read access are saved.
  • You can pass a list of namespaces as input; otherwise, all namespaces available to your context are used.
  • Both namespaced resources and cluster-wide resources are saved.
  • You can run the utility locally as a regular script, in a container, or inside a Kubernetes cluster, for example as a CronJob.
  • It can create archives and rotate them afterwards.
  • It can commit the state to a git repository and push it to a remote.
  • You can specify an explicit list of cluster resources to dump.

Configuration is passed to the utility via command-line arguments, exported environment variables, or an .env file.
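An .env file might look like the sketch below. The variable names here are assumptions based on the project's example configuration; confirm the exact names with --help or the project README before using them.

```
# Hypothetical .env for kube-dump (verify names against the README)
MODE=dump-namespaces        # what to dump
NAMESPACES=dev,prod         # comma-separated namespace list
DESTINATION_DIR=/dump       # where the YAML tree is written
GIT_COMMIT=true             # commit each dump
GIT_PUSH=true               # push to the remote after committing
GIT_BRANCH=my-cluster       # branch used for this cluster's dumps
```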

Example: running the utility to save all resources of one namespace

Getting started

At a minimum, you need kubectl installed and connected to your cluster, plus the jq and yq utilities. More details are available on the local-run documentation page; the command-line arguments are described in the documentation and are also available via the --help argument.

Let’s run the utility to save all cluster resources:

./kube-dump dump

If you don’t want to deal with installing dependencies, you can use the container image. Working with the container is described in more detail in a separate article.

Let’s pull a fresh image and run the utility to save the dev and prod namespaces into the mounted /dump directory, passing our kubectl configuration file into the container so it can access the cluster:

docker pull woozymasta/kube-dump:latest
docker run --tty --interactive --rm \
--volume $HOME/.kube:/.kube \
--volume $HOME/dump:/dump \
woozymasta/kube-dump:latest \
dump-namespaces -n dev,prod -d /dump --kube-config /.kube/config

Installing the CronJob in a cluster

Let’s consider a more complex example, where the container runs in the cluster as a CronJob that dumps resources every night, commits the changes to a git repository, and then pushes them to the remote. This example is described in detail in the article.

This example assumes you have permission to manage cluster roles, because we will use a ServiceAccount and the standard view role. If the global view role does not suit you, you can create your own role for a namespace or for the whole cluster, using this example as a starting point.

Let’s create a namespace where our CronJob will run, and a ServiceAccount bound to the view role through a ClusterRoleBinding:

kubectl create ns kube-dump
kubectl -n kube-dump apply -f \
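The manifest applied above (its URL is not shown here) would contain roughly the following: a ServiceAccount in the kube-dump namespace and a ClusterRoleBinding granting it the built-in view ClusterRole. The resource names below are a sketch and may differ from the project's actual manifest.

```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dump
  namespace: kube-dump
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-dump-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: kube-dump
    namespace: kube-dump
```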

As an example, we will authorize against GitLab with an OAuth token, so we will create a secret containing the repository address and the authorization token:

kubectl -n kube-dump create secret generic kube-dump \
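Equivalently, the secret can be expressed as a manifest. The key name and URL format below are illustrative assumptions; match them to whatever the CronJob manifest actually expects.

```
apiVersion: v1
kind: Secret
metadata:
  name: kube-dump
  namespace: kube-dump
type: Opaque
stringData:
  # Key name is hypothetical; GitLab accepts an OAuth token embedded in the URL.
  GIT_REMOTE_URL: https://oauth2:<token>@gitlab.com/<group>/<repo>.git
```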

Before installing, configure the environment variables to suit your needs. By default, the example dumps the dev and prod namespaces, then commits the changes to the my-cluster branch and pushes them to the remote repository.

Let’s set up the CronJob, specifying how often the task should run:

schedule: "0 1 * * *"
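In cron syntax, "0 1 * * *" means every night at 01:00. The schedule line sits inside a CronJob spec along the lines of the skeleton below; this is a sketch (the project ships a complete example manifest), and the container image and secret wiring shown are assumptions based on the earlier steps.

```
apiVersion: batch/v1
kind: CronJob
metadata:
  name: kube-dump
  namespace: kube-dump
spec:
  schedule: "0 1 * * *"        # every night at 01:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: kube-dump   # the account bound to the view role
          restartPolicy: OnFailure
          containers:
            - name: kube-dump
              image: woozymasta/kube-dump:latest
              envFrom:
                - secretRef:
                    name: kube-dump       # repository address and token
```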

Alternatively, install the example as-is and edit it afterwards:

kubectl -n kube-dump apply -f \
kubectl -n kube-dump edit cronjobs.batch kube-dump

Plans for further development

  • Implement sending dumps to S3-compatible storage;
  • Sending notifications via email and webhook;
  • Git-crypt for encrypting sensitive data;
  • Autocompletion in Bash/Zsh;
  • OpenShift support.

I will also be glad to receive your comments and suggestions, as well as ideas and criticism.


DevOps engineer
