Batch processing using Cron Jobs: automated MySQL backup into another MySQL database on OpenShift/K8s

Daniel Izquierdo
2 min read · Mar 24, 2021

The other day I wrote an article about how to create a backup file of a MySQL database and save it in the pod where that database is running.

Now I want to update that post by automating the backup of our principal database and replicating all of its data automatically into a secondary database.

To do so, I created another database with persistent storage, taking advantage of the predefined templates in OpenShift, and gave it the same database name as the principal database.
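For reference, the same database can also be created from that template on the command line. Here is a sketch; the service name and credentials are placeholders chosen to match the rest of this post, and the template parameter names are those of the standard OpenShift mariadb-persistent template:

```shell
oc new-app mariadb-persistent \
  --name=mariadbbackup \
  -p DATABASE_SERVICE_NAME=mariadbbackup \
  -p MYSQL_USER=userbackup \
  -p MYSQL_PASSWORD=userbackuppass \
  -p MYSQL_DATABASE=dbname
```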

Once I have my backup database created, the only thing that differs from the CronJob, Role, RoleBinding and ServiceAccount created in my last post is the command executed in the CronJob.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mariadump
  namespace: my-namespace
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: mariadbdumpsa
          containers:
            - name: kubectl
              image: garland/kubectl:1.10.4
              command:
                - /bin/sh
                - -c
                - kubectl exec $(kubectl get pods | grep Running | grep 'mariadb-' | awk '{print $1}') -- sh /opt/app-root/src/backup/proceso.sh; echo 'Backup Process Complete.'
          restartPolicy: OnFailure
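Because the CronJob runs `kubectl` inside the cluster, the `mariadbdumpsa` service account it references needs permission to list pods and open an exec session. A minimal sketch of the ServiceAccount, Role and RoleBinding from the previous post could look like this (the object names other than `mariadbdumpsa` are placeholders, and the exact rule set is my assumption):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mariadbdumpsa
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mariadbdump-role
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mariadbdump-rolebinding
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: mariadbdumpsa
roleRef:
  kind: Role
  name: mariadbdump-role
  apiGroup: rbac.authorization.k8s.io
```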

Let me explain what we have done.

First we need to create a folder inside the pod of our principal database. We open a shell in the pod with the following command:

oc rsh $(kubectl get pods | grep Running | grep 'mariadb-' | awk '{print $1}')
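The `$(...)` substitution picks out the name of the running database pod. With some hypothetical `kubectl get pods` output, the pipeline behaves like this:

```shell
# Hypothetical `kubectl get pods` output; only the Running mariadb pod's
# name (the first column) should survive the pipeline.
sample='NAME                READY   STATUS      RESTARTS   AGE
mariadb-1-deploy    0/1     Completed   0          2d
mariadb-1-x2x9k     1/1     Running     0          2d
myapp-3-7fkq2       1/1     Running     0          1d'

# grep keeps only Running mariadb pods; awk prints the name column.
pod=$(echo "$sample" | grep Running | grep 'mariadb-' | awk '{print $1}')
echo "$pod"   # mariadb-1-x2x9k
```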

Once inside the pod we execute:

$ mkdir backup
$ chmod 755 backup
$ cd backup
$ pwd

We obtain the following output:

/opt/app-root/src/backup/

Inside this folder we will create a shell script called proceso.sh with this content:

/opt/rh/rh-mariadb102/root/usr/bin/mysqldump -h 127.0.0.1 -P 3306 -u userdb --password=userdbpass --opt dbname | mysql --host=mariadbbackup -u userbackup --password=userbackuppass -C dbname

Then we make the script executable:

chmod 755 proceso.sh
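As a side note, a slightly more defensive version of proceso.sh could abort loudly when the dump fails, so the CronJob reports the failure instead of silently importing an empty dump. This is just a sketch with the same hosts and credentials as above; it assumes bash is available in the image, since `pipefail` is not part of POSIX sh:

```shell
#!/bin/bash
# Fail the script if any command in the pipeline fails; without pipefail,
# a failed mysqldump would be masked by a successful (empty) mysql import.
set -euo pipefail

/opt/rh/rh-mariadb102/root/usr/bin/mysqldump -h 127.0.0.1 -P 3306 \
  -u userdb --password=userdbpass --opt dbname \
  | mysql --host=mariadbbackup -u userbackup --password=userbackuppass -C dbname
```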

With this file in place, we can now understand how our CronJob works.

kubectl exec $(kubectl get pods | grep Running | grep 'mariadb-' | awk '{print $1}') -- sh /opt/app-root/src/backup/proceso.sh; echo 'Backup Process Complete.'

The only thing our CronJob does is execute this shell script inside the pod, and the script automatically replicates the data into the secondary database.
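To check that the replication worked, we can list the tables in the secondary database after a backup run. A sketch, assuming the backup pods are named with a `mariadbbackup-` prefix and using the same credentials as above:

```shell
# List the tables replicated into the secondary database
oc rsh $(kubectl get pods | grep Running | grep 'mariadbbackup-' | awk '{print $1}') \
  mysql -u userbackup --password=userbackuppass -e 'SHOW TABLES;' dbname
```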

Thanks a lot!
