Batch processing using Cron Jobs. Automated MySQL backup into another MySQL database on OpenShift/K8s.

Daniel Izquierdo
2 min read · Mar 24, 2021


The other day I wrote an article about how to create a backup file of a MySQL database and save it in the pod where that database is running.

Now I want to update that post by automating the backup of our primary database and replicating all of its data automatically into a secondary database.

To do so, I created another database with persistent storage, taking advantage of the predefined templates in OpenShift, and gave it the same database name as the primary database. You can see it in the following picture:
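As a sketch, the secondary database could be instantiated from the predefined mariadb-persistent template roughly like this; the service name, user, password and database name are the same placeholders used later in this post, so adjust them to your cluster:

```shell
# Sketch: create the secondary (backup) MariaDB from the OpenShift template.
# All parameter values are illustrative placeholders matching the rest of
# this post; VOLUME_CAPACITY sizes the persistent volume claim.
oc new-app --template=mariadb-persistent \
  -p DATABASE_SERVICE_NAME=mariadbbackup \
  -p MYSQL_USER=userbackup \
  -p MYSQL_PASSWORD=userbackuppass \
  -p MYSQL_DATABASE=dbname \
  -p VOLUME_CAPACITY=1Gi
```

The DATABASE_SERVICE_NAME becomes the hostname (mariadbbackup) that the dump command connects to later.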

Once I have my backup database created, the only thing that differs from the CronJob, Role, RoleBinding and ServiceAccount created in my last post is the command executed in the CronJob.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mariadump
  namespace: my-namespace
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: mariadbdumpsa
          containers:
          - name: kubectl
            image: garland/kubectl:1.10.4
            command:
            - /bin/sh
            - -c
            - kubectl exec -it $(kubectl get pods | grep Running | grep 'mariadb-' | awk '{print $1}') -- sh /opt/app-root/src/backup/proceso.sh;echo 'Backup Process Complete.'
          restartPolicy: OnFailure
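For completeness, the ServiceAccount and RBAC objects referenced above (created in the previous post) could be sketched with oc like this; the role and binding names here are hypothetical:

```shell
# Sketch: ServiceAccount plus RBAC so the CronJob's container can list pods
# and run `kubectl exec`. Note the verbs apply to both resources, which is
# slightly broader than the strict minimum. Role/binding names are made up.
oc create serviceaccount mariadbdumpsa -n my-namespace
oc create role mariadbdump-role \
  --verb=get,list,create \
  --resource=pods --resource=pods/exec \
  -n my-namespace
oc create rolebinding mariadbdump-rb \
  --role=mariadbdump-role \
  --serviceaccount=my-namespace:mariadbdumpsa \
  -n my-namespace
```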

Let me explain what we have done.

First we need to create a folder inside the pod of our primary database by typing the following command:

oc rsh $(kubectl get pods | grep Running | grep 'mariadb-' | awk '{print $1}')

Once inside the pod we execute:

$ mkdir backup
$ chmod 755 backup
$ cd backup
$ pwd

We obtain the following response:

/opt/app-root/src/backup

Inside this folder we will create a shell script called proceso.sh with this content:

/opt/rh/rh-mariadb102/root/usr/bin/mysqldump -h 127.0.0.1 -P 3306 -u userdb --password=userdbpass --opt dbname | mysql --host=mariadbbackup -u userbackup --password=userbackuppass -C dbname
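As a sketch, a slightly hardened variant of proceso.sh could fail loudly so that the CronJob's restartPolicy: OnFailure actually retries on error (same placeholder hosts and credentials as above):

```shell
#!/bin/sh
# Hypothetical hardened proceso.sh (placeholder hosts/credentials as above).
# `set -e` aborts the script on the first failing command. Caveat: in a
# plain POSIX sh pipe, only the exit status of the last command (mysql)
# is checked, so a mysqldump failure on the left side is not caught here.
set -e

/opt/rh/rh-mariadb102/root/usr/bin/mysqldump \
  -h 127.0.0.1 -P 3306 \
  -u userdb --password=userdbpass \
  --opt dbname \
| mysql --host=mariadbbackup -u userbackup --password=userbackuppass -C dbname

echo "dbname replicated to mariadbbackup"
```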

Then we give execute permissions to this file:

chmod 755 proceso.sh

With this file created, we can now understand how our CronJob works.

kubectl exec -it  $(kubectl get pods | grep Running | grep 'mariadb-' | awk '{print $1}') -- sh /opt/app-root/src/backup/proceso.sh;echo 'Backup Process Complete.'
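The interesting part is the $( ... ) pipeline that locates the database pod. Its text processing can be seen in isolation by feeding it a simulated kubectl get pods listing (the pod names here are made up):

```shell
# Simulated `kubectl get pods` output (hypothetical pod names):
PODS='NAME               READY   STATUS      RESTARTS   AGE
mariadb-1-abcde    1/1     Running     0          3d
mariadb-1-deploy   0/1     Completed   0          3d
other-app-xyz      1/1     Running     0          1d'

# grep Running      -> keep only running pods
# grep 'mariadb-'   -> keep only the database pod
# awk '{print $1}'  -> first column: the pod name
POD=$(printf '%s\n' "$PODS" | grep Running | grep 'mariadb-' | awk '{print $1}')
echo "$POD"    # -> mariadb-1-abcde
```

That pod name is then passed to kubectl exec, which runs proceso.sh inside the database pod.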

The only thing our CronJob does is execute this shell script inside the pod; the script then automatically replicates the data into the secondary database.

Thanks a lot!
