I have a Python script that backs up my MySQL database to an Amazon S3 bucket every hour. The script simply calls mysqldump to create the dump and then uploads it to the S3 bucket using tinys3. Note that I set lock-tables to false so that transactions from other applications are not blocked.
Here is the script for reference:
import tinys3
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_project.settings")
application = get_wsgi_application()
from django.utils import timezone
import pytz
import datetime
import json
timezone.activate(pytz.timezone("Asia/Kolkata"))
current_datetime = timezone.localtime(
datetime.datetime.utcnow().replace(tzinfo=pytz.utc)
)
dir_struct = '/'.join(current_datetime.strftime("%Y-%m-%d-%H-%M-%S").split('-'))
endpoint = 's3-us-west-2.amazonaws.com'
params = json.load(open('buckets.json'))
S3_ACCESS_KEY = params['S3_ACCESS_KEY']
S3_SECRET_KEY = params['S3_SECRET_KEY']
bucket = params['mysql']
db_name = params['db_name']
# Dump the database without locking tables, then zip the dump
mysql_command = 'sudo mysqldump --defaults-file=/home/ubuntu/.my.cnf --lock-tables=false %s > /home/ubuntu/%s.sql' % (db_name, db_name)
compress_command = "zip -r /home/ubuntu/%s.sql.zip /home/ubuntu/%s.sql" % (db_name, db_name)
delete_command = "sudo rm -rf /home/ubuntu/%s.sql*" % db_name
os.system(mysql_command)
os.system(compress_command)
# Upload the zipped dump to S3, then remove the local files
backup_file = open('/home/ubuntu/%s.sql.zip' % db_name, 'rb')
conn = tinys3.Connection(S3_ACCESS_KEY, S3_SECRET_KEY, tls=True, endpoint=endpoint)
print conn.upload(
    (dir_struct + '%s.sql.zip' % db_name),
    backup_file,
    bucket,
    public=False
)
print conn.get((dir_struct + '%s.sql.zip' % db_name), bucket)
backup_file.close()
os.system(delete_command)
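One thing I am unsure about: os.system() ignores failures, so a mysqldump or zip that dies (e.g. out of disk or memory) would go unnoticed and the script would carry on. Here is a minimal sketch of how I could run the same shell steps with subprocess instead, so a non-zero exit status is reported instead of being swallowed (the commands and the helper name run_backup_step are illustrative, not my actual setup):

```python
import subprocess

def run_backup_step(command, step_name):
    # check_call raises CalledProcessError on a non-zero exit status,
    # so a failing mysqldump or zip would stop the pipeline instead of
    # silently uploading an empty or partial dump.
    try:
        subprocess.check_call(command, shell=True)
        return True
    except subprocess.CalledProcessError as exc:
        print("step %s failed with exit status %d" % (step_name, exc.returncode))
        return False

# Illustrative placeholder commands, not my real mysqldump/zip calls:
if run_backup_step("echo dumping", "mysqldump"):
    run_backup_step("echo compressing", "zip")
```

With this, a failed dump would at least show up in the cron mail/log rather than passing silently.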
The problem is that when I start the cron job to run this script every hour, the server crashes after a few hours (say 5 to 7 hours). I haven't found any clear reason for this behaviour yet. What is the problem here? Is there a fault in this script, or is it something related to MySQL?