Need some MySQL help: backup/restore

Train

Lifer
Jun 22, 2000
13,861
68
91
www.bing.com
So I have a legacy app running on MySQL.

I have been using PhpMyAdmin to pull down backups of the database.

The problems are:
1) A MySQL backup is just an SQL script? dafuq?
2) Dumping large blob tables is next to impossible.

I could theoretically just pull the blobs off the production DB with an external app and dump them right into a file folder, since we're going to keep them separate from the DB going forward anyway. The only problem with that is I'd like to avoid running anything against the prod DB right now.

So questions:
1) Is there something more reliable than phpMyAdmin for pulling down DB backups? I have been using MySQL Workbench to run queries, but it freezes a lot and is an all-around clunker.
2) Restoring the script via the command line on a dev box (Linux) sometimes freezes up... again due to huge files... any better ideas? Should I split up the tables more?
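For the record, the "external app" idea above would boil down to something like this; table and column names here are made up, and it's a rough sketch, not what we actually run:

Code:
#!/bin/bash
# Sketch only: `documents` (id, data) is a hypothetical blob table.
# Note the batch client appends a newline to its output, so the files
# are not byte-perfect without extra care.
mkdir -p blobs
mysql -N -B -u backupuser -psecret mydb -e "SELECT id FROM documents" |
while read -r id; do
    # --raw writes the column value without escape conversion, so the blob
    # comes out as raw bytes instead of escaped text.
    mysql -N -B --raw -u backupuser -psecret mydb \
        -e "SELECT data FROM documents WHERE id = $id" > "blobs/${id}.bin"
done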
 

TechBoyJK

Lifer
Oct 17, 2002
16,701
60
91
Set up another MySQL DB and do real-time replication to it? Then do backups from that second database server?
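Something like this, roughly (classic pre-GTID setup; hosts, user names, and passwords below are placeholders):

Code:
# On the primary, my.cnf needs server-id=1 and log_bin=mysql-bin (then restart mysqld).
# Create a replication user on the primary:
mysql -u root -p -e "CREATE USER 'repl'@'replica_host' IDENTIFIED BY 'secret';
                     GRANT REPLICATION SLAVE ON *.* TO 'repl'@'replica_host';"

# Seed the replica with a consistent dump that records the binlog position:
mysqldump -u root -p --all-databases --single-transaction --master-data=1 | gzip > seed.sql.gz

# On the replica (server-id=2 in its my.cnf), load the seed and point it at the primary:
zcat seed.sql.gz | mysql -u root -p
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='primary_host', MASTER_USER='repl', MASTER_PASSWORD='secret'; START SLAVE;"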
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
1) Er, yes, that's perfectly normal. There are more bad things about SQL than I can name, but they did get some things right, and this was one of them.
2) Doing anything with large blob tables is next to impossible.

1) A cron job doing a dump, maybe?
2A) Is the import row by row, or batches of rows? If row by row, change the output so that it does it a table at a time, or in chunks of X rows at a time. If already doing that, try dropping indices, importing data, then recreating them.
2B) Verify that the import isn't disk-limited or network-limited. If it is, then there's no easy way out.
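For 2A, the usual trick is to wrap the dump so the checks are off during the load (file and DB names are placeholders; assumes InnoDB tables):

Code:
# Turn off FK/unique checks and autocommit for the load, commit once at the end,
# then turn the checks back on.
{ echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"
  cat dump.sql
  echo "SET unique_checks=1; SET foreign_key_checks=1; COMMIT;"
} | mysql -u root -p mydb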
 

ForumMaster

Diamond Member
Feb 24, 2005
7,797
1
0
1) Er, yes, that's perfectly normal. There are more bad things about SQL than I can name, but they did get some things right, and this was one of them.
2) Doing anything with large blob tables is next to impossible.

1) A cron job doing a dump, maybe?
2A) Is the import row by row, or batches of rows? If row by row, change the output so that it does it a table at a time, or in chunks of X rows at a time. If already doing that, try dropping indices, importing data, then recreating them.
2B) Verify that the import isn't disk-limited or network-limited. If it is, then there's no easy way out.

1. Maybe in MySQL the backup is like that. Every other normal database has a perfectly functioning backup utility (Oracle, SQL Server, DB2) that can and does allow full as well as incremental backups.

2. In general, blobs aren't exactly the best fit for a database. The database doesn't understand them, so its hands are usually tied. In Oracle, for example, you couldn't use parallel operations with LOBs until recently (12c).

Usually, though, you can use file system pointers. For example, Oracle has a type called a BFILE which can be used to store blobs externally and yet still access them.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
1. Maybe in MySQL the backup is like that. Every other normal database has a perfectly functioning backup utility (Oracle, SQL Server, DB2) that can and does allow full as well as incremental backups.
They all support outputting to an SQL script, don't they? I would hate to have to change DBMSes, or try to recover data that wouldn't re-import, with a weird binary format. Just compress it once it gets spat out. MySQL doesn't support incremental backups, or didn't last I knew (P.S. it does, but the DB has to be set up for it, now that I've Googled the issue).

The OP's problem was with using phpMyAdmin without tweaking the PHP settings, I'd bet. It's not a management suite of the same caliber as the commercial DB vendors' tools, but just a web-based view to manipulate the basics and check up on server and DB health. It's great to have, don't get me wrong, and can be a big time-saver, but it's also pretty limited in some ways. It can probably be fixed by increasing the connection timeout, max file size, max script size, and maybe a few other settings, but the proper solution is to accept it and run backups in a terminal session.
 

Ka0t1x

Golden Member
Jan 23, 2004
1,724
0
71
As you should be running your MySQL database on Linux... there's really not much better for backups than Holland.

http://hollandbackup.org/

Glorified cronjob with lots of config options. It'll run the mysqldump command for you, keep X number of dumps in a folder, etc.
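If you'd rather roll it yourself, the manual equivalent is basically one crontab line (schedule, paths, and retention below are just examples):

Code:
# Nightly dump at 02:30, gzipped, keeping 14 days of files.
# (% has to be escaped as \% inside crontab entries.)
30 2 * * * mysqldump -u backupuser -pSECRET --single-transaction --all-databases | gzip > /backups/all-$(date +\%F).sql.gz && find /backups -name '*.sql.gz' -mtime +14 -delete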
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
As you should be running your MySQL database on Linux... there's really not much better for backups than Holland.

http://hollandbackup.org/

Glorified cronjob with lots of config options. It'll run the mysqldump command for you, keep X number of dumps in a folder, etc.
Looks good. I'm going to have to try it for a project I have with Postgres.
 

Train

Lifer
Jun 22, 2000
13,861
68
91
www.bing.com
So finally circling back to this, ugh.

I can't touch the prod DB, and I can only migrate from backups, so any sort of replication is out of the question.

Current problem is that the DB is in one big dump file. MySQL Workbench can't open the script because it's too big. I run it from the command line, and it craps out as soon as it hits one of the blobs.

I am considering downloading a large-file viewer and possibly chopping up the file to run the pieces separately. But when I tried that before, the viewer froze as soon as it hit a line with a blob. Those viewers typically swap their cache in and out on a line-by-line basis, and the problem is that a blob record is a single line that can be 30 million characters long.

I really feel like I am stuck here. The error I am getting is:
ERROR 2006 (HY000) at line 755: MySQL Server has gone away
 

Train

Lifer
Jun 22, 2000
13,861
68
91
www.bing.com
small update:

"010 Editor" for windows is BY FAR the best app to handle super large text files.

It opens them via a binary stream, so doesn't care if a single line is millions of bytes long.

I was able to strip out giant chunks of blob data (will handle those later) and run the remaining script.

Not out of the woods yet though.
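For anyone else stuck on Linux: stream tools don't care about line length either, so something like this also works (the blob table name here is just an example):

Code:
# Drop every INSERT for the big blob table so the rest of the dump loads,
# then deal with the blobs separately.
grep -v '^INSERT INTO `attachments`' dump.sql > no_blobs.sql
mysql -u root -p mydb < no_blobs.sql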
 

uclabachelor

Senior member
Nov 9, 2009
448
0
71
So I have a legacy app running on MySQL.

I have been using PhpMyAdmin to pull down backups of the database.

The problems are:
1) A MySQL backup is just an SQL script? dafuq?
2) Dumping large blob tables is next to impossible.

I could theoretically just pull the blobs off the production DB with an external app and dump them right into a file folder, since we're going to keep them separate from the DB going forward anyway. The only problem with that is I'd like to avoid running anything against the prod DB right now.

So questions:
1) Is there something more reliable than phpMyAdmin for pulling down DB backups? I have been using MySQL Workbench to run queries, but it freezes a lot and is an all-around clunker.
2) Restoring the script via the command line on a dev box (Linux) sometimes freezes up... again due to huge files... any better ideas? Should I split up the tables more?


1. Use the MySQL command-line tools if you have SSH access: dump out the database, gzip it, transfer it to the next server, unzip, and load it back in.

2. It's not freezing, it's just taking a long time. How huge is huge (DB size, not the actual SQL file)? For reference, on a really, really fast server with SSDs, a 4GB import takes about 10 minutes.
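For #1, the whole round trip is basically this (host and DB names are placeholders):

Code:
# On the source box:
mysqldump -u backupuser -p --single-transaction --routines mydb | gzip > mydb.sql.gz
scp mydb.sql.gz devbox:/tmp/

# On the dev box (target DB must already exist):
gunzip -c /tmp/mydb.sql.gz | mysql -u root -p mydb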
 

Train

Lifer
Jun 22, 2000
13,861
68
91
www.bing.com
1. Use the MySQL command-line tools if you have SSH access: dump out the database, gzip it, transfer it to the next server, unzip, and load it back in.

2. It's not freezing, it's just taking a long time. How huge is huge (DB size, not the actual SQL file)? For reference, on a really, really fast server with SSDs, a 4GB import takes about 10 minutes.

Read the rest of the thread; it's actually crapping out (hence the error message).

Upon closer inspection with 010 Editor, it wasn't failing on the first blob. Out of the several thousand in there, it actually made it to #68 before throwing an error. Maybe that one is bigger than the previous ones, but it predictably fails on the same line every time.

When I could still dump from the prod DB, I could dump tables separately, which made it a lot easier. That's no longer an option; I only have access to the daily dumps, which are one gzip of the whole shebang.
 

Train

Lifer
Jun 22, 2000
13,861
68
91
www.bing.com
How are the production guys verifying the backups?

lol, they aren't. At least they haven't done so in a while (probably years).

Inherited this from another company. The dev team was in India, and only one guy remains who understands the system. The IT team who hosts it here in the US has also had a ton of turnover; several people have come and gone since the last time someone there understood it.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Since it fails on the same one every time, can you find it in the SQL file, comment it out, and see if it handles the rest?

Also, the error looks to be an ODBC error, but I don't see why ODBC should be involved from the command line of a console.
 

Train

Lifer
Jun 22, 2000
13,861
68
91
www.bing.com
Since it fails on the same one every time, can you find it in the SQL file, comment it out, and see if it handles the rest?

Also, the error looks to be an ODBC error, but I don't see why ODBC should be involved from the command line of a console.

from above:
I was able to strip out giant chunks of blob data (will handle those later) and run the remaining script.
 

uclabachelor

Senior member
Nov 9, 2009
448
0
71
I really feel like I am stuck here. The error I am getting is:
ERROR 2006 (HY000) at line 755: MySQL Server has gone away

It usually means the server connection timed out for whatever reason... my bet is that it's running out of memory due to the amount of data being transferred.

Try setting max_allowed_packet to a higher value and rerunning your scripts.
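Something like this; 1 GB here is just an example value, it needs to be at least as large as the biggest blob row:

Code:
# Raise the server-side limit (takes effect for new connections):
mysql -u root -p -e "SET GLOBAL max_allowed_packet = 1073741824;"
# To make it permanent, set  max_allowed_packet = 1G  under [mysqld] in my.cnf.
# The client has its own copy of the setting for the big statements it sends:
mysql --max_allowed_packet=1073741824 -u root -p mydb < dump.sql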
 

Red Squirrel

No Lifer
May 24, 2003
67,390
12,132
126
www.anyf.ca
I recently wrote this script, which dumps each database into its own tar.gz file. Feel free to use it:

Code:
#!/bin/bash

user=$1;
pass=$2;
bakdest=$3;

#work from /tmp so the tar files don't have all sorts of folders based on the destination specified; we just want a file.
cd /tmp

#formulate dest path based on day of week and month

tmpmon=`date +%b`
tmpdow=`date +%a`

bakdest="${bakdest}/${tmpmon}/${tmpdow}/"

#put db list in temp file:
mysql -u ${user} -p${pass} -B -N -e "show databases;" | grep -iv "information_schema" | grep -iv "mysql" > dblist.tmp


#ensure dest folder exists or whole backup will fail
mkdir -p ${bakdest} >/dev/null


#iterate through file to backup those dbs:

while read p; do

date
echo "...dumping ${p}..."  
mysqldump -u ${user} -p${pass} -c --dump-date ${p} > ${p}.sql
echo "...taring..."
tar -czf ${bakdest}${p}.tar.gz ${p}.sql
rm ${p}.sql


done <dblist.tmp


#delete temp file
rm dblist.tmp

date
echo Done.

Usage: mysqlfullbackup.sh [user] [pass] [dest folder]

To restore you untar the database you want and do something like this:

mysql -u root -ppassword [dbname] < dbfile.sql

My script does NOT include the DROP/CREATE DATABASE lines, because that way it allows me to restore to any database name I want. You need to create that DB first, or truncate (empty) it if it already exists.
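For example, restoring one database from a Monday-in-June backup into a fresh dev DB looks something like this (paths and names are examples):

Code:
tar -xzf /backups/Jun/Mon/mydb.tar.gz      # yields mydb.sql
mysqladmin -u root -p create mydb_dev      # create the target DB first
mysql -u root -p mydb_dev < mydb.sql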

My old backup method involved just dumping everything to one file, but when I wanted to restore to a new dev server it occurred to me how much of a pain that is. You also don't want to overwrite the system databases, as that could potentially cause issues. This new script lets me pick and choose the DB to restore.