It’s always good policy to back up all your files and databases, whether you are running updates, making system changes, or simply want a copy for safekeeping.

mysqldump -v -u [username] -p [database_name] > [backup_filename.sql]

mysql -v -u [username] -p [database_name] < [restore_filename.sql]

The -p switch will prompt you for your MySQL password. In case you have forgotten your MySQL password like I did, you can see my previous post for instructions on how to reset your MySQL password.

2 comments so far

  1. What do you do if your backups are too big to complete in a reasonable amount of time? Or if you can’t afford to have your tables locked while you’re dumping them?

  2. Unfortunately I only have limited experience with MySQL, as my db is only 1.2MB, but after a quick look through the mysqldump manual I came across the following switch options:

    --lock-tables, -l

    Lock all tables before starting the dump. The tables are locked with
    READ LOCAL to allow concurrent inserts in the case of MyISAM tables.
    For transactional tables such as InnoDB and BDB,
    --single-transaction is a much better option, because it does not
    need to lock the tables at all.


    --quick, -q

    This option is useful for dumping large tables. It forces mysqldump
    to retrieve rows for a table from the server a row at a time rather
    than retrieving the entire row set and buffering it in memory before
    writing it out.


    --opt

    This option is shorthand; it is the same as specifying
    --add-drop-table --add-locks --create-options --disable-keys
    --extended-insert --lock-tables --quick --set-charset. It should
    give you a fast dump operation and produce a dump file that can be
    reloaded into a MySQL server quickly. As of MySQL 4.1, --opt is on
    by default, but can be disabled with --skip-opt. To disable only
    certain of the options enabled by --opt, use their --skip forms; for
    example, --skip-add-drop-table or --skip-quick.

    So if you are using a version of MySQL older than 4.1, the --opt switch is something worth playing around with there.
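    Putting those switches together, a sketch of a low-impact dump of an InnoDB database might look like the following ([username], [database_name], and [backup_filename.sql] are placeholders, as in the post above):

```shell
# Dump an InnoDB database inside a single transaction, so no table
# locks are held; --quick streams rows one at a time instead of
# buffering whole tables in memory.
mysqldump --single-transaction --quick -u [username] -p [database_name] > [backup_filename.sql]
```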

    To understand why large MySQL db backups can take a long time to run: a quick Google search reveals that the main bottleneck during a mysqldump is disk I/O, so piping the mysqldump output into a compression tool (gzip) could help reduce the I/O required to write out the dump, leaving more I/O bandwidth for the reads the dump itself needs.

    mysqldump -v -u [username] -p [database_name] | gzip > [backup_filename.sql.gz]
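    Restoring from a compressed dump works the same way with the pipe reversed (same placeholder names as above):

```shell
# Decompress the dump and stream it straight into mysql,
# so the full .sql file never has to be written to disk.
gunzip < [backup_filename.sql.gz] | mysql -u [username] -p [database_name]
```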

    The only other suggestions I have involve cost :( But if you have a large budget, which hopefully is in line with such a large/critical db, you could investigate setting up replication of the MySQL db to a MySQL slave node. You would then be able to take the slave offline to run your backups whilst maintaining your SLAs.
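    As a rough sketch of that idea (replica.example.com is a hypothetical hostname; STOP SLAVE and START SLAVE are the standard replication-control statements of that era), you would pause replication on the slave while the dump runs:

```shell
# Pause replication so the slave holds a consistent snapshot,
# dump from the slave, then let it catch back up. The primary
# keeps serving traffic the whole time.
mysql -h replica.example.com -u [username] -p -e 'STOP SLAVE;'
mysqldump -h replica.example.com -u [username] -p [database_name] > [backup_filename.sql]
mysql -h replica.example.com -u [username] -p -e 'START SLAVE;'
```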

    I’m actually quite interested in trying this myself now… I will post about it here when I do :)