With the mysql program you can get the same information with the SHOW commands.
mysqlshow is invoked like this:
shell> mysqlshow [OPTIONS] [database [table [column]]]
• If no database is given, all matching databases are shown.
• If no table is given, all matching tables in the database are shown.
• If no column is given, all matching columns and column types in the table are shown.
Note that in newer MySQL versions, you only see those databases/tables/columns for which
you have some privileges.
If the last argument contains a shell or SQL wildcard (*, ?, % or _), only what is
matched by the wildcard is shown. This may cause some confusion when you try to display
the columns for a table whose name contains a _, because in this case mysqlshow shows you
only the table names that match the pattern. This is easily fixed by adding an extra %
last on the command line (as a separate argument).
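The wildcard matching described above can be sketched in Python. This is an illustrative model of how a last argument containing SQL wildcards behaves, not mysqlshow's actual implementation:

```python
import re

def sql_wildcard_match(pattern, name):
    # model of SQL LIKE wildcards: % matches any run of characters,
    # _ matches any single character. Both survive re.escape()
    # unescaped (Python 3.7+), so a plain replace builds the regex.
    regex = re.escape(pattern).replace("%", ".*").replace("_", ".")
    return re.fullmatch(regex, name) is not None

# a table name containing _ is itself a pattern, so it can also
# match other table names - the source of the confusion noted above
print(sql_wildcard_match("my_table", "my_table"))
print(sql_wildcard_match("my_table", "myxtable"))  # _ matched 'x'
```

This is why a name like my_table lists matching table names rather than the table's columns; appending a separate % argument forces the column listing.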
Explaining Error Codes
For most system errors MySQL will, in addition to an internal text message, also print the
system error code in one of the following styles: message ... (errno: #) or message ...
(Errcode: #). You can find out what the error code means either by examining the
documentation for your system or by using the perror utility.
perror prints a description for a system error code, or a MyISAM/ISAM table handler error code.
perror is invoked like this:
shell> perror [OPTIONS] [ERRORCODE [ERRORCODE...]]
shell> perror 13 64
Error code 13: Permission denied
Error code 64: Machine is not on the network
Note that the error messages are mostly system dependent!
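A rough stand-in for perror's system-error lookup, using the C library's own error strings via Python's os.strerror. This is a hypothetical sketch: the real perror also understands MyISAM/ISAM table handler codes, and, as noted above, the messages are system dependent:

```python
import os

# print the system's description for a few errno values,
# in the same "Error code N: text" shape perror uses
for code in (13, 64):
    print(f"Error code {code:3d}: {os.strerror(code)}")
```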
4.8.10 How to Run SQL Commands from a Text File
The mysql client typically is used interactively, like this:
shell> mysql database
However, it’s also possible to put your SQL commands in a file and tell mysql to read its
input from that file. To do so, create a text file ‘text_file’ that contains the commands
you wish to execute. Then invoke mysql as shown below:
shell> mysql database < text_file

If you are using the update log, you can re-execute the logged updates by feeding the log
files to mysql in the order they were written:

shell> ls -1 -t -r file_name.[0-9]* | xargs cat | mysql
ls is used to get all the log files in the right order.
This can be useful if you have to revert to backup files after a crash and you want to redo
the updates that occurred between the time of the backup and the crash.
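The intent of the ls pipeline, replaying the numbered update logs in the order they were written, can be sketched like this (illustrative Python, assuming log names of the form hostname.001, hostname.002, ...):

```python
def replay_order(log_names):
    # sort update logs by their numeric extension; `ls -1 -t -r`
    # achieves the same ordering via file modification times
    return sorted(log_names, key=lambda name: int(name.rsplit(".", 1)[1]))

print(replay_order(["myhost.003", "myhost.001", "myhost.002"]))
```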
4.9.4 The Binary Update Log
The intention is that the binary log should replace the update log, so we recommend that
you switch to this log format as soon as possible!
The binary log contains all information that is available in the update log in a more efficient
format. It also contains information about how long every query that updated the database
took to run. The binary log is also used when you are replicating a slave from a master.
When started with the --log-bin[=file_name] option, mysqld writes a log file containing
all SQL commands that update data. If no file name is given, it defaults to the name of
the host machine followed by -bin. If file name is given, but it doesn’t contain a path, the
file is written in the data directory.
If you supply an extension to --log-bin=filename.extension, the extension will be silently removed.
To the binary log filename mysqld will append an extension that is a number that is
incremented each time you execute mysqladmin refresh, execute mysqladmin flush-logs,
execute the FLUSH LOGS statement, or restart the server.
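The rotation naming rule just described can be sketched as follows (a model of the behaviour in the text, not mysqld's own code; the host name is a made-up example):

```python
def next_binlog_name(current):
    # keep the base name, increment the numeric extension,
    # zero-padded to the same width (e.g. .001 -> .002)
    base, ext = current.rsplit(".", 1)
    return f"{base}.{int(ext) + 1:0{len(ext)}d}"

print(next_binlog_name("myhost-bin.001"))
```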
You can use the following options to mysqld to affect what is logged to the binary log:
binlog-do-db=database_name     Tells the master it should log updates for the
                               specified database, and exclude all others not
                               explicitly mentioned. (Example: binlog-do-db=some_
                               database)

binlog-ignore-db=database_name Tells the master that updates to the given database
                               should not be logged to the binary log. (Example:
                               binlog-ignore-db=some_database)
To be able to know which different binary log files have been used, mysqld will also create
a binary log index file that contains the name of all used binary log files. By default this
has the same name as the binary log file, with the extension ’.index’. You can change the
name of the binary log index file with the --log-bin-index=[filename] option.
If you are using replication, you should not delete old binary log files until you are sure that
no slave will ever need to use them. One way to do this is to do mysqladmin flush-logs
once a day and then remove any logs that are more than 3 days old.
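The "remove logs more than 3 days old" policy might be scripted along these lines (a sketch; in real use the log ages would come from the files' modification times):

```python
SECONDS_PER_DAY = 86400

def logs_to_remove(logs, now, max_age_days=3):
    # logs: list of (name, mtime) pairs; anything older than the
    # cutoff is a candidate for removal
    cutoff = now - max_age_days * SECONDS_PER_DAY
    return [name for name, mtime in logs if mtime < cutoff]
```

Remember the caveat above: with replication, a log is only safe to remove once no slave can still need it.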
You can examine the binary log file with the mysqlbinlog command. For example, you
can update a MySQL server from the binary log as follows:
mysqlbinlog log-file | mysql -h server_name
You can also use the mysqlbinlog program to read the binary log directly from a remote
MySQL server. mysqlbinlog --help will give you more information about how to use this program!
If you are using BEGIN [WORK] or SET AUTOCOMMIT=0, you must use the MySQL binary log for backups instead of the old update log.
The binary logging is done immediately after a query completes but before any locks are
released or any commit is done. This ensures that the log will be logged in the execution order.
All updates (UPDATE, DELETE or INSERT) that change a transactional table (like BDB tables)
are cached until a COMMIT. Any updates to a non-transactional table are stored in the binary
log at once. Every thread will, on start, allocate a buffer of binlog_cache_size to buffer
queries. If a query is bigger than this, the thread will open a temporary file to handle the
bigger cache. The temporary file will be deleted when the thread ends.
The max_binlog_cache_size variable can be used to restrict the total size used to cache a
multi-transaction query.
If you are using the update or binary log, concurrent inserts will not work together with
CREATE ... INSERT and INSERT ... SELECT. This is to ensure that you can recreate an
exact copy of your tables by applying the log on a backup.
4.9.5 The Slow Query Log
When started with the --log-slow-queries[=file_name] option, mysqld writes a log file
containing all SQL commands that took more than long_query_time to execute. The time
to get the initial table locks is not counted as execution time.
The slow query log is logged after the query is executed and after all locks have been released.
This may be different from the order in which the statements are executed.
If no file name is given, it defaults to the name of the host machine suffixed with -slow.log.
If a filename is given, but doesn’t contain a path, the file is written in the data directory.
The slow query log can be used to find queries that take a long time to execute and are
thus candidates for optimisation. With a large log, that can become a difficult task. You
can pipe the slow query log through the mysqldumpslow command to get a summary of the
queries which appear in the log.
If you are using --log-long-format, queries that are not using indexes are also logged.
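The kind of grouping mysqldumpslow performs can be approximated by normalising literal values so that similar queries collapse into one entry. This is a simplified sketch of the idea, not the real tool:

```python
import re
from collections import Counter

def normalize(query):
    # collapse quoted strings and literal numbers so queries that
    # differ only in their constants group together
    query = re.sub(r"'[^']*'", "'S'", query)
    return re.sub(r"\b\d+\b", "N", query)

log = [
    "SELECT * FROM t WHERE id = 1",
    "SELECT * FROM t WHERE id = 42",
    "SELECT * FROM u WHERE name = 'bob'",
]
print(Counter(normalize(q) for q in log).most_common())
```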
4.9.6 Log File Maintenance
MySQL has a lot of log files which make it easy to see what is going on. See Section 4.9 [Log
Files], page 281. One must however from time to time clean up after MySQL to ensure that
the logs don’t take up too much disk space.
When using MySQL with log files, you will, from time to time, want to remove/backup old
log files and tell MySQL to start logging on new files. See Section 4.4.1 [Backup], page 206.
On a Linux (Redhat) installation, you can use the mysql-log-rotate script for this. If you
installed MySQL from an RPM distribution, the script should have been installed automat-
ically. Note that you should be careful with this if you are using the log for replication!
On other systems you must install a short script yourself, started from cron, to handle the log files.
You can force MySQL to start using new log files by using mysqladmin flush-logs or by
using the SQL command FLUSH LOGS. If you are using MySQL Version 3.21 you must use
mysqladmin refresh.
The above command does the following:
• If standard logging (--log) or slow query logging (--log-slow-queries) is used, closes
and reopens the log file (‘mysql.log’ and ‘hostname‘-slow.log as default).
• If update logging (--log-update) is used, closes the update log and opens a new log
file with a higher sequence number.
If you are using only an update log, you only have to flush the logs and then move away the
old update log files to a backup. If you are using the normal logging, you can do something
like this:
shell> cd mysql-data-directory
shell> mv mysql.log mysql.old
shell> mysqladmin flush-logs
and then take a backup and remove ‘mysql.old’.
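The same rotation steps, modeled with plain file operations in a throwaway directory (in real use it is mysqladmin flush-logs that makes mysqld reopen ‘mysql.log’; this sketch only mimics the file renames):

```python
import os
import tempfile

datadir = tempfile.mkdtemp()  # stand-in for the mysql data directory
with open(os.path.join(datadir, "mysql.log"), "w") as f:
    f.write("logged queries ...\n")

# shell: mv mysql.log mysql.old
os.rename(os.path.join(datadir, "mysql.log"),
          os.path.join(datadir, "mysql.old"))

# shell: mysqladmin flush-logs  (server reopens a fresh mysql.log)
open(os.path.join(datadir, "mysql.log"), "w").close()

print(sorted(os.listdir(datadir)))
```

After this, ‘mysql.old’ can be backed up and removed, exactly as the text describes.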
4.10 Replication in MySQL
This chapter describes the various replication features in MySQL. It serves as a reference
to the options available with replication. You will be introduced to replication and learn
how to implement it. Towards the end, there are some frequently asked questions and
descriptions of problems and how to solve them.
One way replication can be used is to increase both robustness and speed. For robustness
you can have two systems and can switch to the backup if you have problems with the
master. The extra speed is achieved by sending a part of the non-updating queries to the
replica server. Of course this only works if non-updating queries dominate, but that is the
normal case.
Starting in Version 3.23.15, MySQL supports one-way replication internally. One server
acts as the master, while the other acts as the slave. Note that one server could play the
roles of master in one pair and slave in the other. The master server keeps a binary log of
updates (see Section 4.9.4 [Binary log], page 282) and an index file to binary logs to keep
track of log rotation. The slave, upon connecting, informs the master where it left off since
the last successfully propagated update, catches up on the updates, and then blocks and
waits for the master to notify it of the new updates.
Note that if you are replicating a database, all updates to this database should be done
through the master!
Another benefit of using replication is that one can get live backups of the system by doing
a backup on a slave instead of doing it on the master. See Section 4.4.1 [Backup], page 206.
4.10.2 Replication Implementation Overview
MySQL replication is based on the server keeping track of all changes to your database
(updates, deletes, etc) in the binary log (see Section 4.9.4 [Binary log], page 282) and the
slave server(s) reading the saved queries from the master server’s binary log so that the
slave can execute the same queries on its copy of the data.
It is important to realize that the binary log is simply a record of changes from a given
point in time (the moment you enable binary logging). Any slaves which you set up will
need copies of all the data from your master as it existed the moment that you enabled
binary logging on the master. If you start your slaves with data that doesn’t agree with
what was on the master when the binary log was started, your slaves may fail.
A future version (4.0) of MySQL will remove the need to keep a (possibly large) snapshot
of data for new slaves that you might wish to set up through the live backup functionality
with no locking required. However, at this time, it is necessary to block all writes either
with a global read lock or by shutting down the master while taking a snapshot.
Once a slave is properly configured and running, it will simply connect to the master and
wait for updates to process. If the master goes away or the slave loses connectivity with
your master, it will keep trying to connect every master-connect-retry seconds until it is
able to reconnect and resume listening for updates.
Each slave keeps track of where it left off. The master server has no knowledge of how many
slaves there are or which ones are up-to-date at any given time.
The next section explains the master/slave setup process in more detail.
4.10.3 How To Set Up Replication
Below is a quick description of how to set up complete replication on your current MySQL
server. It assumes you want to replicate all your databases and have not configured
replication before. You will need to shut down your master server briefly to complete the
steps below.
1. Make sure you have a recent version of MySQL installed on the master and slave(s).
Use Version 3.23.29 or higher. Previous releases used a different binary log format and
had bugs which have been fixed in newer releases. Please, do not report bugs until you
have verified that the problem is present in the latest release.
2. Set up a special replication user on the master with the FILE privilege and permission to
connect from all the slaves. If the user is only doing replication (which is recommended),
you don’t need to grant any additional privileges.
For example, to create a user named repl which can access your master from any host,
you might use this command:
GRANT FILE ON *.* TO repl@"%" IDENTIFIED BY ’<password>’;
3. Shut down MySQL on the master.
mysqladmin -u root -p shutdown
4. Snapshot all the data on your master server.
The easiest way to do this (on Unix) is to simply use tar to produce an archive of your
entire data directory. The exact data directory location depends on your installation.
tar -cvf /tmp/mysql-snapshot.tar /path/to/data-dir
Windows users can use WinZip or similar software to create an archive of the data directory.
5. In my.cnf on the master add log-bin and server-id=unique number to the [mysqld]
section and restart it. It is very important that the id of the slave is different from
the id of the master. Think of server-id as something similar to the IP address - it
uniquely identifies the server instance in the community of replication partners.
6. Restart MySQL on the master.
7. Add the following to my.cnf on the slave(s):

       master-host=<hostname of the master>
       master-user=<replication user name>
       master-password=<replication user password>
       master-port=<TCP/IP port for master>
       server-id=<some unique number between 2 and 2^32-1>

   replacing the values in <> with what is relevant to your system.
server-id must be different for each server participating in replication. If you don’t
specify a server-id, it will be set to 1 if you have not defined master-host, else it
will be set to 2. Note that in the case of server-id omission the master will refuse
connections from all slaves, and the slave will refuse to connect to a master. Thus,
omitting server-id is only good for backup with a binary log.
8. Copy the snapshot data into your data directory on your slave(s). Make sure that the
privileges on the files and directories are correct. The user which MySQL runs as needs
to be able to read and write to them, just as on the master.
9. Restart the slave(s).
After you have done the above, the slave(s) should connect to the master and catch up on
any updates which happened since the snapshot was taken.
If you have forgotten to set server-id for the slave you will get the following error in the
error log file:
Warning: one should set server_id to a non-0 value if master_host is set.
The server will not act as a slave.
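The server-id defaulting rule described above can be expressed compactly (a sketch of the documented behaviour, not server code):

```python
def effective_server_id(server_id=None, master_host=None):
    # explicit server-id wins; otherwise the default is 2 if
    # master-host is configured (i.e. a slave), else 1
    if server_id is not None:
        return server_id
    return 2 if master_host is not None else 1

print(effective_server_id())
print(effective_server_id(master_host="master.example.com"))
```

As the text notes, relying on these defaults disables replication, so they are only useful for binary-log-based backup.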
If you have forgotten to do this for the master, the slaves will not be able to connect to the master.
If a slave is not able to replicate for any reason, you will find error messages in the error
log on the slave.
Once a slave is replicating, you will find a file called master.info in the same directory
as your error log. The master.info file is used by the slave to keep track of how much of
the master’s binary log it has processed. Do not remove or edit the file, unless you really
know what you are doing. Even in that case, it is preferred that you use the CHANGE MASTER
TO command.
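A simplified model of the position tracking master.info provides (the real file holds more fields - user, password, port, and so on - so treat this purely as a sketch of the idea):

```python
import os
import tempfile

def save_position(path, log_file, log_pos):
    # persist which master binlog, and how far into it, we have processed
    with open(path, "w") as f:
        f.write(f"{log_file}\n{log_pos}\n")

def load_position(path):
    with open(path) as f:
        log_file, log_pos = f.read().splitlines()
    return log_file, int(log_pos)

path = os.path.join(tempfile.mkdtemp(), "master.info")
save_position(path, "myhost-bin.003", 73)
print(load_position(path))
```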
4.10.4 Replication Features and Known Problems
Below is an explanation of what is supported and what is not:
• Replication will be done correctly with AUTO_INCREMENT, LAST_INSERT_ID, and
TIMESTAMP values.
• RAND() in updates does not replicate properly. Use RAND(some_non_rand_expr) if you
are replicating updates with RAND(). You can, for example, use UNIX_TIMESTAMP() for
the argument to RAND().
• You have to use the same character set (--default-character-set) on the master
and the slave. If not, you may get duplicate key errors on the slave, because a key that
is regarded as unique on the master may not be unique in the other character set.
• LOAD DATA INFILE will be handled properly as long as the file still resides on the master
server at the time of update propagation. LOAD LOCAL DATA INFILE will be skipped.
• Update queries that use user variables are not replication-safe (yet).
• FLUSH commands are not stored in the binary log and are therefore not replicated
to the slaves. This is not normally a problem, as FLUSH doesn’t change anything. It
does mean, however, that if you update the MySQL privilege tables directly without using
the GRANT statement and you replicate the MySQL privilege database, you must do a FLUSH
PRIVILEGES on your slaves to put the new privileges into effect.
• Starting in 3.23.29, temporary tables are replicated properly, with the exception of the
case when you shut down the slave server (not just the slave thread) while you have some
temporary tables open and they are used in subsequent updates. To deal with this problem,
to shut down the slave, do SLAVE STOP, then check the Slave_open_temp_tables variable
to see if it is 0, then issue mysqladmin shutdown. If the number is not 0, restart the
slave thread with SLAVE START and see if you have better luck next time. There will
be a cleaner solution, but it has to wait until Version 4.0. In earlier versions temporary
tables are not replicated properly - we recommend that you either upgrade, or
execute SET SQL_LOG_BIN=0 on your clients before all queries with temporary tables.
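The shutdown procedure from this item can be sketched as a helper around hypothetical control callbacks standing in for SLAVE STOP, mysqladmin shutdown, and SLAVE START:

```python
def safe_slave_shutdown(get_status, stop_slave, shutdown, start_slave):
    # procedure from the text: stop the slave thread, check
    # Slave_open_temp_tables, and only shut the server down at 0;
    # otherwise restart the slave thread and try again later
    stop_slave()
    if get_status()["Slave_open_temp_tables"] == 0:
        shutdown()
        return True
    start_slave()
    return False
```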
• MySQL only supports one master and many slaves. In 4.x we will add a voting
algorithm to automatically change the master if something goes wrong with the current
master. We will also introduce ’agent’ processes to help do load balancing by sending
select queries to different slaves.
• Starting in Version 3.23.26, it is safe to connect servers in a circular master-slave
relationship with log-slave-updates enabled. Note, however, that many queries will
not work right in this kind of setup unless your client code is written to take care of
the potential problems that can happen from updates that occur in a different sequence
on different servers.
This means that you can do a setup like the following:
A -> B -> C -> A
This setup will only work if you only do non-conflicting updates between the tables.
In other words, if you insert data in A and C, you should never insert a row in A that
may have a conflicting key with a row inserted in C. You should also not update the same
rows on two servers if the order in which the updates are applied matters.
Note that the log format has changed in Version 3.23.26 so that pre-3.23.26 slaves will
not be able to read it.
• If the query on the slave gets an error, the slave thread will terminate, and a message
will appear in the .err file. You should then connect to the slave manually, fix the
cause of the error (for example, non-existent table), and then run the SLAVE START SQL
command (available starting in Version 3.23.16). In Version 3.23.15, you will have to
restart the server.
• If connection to the master is lost, the slave will retry immediately, and then in case
of failure every master-connect-retry (default 60) seconds. Because of this, it is safe
to shut down the master, and then restart it after a while. The slave will also be able
to deal with network connectivity outages.
• Shutting down the slave (cleanly) is also safe, as it keeps track of where it left off.
Unclean shutdowns might produce problems, especially if disk cache was not synced
before the system died. Your system fault tolerance will be greatly increased if you
have a good UPS.
• If the master is listening on a non-standard port, you will also need to specify this with
the master-port parameter in my.cnf.
• In Version 3.23.15, all of the tables and databases will be replicated. Starting in
Version 3.23.16, you can restrict replication to a set of databases with replicate-do-db
directives in my.cnf or just exclude a set of databases with replicate-ignore-db.
Note that up until Version 3.23.23, there was a bug that did not properly deal with
LOAD DATA INFILE if you did it in a database that was excluded from replication.
• Starting in Version 3.23.16, SET SQL_LOG_BIN = 0 will turn off replication (binary)
logging on the master, and SET SQL_LOG_BIN = 1 will turn it back on - you must have
the process privilege to do this.
• Starting in Version 3.23.19, you can clean up stale replication leftovers when some-
thing goes wrong and you want a clean start with FLUSH MASTER and FLUSH SLAVE
commands. In Version 3.23.26 we have renamed them to RESET MASTER and RESET
SLAVE respectively to clarify what they do. The old FLUSH variants still work, though.
• Starting in Version 3.23.21, you can use LOAD TABLE FROM MASTER for network backup
and to set up replication initially. We have recently received a number of bug reports
concerning it that we are investigating, so we recommend that you use it only in testing
until we make it more stable.
• Starting in Version 3.23.23, you can change masters and adjust log position with CHANGE
MASTER TO.
• Starting in Version 3.23.23, you tell the master that updates in certain databases should
not be logged to the binary log with binlog-ignore-db.
• Starting in Version 3.23.26, you can use replicate-rewrite-db to tell the slave to
apply updates from one database on the master to a database with a different name on
the slave.
• Starting in Version 3.23.28, you can use PURGE MASTER LOGS TO ’log-name’ to get rid
of old logs while the slave is running.
4.10.5 Replication Options in my.cnf
If you are using replication, we recommend that you use MySQL Version 3.23.30 or later.
Older versions work, but they do have some bugs and are missing some features.
On both master and slave you need to use the server-id option. This sets a unique
replication id. You should pick a unique value in the range between 1 and 2^32-1 for each
master and slave. Example: server-id=3
The following table has the options you can use for the MASTER:
log-bin=filename Write a binary update log to the specified location. Note
that if you give it a parameter with an extension (for exam-
ple, log-bin=/mysql/logs/replication.log ) versions up
to 3.23.24 will not work right during replication if you do
FLUSH LOGS . The problem is fixed in Version 3.23.25. If you
are using this kind of log name, FLUSH LOGS will be ignored
on binlog. To clear the log, run FLUSH MASTER, and do not
forget to run FLUSH SLAVE on all slaves. In Version 3.23.26
and in later versions you should use RESET MASTER and RESET SLAVE.
log-bin-index=filename Because the user could issue the FLUSH LOGS command, we
need to know which log is currently active and which ones
have been rotated out and in what sequence. This informa-
tion is stored in the binary log index file. The default is
‘hostname‘.index. You can use this option if you want to use another name.
sql-bin-update-same If set, setting SQL_LOG_BIN to a value will automatically set
SQL_LOG_UPDATE to the same value and vice versa.
binlog-do-db=database_name Tells the master that it should log updates to the binary log if
the current database is ’database_name’. All other databases
are ignored. Note that if you use this you should ensure that
you only do updates in the current database.
binlog-ignore-db=database_name Tells the master that updates where the current database is
’database_name’ should not be stored in the binary log. Note
that if you use this you should ensure that you only do updates
in the current database.
The following table has the options you can use for the SLAVE:
master-host=host Master hostname or IP address for replication. If not set, the
slave thread will not be started.
master-user=username The user the slave thread will use for authentication when
connecting to the master. The user must have the FILE privilege. If
the master user is not set, user test is assumed.
master-password=password The password the slave thread will authenticate with when
connecting to the master. If not set, an empty password is assumed.
master-port=portnumber The port the master is listening on. If not set, the compiled
setting of MYSQL_PORT is assumed. If you have not tinkered
with configure options, this should be 3306.
master-connect-retry=seconds The number of seconds the slave thread will sleep before
retrying to connect to the master in case the master goes down or
the connection is lost. Default is 60.
master-ssl Turn SSL on
master-ssl-key Master SSL keyfile name
master-ssl-cert Master SSL certificate file name
master-info-file=filename The location of the file that remembers where we left off on
the master during the replication process. The default is
master.info in the data directory. Sasha: The only reason I see
for ever changing the default is the desire to be rebellious.
replicate-do-table=db_name.table_name Tells the slave thread to restrict replication to the
specified table. To specify more than one table, use the directive multiple
times, once for each table. This will work for cross-database
updates, in contrast to replicate-do-db.
replicate-ignore-table=db_name.table_name Tells the slave thread not to replicate the specified
table. To specify more than one table to ignore, use the directive
multiple times, once for each table. This will work for cross-
database updates, in contrast to replicate-ignore-db.
replicate-wild-do-table=db_name.table_name Tells the slave thread to restrict replication to
the tables that match the specified wildcard pattern. To specify more than
one table, use the directive multiple times, once for each table.
This will work for cross-database updates.
Example: replicate-wild-do-table=foo%.bar% will
replicate only updates to tables in all databases that start
with foo and whose table names start with bar.
replicate-wild-ignore-table=db_name.table_name Tells the slave thread not to replicate the
tables that match the given wildcard pattern. To specify more than one table
to ignore, use the directive multiple times, once for each table.
This will work for cross-database updates.
Example: replicate-wild-ignore-table=foo%.bar% will
not do updates to tables in databases that start with foo and
whose table names start with bar.
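How these replicate-wild-*-table patterns match a db.table pair can be modeled like this (an illustrative re-implementation of the matching rule, not the server's code):

```python
import re

def wild_table_match(pattern, db, table):
    # pattern is db_pattern.table_pattern with SQL wildcards,
    # where % matches any run of characters and _ any single one
    db_pat, table_pat = pattern.split(".", 1)
    to_re = lambda p: re.escape(p).replace("%", ".*").replace("_", ".")
    return (re.fullmatch(to_re(db_pat), db) is not None
            and re.fullmatch(to_re(table_pat), table) is not None)

print(wild_table_match("foo%.bar%", "foodb", "barracks"))
print(wild_table_match("foo%.bar%", "mydb", "barracks"))
```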
replicate-ignore-db=database_name Tells the slave thread not to replicate the specified
database. To specify more than one database to ignore,
use the directive multiple times, once for each database.
This option will not work if you use cross-database
updates. If you need cross-database updates to work, make sure
you have 3.23.28 or later, and use replicate-wild-ignore-table=db_name.%
replicate-do-db=database_name Tells the slave thread to restrict replication to the
specified database. To specify more than one database, use the
directive multiple times, once for each database. Note that
this will only work if you do not use cross-database queries
such as UPDATE some_db.some_table SET foo=’bar’ while
having selected a different or no database. If you need cross-
database updates to work, make sure you have 3.23.28 or later,
and use replicate-wild-do-table=db_name.%
log-slave-updates Tells the slave to log the updates from the slave thread to the
binary log. Off by default. You will need to turn it on if you
plan to daisy-chain the slaves.
replicate-rewrite-db=from_name->to_name Updates to a database with a different name than
the original. Example: replicate-rewrite-db=master_db_name->slave_db_name
skip-slave-start Tells the slave server not to start the slave thread at startup.
The user can start it later with SLAVE START.
slave_read_timeout=# Number of seconds to wait for more data from the master
before aborting the read.
4.10.6 SQL Commands Related to Replication
Replication can be controlled through the SQL interface. Below is a summary of the commands:
SLAVE START Starts the slave thread. (Slave)
SLAVE STOP Stops the slave thread. (Slave)
SET SQL_LOG_BIN=0 Disables update logging if the user has process privilege. Ig-
nored otherwise. (Master)
SET SQL_LOG_BIN=1 Re-enables update logging if the user has process privilege.
Ignored otherwise. (Master)
SET SQL_SLAVE_SKIP_COUNTER=n Skip the next n events from the master. Only valid when the
slave thread is not running; otherwise, gives an error. Useful
for recovering from replication glitches. (Slave)
RESET MASTER Deletes all binary logs listed in the index file, resetting the
binlog index file to be empty. In pre-3.23.26 versions, FLUSH
MASTER. (Master)
RESET SLAVE Makes the slave forget its replication position in the master
logs. In pre-3.23.26 versions the command was called FLUSH
SLAVE. (Slave)
LOAD TABLE tblname FROM MASTER
Downloads a copy of the table from the master to the slave. (Slave)
CHANGE MASTER TO master_def_list
Changes the master parameters to the values specified in
master_def_list and restarts the slave thread. master_
def_list is a comma-separated list of master_def where
master_def is one of the following: MASTER_HOST, MASTER_
USER, MASTER_PASSWORD, MASTER_PORT, MASTER_CONNECT_
RETRY, MASTER_LOG_FILE, MASTER_LOG_POS. For example:
CHANGE MASTER TO
  MASTER_HOST=’master2.mycompany.com’,
  MASTER_USER=’replication’,
  MASTER_PASSWORD=’bigs3cret’,
  MASTER_PORT=3306,
  MASTER_LOG_FILE=’master2-bin.001’,
  MASTER_LOG_POS=4;
You only need to specify the values that need to be changed.
The values that you omit will stay the same with the exception
of when you change the host or the port. In that case, the
slave will assume that since you are connecting to a di erent
host or a di erent port, the master is di erent. Therefore,
the old values of log and position are not applicable anymore,
and will automatically be reset to an empty string and 0,
respectively (the start values). Note that if you restart the
slave, it will remember its last master. If this is not desirable,
you should delete the ‘master.info’ file before restarting, and
the slave will read its master from my.cnf or the command line.
SHOW MASTER STATUS Provides status information on the binlog of the master.
SHOW SLAVE STATUS Provides status information on essential parameters of the
slave thread. (Slave)
SHOW MASTER LOGS Only available starting in Version 3.23.28. Lists the binary
logs on the master. You should use this command prior to
PURGE MASTER LOGS TO to find out how far you should go.
PURGE MASTER LOGS TO
Available starting in Version 3.23.28. Deletes all the replica-
tion logs that are listed in the log index as being prior to the
specified log, and removes them from the log index, so that
the given log now becomes first. Example:
PURGE MASTER LOGS TO ’mysql-bin.010’
This command will do nothing and fail with an error if you
have an active slave that is currently reading one of the logs
you are trying to delete. However, if you have a dormant
slave, and happen to purge one of the logs it wants to read,
the slave will be unable to replicate once it comes up. The
command is safe to run while slaves are replicating - you do
not need to stop them.
You must first check all the slaves with SHOW SLAVE STATUS
to see which log they are on, then do a listing of the logs
on the master with SHOW MASTER LOGS, find the earliest log
among all the slaves (if all the slaves are up to date, this will
be the last log on the list), backup all the logs you are about
to delete (optional) and purge up to the target log.
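The "find the earliest log any slave still needs" step of that checklist can be sketched as follows (illustrative; the two lists would really come from SHOW SLAVE STATUS on each slave and SHOW MASTER LOGS on the master):

```python
def safe_purge_target(master_logs, slave_positions):
    # master_logs: binlogs in index order; slave_positions: the log
    # each slave is currently reading. PURGE MASTER LOGS TO this log
    # deletes only logs no slave still needs.
    return min(slave_positions, key=master_logs.index)

logs = ["mysql-bin.008", "mysql-bin.009", "mysql-bin.010"]
print(safe_purge_target(logs, ["mysql-bin.010", "mysql-bin.009"]))
```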
4.10.7 Replication FAQ

Q: Why do I sometimes see more than one Binlog_Dump thread on the master after I have
restarted the slave?
A: Binlog_Dump is a continuous process that is handled by the server in the following way:
• Catch up on the updates.
• Once there are no more updates left, go into pthread_cond_wait(), from which we
can be awakened either by an update or a kill.
• On wake up, check the reason. If we are not supposed to die, continue the Binlog_dump loop.
• If there is some fatal error, such as detecting a dead client, terminate the loop.
So if the slave thread stops on the slave, the corresponding Binlog_Dump thread on the
master will not notice it until after at least one update to the master (or a kill), which is
needed to wake it up from pthread_cond_wait(). In the meantime, the slave could have
opened another connection, which resulted in another Binlog_Dump thread.
The above problem should not be present in Version 3.23.26 and later versions. In Version
3.23.26 we added server-id to each replication server, and now all the old zombie threads
are killed on the master when a new replication thread connects from the same slave.
Q: How do I rotate replication logs?
A: In Version 3.23.28 you should use the PURGE MASTER LOGS TO command after determining
which logs can be deleted, and optionally backing them up first. In earlier versions the
process is much more painful, and cannot be safely done without stopping all the slaves in
the case that you plan to re-use log names. You will need to stop the slave threads, edit
the binary log index file, delete all the old logs, restart the master, start slave threads, and
then remove the old log files.
Q: How do I upgrade on a hot replication setup?
A: If you are upgrading pre-3.23.26 versions, you should just lock the master tables, let the
slave catch up, then run FLUSH MASTER on the master, and FLUSH SLAVE on the slave to
reset the logs, then restart new versions of the master and the slave. Note that the slave
can stay down for some time - since the master is logging all the updates, the slave will be
able to catch up once it is up and can connect.
After 3.23.26, we have locked the replication protocol for modifications, so you can upgrade
masters and slaves on the fly to a newer 3.23 version and you can have different versions of
MySQL running on the slave and the master, as long as they are both newer than 3.23.26.
Q: What issues should I be aware of when setting up two-way replication?
A: MySQL replication currently does not support any locking protocol between master and
slave to guarantee the atomicity of a distributed (cross-server) update. In other words,
it is possible for client A to make an update to co-master 1, and in the meantime, before it
propagates to co-master 2, client B could make an update to co-master 2 that will make the
update of client A work differently than it did on co-master 1. Thus, when the update of
client A makes it to co-master 2, it will produce tables that are different from what
you have on co-master 1, even after all the updates from co-master 2 have also propagated.
So you should not co-chain two servers in a two-way replication relationship, unless you are
sure that your updates can safely happen in any order, or unless you take care of mis-ordered
updates somehow in the client code.
You must also realise that two-way replication actually does not improve performance very
much, if at all, as far as updates are concerned. Both servers need to do the same amount
of updates each, as you would have one server do. The only difference is that there will
be a little less lock contention, because the updates originating on another server will be
serialised in one slave thread. This benefit, though, might be offset by network delays.
Q: How can I use replication to improve performance of my system?
A: You should set up one server as the master and direct all writes to it. Then configure
as many slaves as you have the money and rackspace for, distributing the reads
among the master and the slaves. You can also start the slaves with --skip-bdb,
--low-priority-updates and --delay-key-write-for-all-tables to get speed improvements
for the slaves. In this case the slaves will use non-transactional MyISAM tables instead
of BDB tables to get more speed.
Q: What should I do to prepare my client code to use performance-enhancing replication?
A: If the part of your code that is responsible for database access has been properly
abstracted/modularised, converting it to run with a replicated setup should be very smooth
and easy - just change the implementation of your database access to read from some slave
or the master, and to always write to the master. If your code does not have this level
of abstraction, setting up a replicated system will give you an opportunity and motivation
to clean it up. You should start by creating a wrapper library/module with safe_ functions
for connecting for reads, connecting for writes, doing a read, and doing a write, where
safe_ means that the function will take care of handling all the error conditions.
You should then convert your client code to use the wrapper library. It may be a painful
and scary process at first, but it will pay off in the long run. All applications that follow
the above pattern will be able to take advantage of a one-master/many-slaves solution. The
code will be a lot easier to maintain, and adding troubleshooting options will be trivial.
You will just need to modify one or two functions, for example, to log how long each query
took, or which query, among your many thousands, gave you an error. If you have written
a lot of code already, you may want to automate the conversion task by using Monty’s
replace utility, which comes with the standard distribution of MySQL, or just write your
own Perl script. Hopefully, your code follows some recognisable pattern. If not, then you
are probably better off re-writing it anyway, or at least going through and manually beating
it into a pattern.
Note that, of course, you can use different names for the functions. What is important is
having a unified interface for connecting for reads, connecting for writes, doing a read, and
doing a write.
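A minimal sketch of the routing behind such a wrapper module, using hypothetical host names (the actual connect and query calls of your client library, plus the error handling that makes the functions "safe_", would be layered on top of this):

```python
import random

# Hypothetical host names -- substitute your own master and slaves.
MASTER = "master.example.com"
SLAVES = ["slave1.example.com", "slave2.example.com"]

def reader_host():
    """Pick a host for a read: any slave, or the master itself."""
    return random.choice(SLAVES + [MASTER])

def writer_host():
    """All writes must go to the master."""
    return MASTER

def route(query):
    """Route a query: reads may go to a slave, everything else to the master."""
    if query.lstrip().lower().startswith(("select", "show")):
        return reader_host()
    return writer_host()
```

The four unified entry points described above (connect for reads, connect for writes, read, write) would each call one of these routing helpers and then handle connection and query errors before returning to the application.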
Q: When and how much can MySQL replication improve the performance of my system?
A: MySQL replication is most beneficial for a system with frequent reads and not so frequent
writes. In theory, by using a one master/many slaves setup you can scale by adding more
slaves until you either run out of network bandwidth, or your update load grows to the
point that the master cannot handle it.
In order to determine how many slaves you can get before the added benefits begin to
level out, and how much you can improve performance of your site, you need to know your
query patterns, and empirically (by benchmarking) determine the relationship between the
throughput on reads (reads per second, or max_reads) and on writes (max_writes) on a
typical master and a typical slave. The example below will show you a rather simplified
calculation of what you can get with replication for our imagined system.
Let’s say our system load consists of 10% writes and 90% reads, and we have determined
that max_reads = 1200 - 2 * max_writes, or in other words, our system can do 1200 reads
per second with no writes, our average write is twice as slow as average read, and the
relationship is linear. Let us suppose that our master and slave are of the same capacity,
and we have N slaves and 1 master. Then we have for each server (master or slave):
reads = 1200 - 2 * writes         (from benchmarks)
reads = 9 * writes / (N + 1)      (reads are split, but writes go to all servers)
9 * writes/(N + 1) + 2 * writes = 1200
writes = 1200/(2 + 9/(N + 1))
So if N = 0, which means we have no replication, our system can handle 1200/11, about
109 writes per second (which means we will have 9 times as many reads due to the nature
of our application).
If N = 1, we can get up to 184 writes per second.
If N = 8, we get up to 400.
If N = 17, 480 writes.
Eventually as N approaches infinity (and our budget negative infinity), we can get very
close to 600 writes per second, increasing system throughput about 5.5 times. However,
with only 8 servers, we increased it almost 4 times already.
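The arithmetic above can be verified with a few lines of code; the 1200 reads per second and the 9:1 read/write ratio are the figures assumed in this example, not universal constants:

```python
def max_writes(n_slaves):
    """Writes per second per the model above:
    reads = 1200 - 2*writes and reads = 9*writes/(N+1)."""
    return 1200 / (2 + 9 / (n_slaves + 1))

for n in (0, 1, 8, 17):
    print(n, int(max_writes(n)))
# -> 0 109, 1 184, 8 400, 17 480
```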
Note that our computations assumed infinite network bandwidth, and neglected several
other factors that could turn out to be significant on your system. In many cases, you may
not be able to make a computation similar to the one above that will accurately predict
what will happen on your system if you add N replication slaves. However, answering the
following questions should help you decide whether, and how much, replication will improve
the performance of your system:
• What is the read/write ratio on your system?
• How much more write load can one server handle if you reduce the reads?
• How many slaves do you have bandwidth for on your network?
Q: How can I use replication to provide redundancy/high availability?
A: With the currently available features, you would have to set up a master and a slave (or
several slaves), and write a script that will monitor the master to see if it is up, and instruct
your applications and the slaves to change to the new master in case of failure. Some suggestions:
• To tell a slave to change the master use the CHANGE MASTER TO command.
• A good way to keep your applications informed of the master's location is a dynamic
DNS entry for the master. With bind you can use nsupdate to dynamically update your DNS.
• You should run your slaves with the log-bin option and without log-slave-updates.
This way the slave will be ready to become a master as soon as you issue STOP SLAVE;
RESET MASTER, and CHANGE MASTER TO on the other slaves. It will also help you catch
spurious updates that may happen because of misconfiguration of the slave (ideally,
you want to configure access rights so that no client can update the slave, except for
the slave thread) combined with bugs in your client programs (they should never
update the slave directly).
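The promotion sequence suggested above can be sketched in a few statements; the host name is hypothetical:

```sql
-- On the slave being promoted to master:
STOP SLAVE;
RESET MASTER;

-- On each of the remaining slaves, point them at the new master
-- ('new-master.example.com' is a hypothetical name):
CHANGE MASTER TO MASTER_HOST='new-master.example.com';
SLAVE START;
```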
We are currently working on integrating an automatic master election system into MySQL,
but until it is ready, you will have to create your own monitoring tools.
4.10.8 Troubleshooting Replication
If you have followed the instructions, and your replication setup is not working, first elimi-
nate the user error factor by checking the following:
• Is the master logging to the binary log? Check with SHOW MASTER STATUS. If it is,
Position will be non-zero. If not, verify that you have given the master the log-bin
option and have set server-id.
• Is the slave running? Check with SHOW SLAVE STATUS. The answer is found in the
Slave_running column. If not, verify slave options and check the error log for messages.
• If the slave is running, did it establish connection with the master? Do SHOW
PROCESSLIST, find the thread with system user value in User column and none
in the Host column, and check the State column. If it says connecting to master,
verify the privileges for the replication user on the master, master host name, your
DNS setup, whether the master is actually running, whether it is reachable from the
slave, and if all that seems okay, read the error logs.
• If the slave was running, but then stopped, look at SHOW SLAVE STATUS output
and check the error logs. It usually happens when some query that succeeded on the
master fails on the slave. This should never happen if you have taken a proper snapshot
of the master, and never modify the data on the slave outside of the slave thread. If it
does, it is a bug, read below on how to report it.
• If a query that succeeded on the master refuses to run on the slave, and a full
database resync (the proper thing to do) does not seem feasible, try the following:
- First see if there is some stray record in the way. Understand how it got there,
then delete it and run SLAVE START.
- If the above does not work or does not apply, try to understand whether it would be safe
to make the update manually (if needed) and then ignore the next query from the master.
- If you have decided you can skip the next query, do SET SQL_SLAVE_SKIP_COUNTER=1;
SLAVE START; to skip a query that does not use auto_increment or last_insert_id,
or SET SQL_SLAVE_SKIP_COUNTER=2; SLAVE START; otherwise.
The reason auto_increment/last_insert_id queries are different is that they take
two events in the binary log of the master.
- If you are sure the slave started out perfectly in sync with the master, and no one
has updated the tables involved outside of slave thread, report the bug, so you will
not have to do the above tricks again.
• Make sure you are not running into an old bug by upgrading to the most recent version.
• If all else fails, read the error logs. If they are big, run grep -i slave
/path/to/your-log.err on the slave. There is no generic pattern to search for on the
master, as the only errors it logs are general system errors; if it can, the master
will send the error to the slave when things go wrong.
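The first three checks above come down to a few statements you can run from the mysql client:

```sql
-- On the master: is binary logging enabled? Position should be non-zero.
SHOW MASTER STATUS;

-- On the slave: is the slave thread running? Check the Slave_running column.
SHOW SLAVE STATUS;

-- On the slave: find the thread with system user in the User column
-- and inspect its State column.
SHOW PROCESSLIST;
```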
When you have determined that there is no user error involved, and replication still either
does not work at all or is unstable, it is time to start working on a bug report. We need to
get as much info as possible from you to be able to track down the bug. Please do spend
some time and effort preparing a good bug report. Ideally, we would like to have a test case
in the format found in mysql-test/t/rpl* directory of the source tree. If you submit a
test case like that, you can expect a patch within a day or two in most cases, although, of
course, your mileage may vary depending on a number of factors.
The second best option is just a program with easily configurable connection arguments for
the master and the slave that will demonstrate the problem on our systems. You can write
one in Perl or in C, depending on which language you know better.
If you have one of the above ways to demonstrate the bug, use mysqlbug to prepare a bug
report and send it to email@example.com. If you have a phantom - a problem that does
occur but you cannot duplicate "at will":
• Verify that there is no user error involved. For example, if you update the slave outside
of the slave thread, the data will be out of sync, and you can have unique key violations
on updates, in which case the slave thread will stop and wait for you to clean up the
tables manually to bring them in sync.
• Run slave with log-slave-updates and log-bin - this will keep a log of all updates
on the slave.
• Save all evidence before resetting the replication. If we have no information, or only
sketchy information, it will take us a while to track down the problem. The evidence
you should collect is:
- All binary logs on the master
- All binary logs on the slave
- The output of SHOW MASTER STATUS on the master at the time you discovered the problem
- The output of SHOW SLAVE STATUS on the slave at the time you discovered the problem
- Error logs on the master and on the slave
• Use mysqlbinlog to examine the binary logs. The following should be helpful to find
the trouble query, for example:
mysqlbinlog -j pos_from_slave_status /path/to/log_from_slave_status | head
Once you have collected the evidence on the phantom problem, try hard to isolate it into a
separate test case first. Then report the problem to firstname.lastname@example.org with as much
info as possible.