mysql binlog replay / max_allowed_packet
Trying to replay MySQL binlogs? And running into max_allowed_packet errors? Fear not. Likely this is not corruption, but packets that really are that big.
In 2021 I wrote about mariabackup and selective table restore. That blog entry shows how one might restore a mariabackup-saved snapshot of a MariaDB (MySQL) database. Specifically, it focuses on recovering only one or two tables selectively and it details optional GPG decryption and decompression.
Here's a recap of the mandatory steps for full recovery and additionally how to replay binlogs to a specific point in time.
Recap snapshot restore
Assume we're starting with a directory with the snapshot of the DB, already decompressed. It might look like this:
$ ls -F1 /path/to/snapshot
aria_log.00000001
aria_log_control
backup-my.cnf
ib_buffer_pool
ibdata1
ib_logfile0
my_project_1/
my_project_2/
my_project_3/
mysql/
performance_schema/
xtrabackup_binlog_info
xtrabackup_checkpoints
xtrabackup_info
Move the InnoDB log files (ib_logfile*) to /var/log/mysql, the rest to /var/lib/mysql, and give the files the correct ownership (mysql:mysql).
# mv /path/to/snapshot/ib_logfile* /var/log/mysql/
# mv /path/to/snapshot/* /var/lib/mysql/
# chown -R mysql:mysql /var/lib/mysql/ /var/log/mysql/
From there, become the mysql user and “prepare” the data.
$ cd /var/lib/mysql
$ mariabackup --prepare --use-memory=20G --target-dir=.
This might take some time, depending on how large the snapshot is.
Then, start the database daemon, optionally with --skip-grant-tables if you don't have credentials on hand.
NOTE: If you're restoring something live, make sure clients cannot start connecting before you're all done with the restore.
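How exactly you start it depends on your setup; if you start the daemon by hand with the mysqld_safe wrapper (an assumption, your init system may differ), it could look something like this, with --skip-networking as one way to keep clients out while you work:
# mysqld_safe --skip-grant-tables --skip-networking &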
Replaying binlogs
Time to replay the binlogs.
$ ls -F1 /path/to/binlogs
mariadb-bin.079217
mariadb-bin.079218
mariadb-bin.079219
mariadb-bin.079220
mariadb-bin.079221
mariadb-bin.079222
Check in xtrabackup_binlog_info where we should start: it holds the binlog file name, the position in that file, and the GTID at the time of the snapshot.
$ cat /var/lib/mysql/xtrabackup_binlog_info
mariadb-bin.079219 93481903 0-1-2297759094
Dump the binlogs to stdout and feed that to mysql. You want --binary-mode in case there are binary blobs in them. (Without it, you may get: ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled and mysql is run in non-interactive mode.)
$ mysqlbinlog --start-position=93481903 \
/path/to/binlogs/mariadb-bin.{079219,079220,079221,079222} | \
mysql --binary-mode
If all goes well, this replays all the log statements from the binlog, starting from the last position seen in the snapshot.
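Side note: the “specific point in time” part from the intro is handled by mysqlbinlog itself, which accepts --stop-datetime (and --stop-position) to cut the replay short. For example, with a made-up timestamp:
$ mysqlbinlog --start-position=93481903 \
    --stop-datetime='2024-05-05 14:00:00' \
    /path/to/binlogs/mariadb-bin.{079219,079220,079221,079222} | \
    mysql --binary-mode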
Unless, that is, you run into this:
ERROR 2006 (HY000) at line 21267838: MySQL server has gone away
... and see this in the error.log:
[Warning] Aborted connection 9 to db:
'unconnected' user: 'root' host: 'localhost'
(Got a packet bigger than 'max_allowed_packet' bytes)
Darn. An error, and we don't even know which binlog caused it.
And also: why would packets be bigger than they were when the statements were originally run? If the MariaDB configuration is unchanged, then surely replaying should work just as well as the original run did. Right?
Wrong.
binlog_format
The binary log (binlog) records changes in the database either as statements or as changed rows. Recording statements can be cheaper in terms of space but riskier in the face of simultaneous updates. Recording changed rows is cheaper to process (because rows are located by index) and double-checks data integrity across nodes (if there's an update to a row you didn't have, you can abort early and tell the user to fix the corruption).
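Which of the two you get is governed by the binlog_format setting; something like this shows what a server is configured to do:
$ mysql -e 'SELECT @@GLOBAL.binlog_format'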
If we look at a binlog by passing it to mysqlbinlog, an UPDATE might look like this:
UPDATE aboutconfig_item SET datatype_id = 2 WHERE datatype_id = 1
That's about 65 bytes of packet size. And for binlog-format=STATEMENT style binary logging, that is about all the space it takes.
But that simple query can affect many rows, and when they are logged as binlog-format=ROW (and sometimes MIXED) they are instead recorded as follows:
# at 4112460
#240505 14:51:46 server id 1 end_log_pos 4112460 CRC32 0x91652288 Annotate_rows:
#Q> UPDATE aboutconfig_item SET datatype_id = 2 WHERE datatype_id = 1
#240505 14:51:46 server id 1 end_log_pos 4112537 CRC32 0x55707891 Table_map: `example_db`.`aboutconfig_item` mapped to number 799
# at 4112537
# at 4120607
# at 4128557
#240505 14:51:46 server id 1 end_log_pos 4120607 CRC32 0xcd652823 Update_rows: table id 799
#240505 14:51:46 server id 1 end_log_pos 4128557 CRC32 0xebf94293 Update_rows: table id 799
#240505 14:51:46 server id 1 end_log_pos 4133783 CRC32 0x0275ce04 Update_rows: table id 799 flags: STMT_END_F
BINLOG '
4oA3ZhMBAAAATQAAAJnAPgAAAB8DAAAAAAEACmV4YW1wbGVfZGIAEGFib3V0Y29uZmlnX2l0ZW0A
BRISAw/8BQAA/AAEAJF4cFU=
4oA3ZhgBAAAAhh8AAB/gPgAAAB8DAAAAAAAABf//4JmIiOkPmYiI6Q8CAAAADGF1dG9sb2cucGF0
...
HGludm9pY2UucGF5bWVudC10ZXJtLmRlZmF1bHQCAAAAMzDoGS0h
'/*!*/;
# Number of rows: 54
# at 4112299
#240505 14:51:43 server id 1 end_log_pos 4112330 CRC32 0xc9363528 Xid = 271936
COMMIT/*!*/;
The #Q> is purely a comment. The BINLOG statement holds the data of all affected rows. And its “packet size” is significantly larger than 65 bytes.
If you add verbosity (-v) to mysqlbinlog you'll also see:
### UPDATE `example_db`.`aboutconfig_item`
### WHERE
### @1='2011-03-04 14:36:15'
### @2='2011-03-04 14:36:15'
### @3=2
### @4='autolog.path'
### @5='/srv/data/example/logs'
### SET
### @1='2011-03-04 14:36:15'
### @2='2011-03-04 14:36:15'
### @3=1
### @4='autolog.path'
### @5='/srv/data/example/logs'
This means you get one of those — fairly efficiently stored, but still — for every one of those 54 affected rows. You can see how that might add up if you're doing a coarse update on a big table.
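As an aside: if you only want to inspect what a row event did, without the base64 blobs, mysqlbinlog can decode it for you. Row events are then shown only as the commented ### pseudo-SQL, so the output is for reading, not for piping into mysql:
$ mysqlbinlog --base64-output=DECODE-ROWS -v \
    /path/to/binlogs/mariadb-bin.079219 | less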
A statement (query) might be large, like an UPDATE with a very long list of WHERE id IN (1, 2, 3, 4, .... 99999). But that is nothing compared to the size of affected data for some queries. This toy example of 54 rows already produces a BINLOG statement of about 30 kilobytes.
max_allowed_packet
When you're using Galera wsrep replication, like we do, that replication also uses a form of row-based replication. The changes get recorded in the binlog as those BINLOG statements we saw above. And, as we've just seen, those statements can be quite large.
And when feeding those statements to the mysql client and the mysqld server, they are bound by their respective max_allowed_packet settings.
During regular operation, wsrep itself is not limited by that setting. However, when you're replaying binlogs, the setting suddenly becomes important.
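Checking what the server currently allows is a one-liner (the mysql client has a separate limit of its own; more on that below):
$ mysql -e 'SELECT @@GLOBAL.max_allowed_packet'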
After running into the Got a packet bigger than 'max_allowed_packet' bytes error, we whipped up a little script to run on the binlogs that appeared to be problematic:
#!/usr/bin/env python3
# Take mysqlbinlog input. Show longest lines (queries) and longest blocks of
# data (rows to update). Set max_allowed_packet to a higher value.
import sys
from collections import namedtuple
Stats = namedtuple('Stats', 'line blocksize linelength')
def numsep(value):
"1234567 -> '1_234_567'"
value = str(value)[-1::-1]
return '_'.join(value[i:i+3] for i in range(0, len(value), 3))[-1::-1]
def count(fp):
blockstart = linelength = blocksize = -1
for lineno, line in enumerate(fp, start=1):
if line.startswith(b'# at '):
if blocksize != -1:
yield Stats(
line=blockstart, blocksize=blocksize,
linelength=linelength)
blockstart = lineno
blocksize = 0
blocksize += len(line)
linelength = max(linelength, len(line))
if blocksize != -1:
yield Stats(
line=blockstart, blocksize=blocksize, linelength=linelength)
def main(fp):
maxblocksize, maxlinelength = -1, -1
maxblocksizepos, maxlinelengthpos = -1, -1
for stats in count(fp):
if stats.blocksize > maxblocksize:
maxblocksize = stats.blocksize
maxblocksizepos = stats.line
if stats.linelength > maxlinelength:
maxlinelength = stats.linelength
maxlinelengthpos = stats.line
print('largest block at line', maxblocksizepos, ':', numsep(maxblocksize))
print('longest line at line', maxlinelengthpos, ':', numsep(maxlinelength))
if __name__ == '__main__':
if sys.stdin.isatty():
print('please feed me mysqlbinlog output', file=sys.stderr)
exit(1)
main(sys.stdin.buffer)
Running this on the aforementioned files got us this:
$ mysqlbinlog /path/to/binlogs/mariadb-bin.079219 | ./binlogstats.py
largest block at line 13099650 : 5_655_846
longest line at line 13099646 : 4_186_803
$ mysqlbinlog /path/to/binlogs/mariadb-bin.079220 | ./binlogstats.py
largest block at line 10869443 : 130_846_886
longest line at line 17253384 : 4_708_572
Some of these statements might be very long — up to 5 megabytes — but the row updates can apparently be huge — up to 130 megabytes.
In this case, we determined that slightly over 130MiB was the max_allowed_packet we needed, significantly more than the value we had configured. And indeed, increasing it to 256MiB for good measure made the replay succeed.
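For completeness, a sketch of what that can look like (268435456 bytes is 256MiB; pick whatever your binlogs need). SET GLOBAL only affects connections made after the change, so set it before starting the replay, and give the mysql client the same budget through its own option:
$ mysql -e 'SET GLOBAL max_allowed_packet = 268435456'
$ mysqlbinlog --start-position=93481903 \
    /path/to/binlogs/mariadb-bin.{079219,079220,079221,079222} | \
    mysql --binary-mode --max-allowed-packet=268435456
To make it survive a restart, also set max_allowed_packet in the [mysqld] section of the server configuration.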