Thanks to knielsen for the nice reply. I would like to add some comments.

2016-10-21 22:31 GMT+08:00 Simon Mudd <simon.mudd@booking.com>:
Hi Kristian,

> On 21 Oct 2016, at 13:22, Kristian Nielsen <knielsen@knielsen-hq.org> wrote:
>
> Simon Mudd <simon.mudd@booking.com> writes:
>
>> I have not looked at the code in detail but a couple of questions
>> related to the implementation come to mind:
>
> The author is probably the best to answer, but I can tell what I know from
> reviewing the patch:
>
>> (1) Why not use a single compressed event type and encapsulate the whole original event inside ?
>
> This would result in higher overhead on each event. There is a fixed header
> to each event, which cannot be compressed (without significantly more
> changes to the code, at least). Thus, a single event encapsulating another
> full compressed event would require two headers, one uncompressed and one
> compressed.

Ok. I’ve been assuming the headers were small (from some casual browsing of things
related to the binlog router some time ago), but that may be wrong.
 
Now, the individual compressed event types inherit from the original event types,
so they do not need much extra logic. If we used a single compressed event type
that encapsulated the whole original event, I don't think it could be done without
a lot of code modification. For example:

bool Query_log_event::write()
{
  ...
  /* Each piece of the event is written separately; a generic
     compressed container would have to cut across all of this. */
  return write_header(event_length) ||            // common event header
         write_data(buf, QUERY_HEADER_LEN) ||     // query post-header
         write_post_header_for_derived() ||
         write_data(start_of_status, (uint) (start-start_of_status)) || // status vars
         write_data(safe_str(db), db_len + 1) ||  // database name
         write_data(query, q_len) ||              // query text
         write_footer();                          // checksum footer, if enabled
}

And I agree that it would bring higher overhead.
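To make that concrete, here is a rough sketch of how a compressed subtype
can reuse the parent's serialization and only swap in a compressed body, so
only one (uncompressed) header is written. It is a simplification:
binlog_buf_compress() is a stand-in name rather than the patch's actual
helper, and error handling is abbreviated.

/*
  Sketch only: the compressed subtype inherits from the original
  event and overrides write(); the parent then writes one header
  followed by the (already compressed) body.
*/
class Query_compressed_log_event : public Query_log_event
{
public:
  /* constructors omitted */
  bool write()
  {
    uchar *comp_buf;
    size_t comp_len;
    if (binlog_buf_compress((const uchar *) query, q_len,
                            &comp_buf, &comp_len))
      return true;                        /* compression failed */
    const char *saved_query= query;
    uint32 saved_len= q_len;
    query= (const char *) comp_buf;       /* swap in compressed body */
    q_len= (uint32) comp_len;
    bool res= Query_log_event::write();   /* parent writes header + body */
    query= saved_query;                   /* restore original pointers */
    q_len= saved_len;
    my_free(comp_buf);
    return res;
  }
};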
 
 
> There might be other pros and cons to each method, this was just what
> occurred to me.
>
>> Is it not possible to have a "generic compressed event”? Mark mentions
>> allowing selection of the compression type and that seems useful and I
>> would expect the compressed event to have this information and the
>> compressed content.
>
> The current patch has reserved three bits for the compression method. Thus,
> it would be possible in the future to add more compression types (up to a
> maximum of 8, at least).

OK, good to know.
>
> Actually, it would be good if the code would check the compression type and
> fail if it is unknown. This will help get proper errors on old slaves if new
> compression types are added in the future.

Indeed, one of the things about the current binlog format is that there's little
complete documentation outside of the code. The code changes and there's no
clear specification. It makes things much better if what's currently implicit is
made explicit and if the specs live outside of the code. That's something I
tried to mention to Oracle with the new X protocol; it's obviously out of scope
of this discussion, but the intricacies of the binlog traffic are quite hard to
follow for mere mortals…

 
And the new code will check the compression type.
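A minimal sketch of such a check, assuming the algorithm id sits in the three
reserved bits of the first byte of the compressed payload (the bit positions,
enum, and function name here are illustrative, not the final on-disk format):

/*
  Illustrative only: fail fast on an unknown compression algorithm,
  so an old slave gets a clear error instead of decoding garbage.
*/
enum binlog_compress_alg { BINLOG_COMPRESS_ALG_ZLIB= 0 };

static bool check_compress_alg(unsigned char header_byte)
{
  unsigned int alg= (header_byte >> 4) & 0x07;  /* three reserved bits */
  return alg != BINLOG_COMPRESS_ALG_ZLIB;       /* true = unsupported */
}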
 
>
>> (2) Senders would try to compress the event as requested. If the
>> compressed event is not any smaller then do not bother compressing it,
>> just send the original event.
>
> This is not in the current patch, but it could be added easily enough. It
> might not be needed though, the log_bin_compress_min_len kind of serves a
> similar purpose.

It just saves the receiver doing extra work and I’d expect that allowing that
would be trivial even if not implemented now. Just a thought anyway.

 
As knielsen said, the main purpose of log_bin_compress_min_len is to avoid
compressing binlog events that are too small.
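For what it's worth, the fall-back Simon suggests could look roughly like this
(compressBound() and compress() are standard zlib; the surrounding names and
calling convention are a made-up sketch):

#include <zlib.h>

/*
  Sketch: keep the compressed copy only when the event is large
  enough and compression actually shrinks it. The caller must
  provide dst with at least compressBound(src_len) bytes.
*/
static bool maybe_compress(const unsigned char *src, size_t src_len,
                           unsigned char *dst, size_t *dst_len,
                           size_t min_len)
{
  if (src_len < min_len)
    return false;                     /* below log_bin_compress_min_len */
  uLongf out_len= compressBound((uLong) src_len);
  if (compress((Bytef *) dst, &out_len,
               (const Bytef *) src, (uLong) src_len) != Z_OK ||
      out_len >= src_len)
    return false;                     /* not smaller: send the original */
  *dst_len= out_len;
  return true;                        /* send the compressed event */
}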
 


>> Perhaps I am missing something but the need to compress events
>> individually probably requires quite a bit more logic and will not be
>> as extensible.
>
> Perhaps. In any case, the patch at hand is based on individually compressed
> events. Even with individual event types, the patch is nice and clean and
> leaves much of the existing logic unchanged.
>
> My guess is that this will primarily be useful for row-based binlogging,
> where reasonable compression rates can be expected in many cases where
> row-based binlogging has particularly high overhead compared to
> statement-based. Per-event compression is in any case of limited use for
> events that are small in size.

Sure, so workload clearly affects the usefulness of a patch such as this one.
That said, I've seen SBR traffic with very large packets, as many bulk changes
are massaged to fit max_allowed_packet, so even with SBR you can make a
huge size saving if you have such statements.

Minimal RBR can help reduce some of the space concerns that RBR generates,
and I use it a lot (and requested this functionality) specifically for this reason.
However, there are use cases where it cannot be used (exporting data to
external systems that may need/want the whole “change log stream”),
so clearly compression here may be a good bit of extra functionality.

Fixing the case for RBR is good but I feel the focus may be too narrow,
especially if the approach can be used more generically.

I certainly have some SBR machines which generate large volumes of binlogs,
and being able to compress the events they generate on disk would be most helpful.

Thanks for sharing your thoughts.

Simon

SBR also works with this feature.