Hank -
A very similar idea has been implemented in XtraDB of Percona Server
5.7; see "Parallel Doublewrite" at
https://www.percona.com/doc/percona-server/5.7/performance/xtradb_performance_improvements_for_io-bound_highly-concurrent_workloads.html
AFAIK, this feature is not in XtraDB of MariaDB as of today.
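For context, the classic (non-parallel) doublewrite scheme the question is about can be sketched roughly as below. This is a simplified simulation, not InnoDB code: the page size, batch size, file names, and the `flush_batch` helper are all illustrative stand-ins (e.g. `BATCH_SIZE` plays the role of srv_doublewrite_batch_size).

```python
import os

PAGE_SIZE = 16   # illustrative; InnoDB's default page size is 16 KiB
BATCH_SIZE = 4   # stand-in for srv_doublewrite_batch_size (120 in XtraDB)

def flush_batch(datafile, dblwr_file, dirty_pages):
    """Flush a batch of dirty pages with doublewrite protection.

    dirty_pages is a list of (page_no, page_bytes) pairs.
    """
    # Step 1: write the whole batch sequentially into the doublewrite
    # area and fsync it, so every page has a durable, untorn copy
    # before we touch its real location.
    with open(dblwr_file, "wb") as f:
        for page_no, data in dirty_pages:
            f.write(page_no.to_bytes(8, "little") + data)
        f.flush()
        os.fsync(f.fileno())
    # Step 2: only now write each page in place; a torn write here can
    # be repaired from the doublewrite copy during crash recovery.
    with open(datafile, "r+b") as f:
        for page_no, data in dirty_pages:
            f.seek(page_no * PAGE_SIZE)
            f.write(data)
        f.flush()
        os.fsync(f.fileno())
```

The point of the parallel variant is that step 1 becomes a serialization point when all flusher threads share one small doublewrite area, which is why giving each batch its own doublewrite slot helps under IO-bound, highly concurrent load.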
2017-02-09 9:32 GMT+02:00 Hank Lyu <hanklgs9564@gmail.com>:
> Hello:
>
> In XtraDB, if we adjust the size of the doublewrite buffer and the related
> variable (i.e. srv_doublewrite_batch_size), in theory we can get better
> throughput when the buffer pool flushes to disk.
>
> I wonder why the doublewrite buffer size is 2 blocks and each flush is
> 120 pages (decided by srv_doublewrite_batch_size), instead of a
> doublewrite size of 8 blocks and a flush of 500 pages or more.
> Is the concern that flushing would occupy too many resources, or is
> there another reason?
>
> Best Regards,
> Hank Lyu
>
>
--
Laurynas
_______________________________________________
Mailing list: https://launchpad.net/~maria-developers
Post to : maria-developers@lists.launchpad.net
Unsubscribe : https://launchpad.net/~maria-developers
More help : https://help.launchpad.net/ListHelp