On 08.12.22 at 17:22, Marko Mäkelä wrote:
On Thu, Dec 8, 2022 at 5:41 PM Reindl Harald <h.reindl@thelounge.net> wrote:
> import the dumps as it's normal for postgresql

I think you mean mysql, not postgresql...
No, I mean PostgreSQL, that piece of crap that keeps breaking after dist-upgrades because you need to do a dump+restore at every version change - real fun, because you need to remember to take the dump *before* the upgrade.
The other side of the coin is a questionable InnoDB "optimization" that was finally fixed in MySQL 5.1.48 (in 2010) due to an external bug report. Heikki Tuuri firmly believed that zeroing out pages when they are initialized burns too many CPU cycles, so it is better to just write whatever garbage happens to be in the unused bytes of data pages. He actually forbade me from fixing it back in 2004 or 2005, and from spending any time on proving what kind of bad things could happen. (Back then, even the FIL_PAGE_TYPE field was written uninitialized on anything other than B-tree index pages. So, if the page type field said FIL_PAGE_INDEX, you could not tell whether the page actually was an index page.)
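To illustrate what that means in practice, here is a rough Python sketch (not InnoDB code) that reads the FIL_PAGE_TYPE field of every page in a data file. It assumes the default 16 KiB page size; the offset (24) and the FIL_PAGE_INDEX constant (0x45BF) come from the InnoDB page header layout. On files written before MySQL 5.1.48 the field may contain leftover garbage on non-index pages, so a matching value proves nothing:

    # Read the FIL_PAGE_TYPE of each page in an .ibd/ibdata file.
    # FIL_PAGE_TYPE is a 16-bit big-endian field at byte offset 24;
    # FIL_PAGE_INDEX == 0x45BF marks a B-tree index page.
    import struct
    import sys

    PAGE_SIZE = 16 * 1024            # default innodb_page_size
    FIL_PAGE_TYPE_OFFSET = 24
    FIL_PAGE_INDEX = 0x45BF

    with open(sys.argv[1], "rb") as f:
        page_no = 0
        while page := f.read(PAGE_SIZE):
            (page_type,) = struct.unpack_from(">H", page, FIL_PAGE_TYPE_OFFSET)
            if page_type == FIL_PAGE_INDEX:
                # On pre-5.1.48 files this may be stale garbage rather
                # than proof that the page really is an index page.
                print(f"page {page_no}: claims to be an index page")
            page_no += 1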
WTF - but that doesn't change anything - you just can't dump+restore multi-GB databases in production environments, and I expect a new software version to simply be able to read its older data.
I have tried to keep the implications in mind, but there has been at least one serious bug regarding this: https://jira.mariadb.org/browse/MDEV-27800
"cool" the only server where i care about InnoDB at all dates back to 2009
If a database with InnoDB had been originally initialized before MySQL 5.1.48 and you kept upgrading the binary files, you could have a similar ticking time bomb in some other data structure, which might be forgotten when a future minor change to the data file format is made. This would not be caught by normal upgrade tests, which initialize a database using a previous server version (say, 10.2) and then upgrade to 10.3. To catch something like this, you would have to initialize and populate the database in MySQL 5.1.47 or earlier, with a suitable usage pattern that actually causes some unused bytes to contain suitably unlucky garbage. (On a freshly initialized database they could easily be 0.)
Pffff - if you can read the data for a dump, you could just as well re-create the InnoDB files online, like with "ALTER/OPTIMIZE TABLE", and while you are at it, get rid of "ib_logfile0" and "ib_logfile1", which even in file-per-table mode hold data you can't clean up from crap caused by a crash 13 years ago.
MariaDB Server 10.4 introduced a new file format, innodb_checksum_algorithm=full_crc32, and MariaDB Server 10.5 made it the default. Any files that were created while that setting is active are guaranteed to have all unused bytes written as zeroes. It also fixes a peculiar design decision in the old format: some bytes of the page are not covered by any checksum, and a page is considered valid if any of the non-full_crc32 checksum algorithms happens to produce a match. This includes the magic 0xdeadbeef written for innodb_checksum_algorithm=none.
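To make the difference concrete, here is a minimal sketch (again not the server's actual code) of the two validation schemes. It assumes 16 KiB pages and uses zlib.crc32 as a stand-in for the CRC-32C that the real full_crc32 format uses; the point is structural: the legacy path accepts a page if any of several checks matches, including the 0xdeadbeef magic written by innodb_checksum_algorithm=none, whereas full_crc32 has exactly one checksum, stored in the last 4 bytes, covering every other byte of the page:

    import struct
    import zlib

    PAGE_SIZE = 16 * 1024
    NO_CHECKSUM_MAGIC = 0xDEADBEEF   # written when checksums are disabled

    def legacy_accepts(page: bytes) -> bool:
        # Legacy formats: the page passes if *any* known algorithm matches.
        # Only the "none" magic is shown here; the real code would also try
        # crc32 and the old innodb checksums on the same fields.
        (header_field,) = struct.unpack_from(">I", page, 0)               # FIL_PAGE_SPACE_OR_CHKSUM
        (trailer_field,) = struct.unpack_from(">I", page, PAGE_SIZE - 8)  # old-style trailer checksum
        return header_field == NO_CHECKSUM_MAGIC and trailer_field == NO_CHECKSUM_MAGIC

    def full_crc32_ok(page: bytes) -> bool:
        # full_crc32: one checksum in the last 4 bytes, covering everything
        # before it - no uncovered gaps and no magic bypass values.
        (stored,) = struct.unpack_from(">I", page, PAGE_SIZE - 4)
        return stored == (zlib.crc32(page[:PAGE_SIZE - 4]) & 0xFFFFFFFF)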
And what happens in the long run with data that has no checksums at all, because it was possible to completely disable them in the past?
Maybe we should consider eventually deprecating write support for the non-full_crc32 format, to force a fresh start.