Hi, Jaco,

On Jun 24, Jaco Kroon via developers wrote:
<...>
> Based on this I'm inferring that:
> 1. Contextually the difference between BINARY() and CHAR() types is the character set (i.e., binary is implemented as a kind of character set for strings, or perhaps strings are binaries with a specific character set).
Yes
> 2. The inet6 storage column (which I now realize is simply BINARY(16), with a transform applied to convert strings to and from the BINARY() format; I think the technical term in the code is "fixed binary storage").
I wouldn't say it's BINARY(16); it is stored in 16 bytes and its charset is binary, but it's no more BINARY(16) than BIGINT is BINARY(8).
> 3. When string (character) data gets sent *to* an inet6 column, a "character set conversion" is performed; when the value comes from a UDF it is already BINARY, so no conversion is actually performed, and since the BINARY length is *not* 16 bytes, we get the above warning, and thus comparisons behave as if against a NULL value and are always false.
Yes
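
For illustration of points 2 and 3 (this sketch is not from the thread): since a UDF's string result already has the binary character set, the only way for it to line up with an inet6 column at that level is to return the packed 16-byte form rather than the human-readable text. Below is a minimal sketch against the classic UDF API; the name inet6_pack and the use of inet_pton() are assumptions for the example, and whether this makes the comparison behave as desired is not confirmed here.

/* Hypothetical UDF: returns the packed 16-byte binary form of an
   IPv6 address, so the result has both the binary charset and the
   16-byte length that an inet6 column stores internally. */
#include <mysql.h>        /* UDF_INIT, UDF_ARGS, STRING_RESULT */
#include <arpa/inet.h>    /* inet_pton, INET6_ADDRSTRLEN */
#include <string.h>

my_bool inet6_pack_init(UDF_INIT *initid, UDF_ARGS *args, char *message)
{
  if (args->arg_count != 1 || args->arg_type[0] != STRING_RESULT)
  {
    strcpy(message, "inet6_pack() requires one string argument");
    return 1;
  }
  initid->max_length= 16;   /* fixed 16-byte binary result */
  initid->maybe_null= 1;
  return 0;
}

char *inet6_pack(UDF_INIT *initid, UDF_ARGS *args, char *result,
                 unsigned long *length, char *is_null, char *error)
{
  char buf[INET6_ADDRSTRLEN];
  /* UDF string arguments are not guaranteed to be NUL-terminated */
  if (!args->args[0] || args->lengths[0] >= sizeof(buf))
  {
    *is_null= 1;
    return NULL;
  }
  memcpy(buf, args->args[0], args->lengths[0]);
  buf[args->lengths[0]]= 0;
  if (inet_pton(AF_INET6, buf, result) != 1)   /* writes 16 bytes into result */
  {
    *is_null= 1;
    return NULL;
  }
  *length= 16;
  return result;
}

Such a function would be loaded with the usual CREATE FUNCTION ... SONAME statement and compared directly against the inet6 column; again, this only sketches the idea discussed above, not a tested workaround.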
> I've been contemplating possible "solutions" to the problem that don't break backwards compatibility, and it's tricky. The below aims at a simple way of specifying the return "character set" of a string result in more detail, and possibly even at looking at the character sets of parameters.
UDF is an ancient feature, more than 25 years old; it was added way before character set support. It's rather basic. To fix that we've implemented function plugins in MariaDB. It's a bit more boilerplate code (well, notably more), but they can do anything a built-in function does. See https://github.com/MariaDB/server/tree/11.6/plugin/func_test

Regards,
Sergei
Chief Architect, MariaDB Server
and security@mariadb.org