Re: [Maria-developers] a94502ab674: Added typedef decimal_digits_t (uint16) for number of decimals
Hi, Michael!

On Mar 26, Michael Widenius wrote:
revision-id: a94502ab674 (mariadb-10.5.2-511-ga94502ab674)
parent(s): 2be9b69f4ff
author: Michael Widenius <michael.widenius@gmail.com>
committer: Michael Widenius <michael.widenius@gmail.com>
timestamp: 2021-03-24 14:19:55 +0200
message:
Added typedef decimal_digits_t (uint16) for number of decimals

For fields and Items uint8 should be good enough. After discussions with
Alexander Barkov we chose uint16 (for now) as some format functions may accept
+256 digits.

The reason for this patch was to make the storage of decimal digits similar.
Before this patch decimals was stored/used as uint8, int and uint.

Changed most decimal variables and functions to use the new typedef.
diff --git a/include/decimal.h b/include/decimal.h
index cab18f99348..f713409ede1 100644
--- a/include/decimal.h
+++ b/include/decimal.h
@@ -53,10 +53,11 @@ int decimal2double(const decimal_t *from, double *to);
 int double2decimal(double from, decimal_t *to);
 int decimal_actual_fraction(const decimal_t *from);
 int decimal2bin(const decimal_t *from, uchar *to, int precision, int scale);
-int bin2decimal(const uchar *from, decimal_t *to, int precision, int scale);
+int bin2decimal(const uchar *from, decimal_t *to, int precision,
+                decimal_digits_t scale);
-int decimal_size(int precision, int scale);
-int decimal_bin_size(int precision, int scale);
+int decimal_size(int precision, decimal_digits_t scale);
+int decimal_bin_size(int precision, decimal_digits_t scale);
this is *very* confusing.

"decimal" in the context of an Item means "number of digits after a dot".
"decimal" in decimal.h means the whole number, and the part after a dot is
"scale".

decimal_digits_t in decimal.h does *not* mean what you want. I'm reviewing this
patch of yours for the third time (first was in Sept, then in Dec), and only
now I finally understood what you mean (I think).

Please, please, don't use decimal_digits_t in decimal.h.

FYI, more confusing terminology that I might need below:
"precision" for decimal numbers is the total number of digits in a number
"precision" for temporal types means the number of digits after a dot
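To keep the three vocabularies apart, here is a small illustrative sketch
(mine, not from the patch), using a hypothetical DECIMAL(10,3) column as the
example:

  /* For a value such as 1234567.890 stored in DECIMAL(10,3): */
  unsigned precision= 10;                /* SQL / decimal.h "precision": total number of digits  */
  unsigned scale= 3;                     /* decimal.h "scale": digits after the dot              */
  unsigned decimals= 3;                  /* Item/Field "decimals": the same digits after the dot */
  unsigned int_part= precision - scale;  /* digits before the dot: 7                             */
  /* For temporal types, e.g. TIME(3), "precision" is 3, i.e. the fractional
     digits, which corresponds to "scale"/"decimals", not to total digits.  */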
int decimal_result_size(decimal_t *from1, decimal_t *from2, char op, int param);
diff --git a/include/m_ctype.h b/include/m_ctype.h
index 5fa8f28ff7a..66ca2bf4537 100644
--- a/include/m_ctype.h
+++ b/include/m_ctype.h
@@ -79,6 +79,7 @@
 typedef const struct my_collation_handler_st MY_COLLATION_HANDLER;
 typedef const struct unicase_info_st MY_UNICASE_INFO;
 typedef const struct uni_ctype_st MY_UNI_CTYPE;
 typedef const struct my_uni_idx_st MY_UNI_IDX;
+typedef uint16 decimal_digits_t;
This is an Item (and Field) specific declaration; please put it in field.h or item.h
typedef struct unicase_info_char_st
{

diff --git a/sql/field.cc b/sql/field.cc
index 52074417046..08c168e0e21 100644
--- a/sql/field.cc
+++ b/sql/field.cc
@@ -3281,10 +3281,11 @@ Field *Field_decimal::make_new_field(MEM_ROOT *root, TABLE *new_table,
 ** Field_new_decimal
 ****************************************************************************/
-static uint get_decimal_precision(uint len, uint8 dec, bool unsigned_val)
+static decimal_digits_t get_decimal_precision(uint len, decimal_digits_t dec,
+                                              bool unsigned_val)
 {
   uint precision= my_decimal_length_to_precision(len, dec, unsigned_val);
-  return MY_MIN(precision, DECIMAL_MAX_PRECISION);
+  return (decimal_digits_t) MY_MIN(precision, DECIMAL_MAX_PRECISION);
No, this is wrong (or, rather, inconsistent with your other changes). "precision" for DECIMAL is the total number of digits, not a number of digits after the dot. Judging by your edits in decimal.h you don't want that to be decimal_digits_t.
}
Field_new_decimal::Field_new_decimal(uchar *ptr_arg,

@@ -10390,7 +10391,8 @@ void Column_definition::create_length_to_internal_length_bit()

 void Column_definition::create_length_to_internal_length_newdecimal()
 {
   DBUG_ASSERT(length < UINT_MAX32);
-  uint prec= get_decimal_precision((uint)length, decimals, flags & UNSIGNED_FLAG);
+  decimal_digit_t prec= get_decimal_precision((uint)length, decimals,
+                                              flags & UNSIGNED_FLAG);
same as above, you should decide whether you want decimal precision to be
decimal_digit_t or not. Currently this change contradicts your decimal.h
changes.
   pack_length= my_decimal_get_binary_size(prec, decimals);
 }
diff --git a/sql/field.h b/sql/field.h
index 4a4f7cee2a5..5b6a69d0075 100644
--- a/sql/field.h
+++ b/sql/field.h
@@ -2331,7 +2332,7 @@ class Field_decimal final :public Field_real {
 class Field_new_decimal final :public Field_num {
 public:
   /* The maximum number of decimal digits can be stored */
-  uint precision;
+  decimal_digits_t precision;
oops. And here again you use decimal_digits_t for decimal precision.
   uint bin_size;
   /*
     Constructors take max_length of the field as a parameter - not the

diff --git a/sql/item.h b/sql/item.h
index 1087c08869e..6753474f2dd 100644
--- a/sql/item.h
+++ b/sql/item.h
@@ -1649,14 +1649,14 @@ class Item: public Value_source,
     return type_handler()->Item_decimal_precision(this);
   }
   /* Returns the number of integer part digits only */
-  inline int decimal_int_part() const
-  { return my_decimal_int_part(decimal_precision(), decimals); }
+  inline decimal_digits_t decimal_int_part() const
+  { return (decimal_digits_t) my_decimal_int_part(decimal_precision(), decimals); }
and here you use decimal_digits_t for precision-scale.
   /*
     Returns the number of fractional digits only.
     NOT_FIXED_DEC is replaced to the maximum possible number of fractional
     digits, taking into account the data type.
   */
-  uint decimal_scale() const
+  decimal_digits_t decimal_scale() const
for example, this is consistent with your other changes. scale is decimal_digits_t, it's clear.
  { return type_handler()->Item_decimal_scale(this); }

@@ -4853,7 +4853,7 @@ class Item_temporal_literal :public Item_literal
     Item_literal(thd)
   {
     collation= DTCollation_numeric();
-    decimals= dec_arg;
+    decimals= (decimal_digits_t) dec_arg;
why do you cast? I'd expect dec_arg to be decimal_digits_t already
}
  int save_in_field(Field *field, bool no_conversions) override

diff --git a/sql/item_func.cc b/sql/item_func.cc
index a0ef4020aae..db51639a5af 100644
--- a/sql/item_func.cc
+++ b/sql/item_func.cc
@@ -2710,7 +2710,7 @@ my_decimal *Item_func_round::decimal_op(my_decimal *decimal_value)
       dec= INT_MIN;
   if (!(null_value= (value.is_null() || args[1]->null_value ||
-                      value.round_to(decimal_value, (uint) dec,
+                      value.round_to(decimal_value, (int) dec,
                                      truncate ? TRUNCATE : HALF_UP) > 1)))
I don't think you need to cast decimal_digits_t (aka uint16) to int explicitly. A compiler can handle it, it's not lossy.
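A minimal illustration of why the cast would be redundant if dec really were
decimal_digits_t (my sketch, not server code):

  unsigned short dec= 30;   /* decimal_digits_t is uint16 in the patch        */
  int as_int= dec;          /* integer promotion: every uint16 value fits in
                               int, so the conversion is implicit and lossless */
  (void) as_int;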
     return decimal_value;
   return 0;

diff --git a/sql/item_func.h b/sql/item_func.h
index e774d9c53bd..ae94698ff96 100644
--- a/sql/item_func.h
+++ b/sql/item_func.h
@@ -1383,10 +1383,10 @@ class Item_decimal_typecast :public Item_func
 {
   my_decimal decimal_value;
 public:
-  Item_decimal_typecast(THD *thd, Item *a, uint len, uint dec)
+  Item_decimal_typecast(THD *thd, Item *a, uint len, decimal_digits_t dec)
    :Item_func(thd, a)
  {
-    decimals= (uint8) dec;
+    decimals= (decimal_digits_t) dec;
not needed, dec is decimal_digits_t already
    collation= DTCollation_numeric();
    fix_char_length(my_decimal_precision_to_length_no_truncation(len, dec,
                                                                  unsigned_flag));

diff --git a/sql/sql_type.cc b/sql/sql_type.cc
index 5a31b39c7b6..1c5e03f34bd 100644
--- a/sql/sql_type.cc
+++ b/sql/sql_type.cc
@@ -1218,9 +1218,10 @@ uint32 Type_numeric_attributes::find_max_octet_length(Item **item, uint nitems)
 }
-int Type_numeric_attributes::find_max_decimal_int_part(Item **item, uint nitems)
+decimal_digits_t Type_numeric_attributes::
+find_max_decimal_int_part(Item **item, uint nitems)
 {
-  int max_int_part= 0;
+  decimal_digits_t max_int_part= 0;
again, "int parts" = precision-scale. it's not clear from your patch whether it should be decimal_digits_t
   for (uint i=0 ; i < nitems ; i++)
     set_if_bigger(max_int_part, item[i]->decimal_int_part());
   return max_int_part;

@@ -1237,11 +1238,12 @@ Type_numeric_attributes::aggregate_numeric_attributes_decimal(Item **item,
                                                               uint nitems,
                                                               bool unsigned_arg)
 {
-  int max_int_part= find_max_decimal_int_part(item, nitems);
+  decimal_digits_t max_int_part= find_max_decimal_int_part(item, nitems);
same here
   decimals= find_max_decimals(item, nitems);
-  int precision= MY_MIN(max_int_part + decimals, DECIMAL_MAX_PRECISION);
+  decimal_digits_t precision= (decimal_digits_t)
+    MY_MIN(max_int_part + decimals, DECIMAL_MAX_PRECISION);
and here
   max_length= my_decimal_precision_to_length_no_truncation(precision,
-                                                            (uint8) decimals,
+                                                            decimals,
                                                             unsigned_flag);
 }
@@ -6955,20 +6957,20 @@ const Vers_type_handler* Type_handler_blob_common::vers() const
/***************************************************************************/
-uint Type_handler::Item_time_precision(THD *thd, Item *item) const
+decimal_digits_t Type_handler::Item_time_precision(THD *thd, Item *item) const
But here it's correct. "precision" in the temporal context is the same as "scale" in the decimal context and "decimals" in the Item/Field context. So, decimal_digits_t, all right.
{ return MY_MIN(item->decimals, TIME_SECOND_PART_DIGITS); }
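To make the terminology concrete, an illustrative note of mine (not part of the
patch):

  /* For a column declared TIME(3), e.g. the value '10:15:30.123', the
     temporal "precision" is 3: the number of fractional-second digits.
     That is exactly what Item/Field code calls "decimals" and decimal.h
     calls "scale".  TIME_SECOND_PART_DIGITS (6) is the upper bound, so
     MY_MIN() simply clamps item->decimals to it.                        */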
-uint Type_handler::Item_datetime_precision(THD *thd, Item *item) const
+decimal_digits_t Type_handler::Item_datetime_precision(THD *thd, Item *item) const
 {
   return MY_MIN(item->decimals, TIME_SECOND_PART_DIGITS);
 }
-uint Type_handler_string_result::Item_temporal_precision(THD *thd, Item *item,
-                                                          bool is_time) const
+decimal_digits_t Type_handler_string_result::
+Item_temporal_precision(THD *thd, Item *item, bool is_time) const
 {
   StringBuffer<64> buf;
   String *tmp;

@@ -7020,7 +7022,7 @@ uint Type_handler_temporal_result::
/***************************************************************************/
-uint Type_handler_string_result::Item_decimal_precision(const Item *item) const
+decimal_digits_t Type_handler_string_result::Item_decimal_precision(const Item *item) const
but here it's "decimal precision", so wrong/inconsistent again.
{
  uint res= item->max_char_length();
  /*
Regards,
Sergei
VP of MariaDB Server Engineering
and security@mariadb.org
Hi!
Added typedef decimal_digits_t (uint16) for number of decimals

For fields and Items uint8 should be good enough. After discussions with
Alexander Barkov we chose uint16 (for now) as some format functions may accept
+256 digits.

The reason for this patch was to make the storage of decimal digits similar.
Before this patch decimals was stored/used as uint8, int and uint.

Changed most decimal variables and functions to use the new typedef.
diff --git a/include/decimal.h b/include/decimal.h
index cab18f99348..f713409ede1 100644
--- a/include/decimal.h
+++ b/include/decimal.h
@@ -53,10 +53,11 @@ int decimal2double(const decimal_t *from, double *to);
 int double2decimal(double from, decimal_t *to);
 int decimal_actual_fraction(const decimal_t *from);
 int decimal2bin(const decimal_t *from, uchar *to, int precision, int scale);
-int bin2decimal(const uchar *from, decimal_t *to, int precision, int scale);
+int bin2decimal(const uchar *from, decimal_t *to, int precision,
+                decimal_digits_t scale);
-int decimal_size(int precision, int scale);
-int decimal_bin_size(int precision, int scale);
+int decimal_size(int precision, decimal_digits_t scale);
+int decimal_bin_size(int precision, decimal_digits_t scale);
this is *very* confusing.
"decimal" in the context of an Item mean "number of digits after a dot" "decimal" in decimal.h means the whole number, and after a dot is "scale"
The intention with the patch was to have (over time) one type for all lengths
associated with length-of-numbers. This includes precision, scale, digits
before '.', digits after '.' etc. I started with 'digits after dot', and then
fixed other related things. I have now, by your request, created a full patch
that fixes all of the above.
<cut>
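As a rough sketch of that direction (my illustration of the intended end state,
not the committed code), every digit-count parameter would eventually use the
same typedef:

  typedef uint16 decimal_digits_t;

  /* Hypothetical future signatures, once precision is also converted: */
  int bin2decimal(const uchar *from, decimal_t *to,
                  decimal_digits_t precision, decimal_digits_t scale);
  int decimal_bin_size(decimal_digits_t precision, decimal_digits_t scale);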
-static uint get_decimal_precision(uint len, uint8 dec, bool unsigned_val)
+static decimal_digits_t get_decimal_precision(uint len, decimal_digits_t dec,
+                                              bool unsigned_val)
 {
   uint precision= my_decimal_length_to_precision(len, dec, unsigned_val);
-  return MY_MIN(precision, DECIMAL_MAX_PRECISION);
+  return (decimal_digits_t) MY_MIN(precision, DECIMAL_MAX_PRECISION);
No, this is wrong (or, rather, inconsistent with your other changes). "precision" for DECIMAL is the total number of digits, not a number of digits after the dot. Judging by your edits in decimal.h you don't want that to be decimal_digits_t.
It's not inconsistent or a contradiction, it is an extension ;) I agree that I should have mentioned in the commit comment that the new type is also used for precision in some places. Anyway, the new patch should fix our concerns. <cut>
  { return type_handler()->Item_decimal_scale(this); }

@@ -4853,7 +4853,7 @@ class Item_temporal_literal :public Item_literal
     Item_literal(thd)
   {
     collation= DTCollation_numeric();
-    decimals= dec_arg;
+    decimals= (decimal_digits_t) dec_arg;
why do you cast? I'd expect dec_arg to be decimal_digits_t already
  Item_temporal_literal(THD *thd, uint dec_arg):
    Item_literal(thd)
  {
    collation= DTCollation_numeric();
    decimals= (decimal_digits_t) dec_arg;
  }

It was not. I will fix that. I only add a cast when the compiler requires or
asks for it.
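For reference, a minimal sketch of the kind of warning involved (my example,
not server code):

  typedef unsigned short decimal_digits_t;   /* stands in for the server's uint16 */

  void set_decimals(unsigned int dec_arg)
  {
    decimal_digits_t decimals;
    decimals= dec_arg;                     /* uint -> uint16 narrowing: some builds warn (-Wconversion) */
    decimals= (decimal_digits_t) dec_arg;  /* the explicit cast documents the narrowing and silences it */
    (void) decimals;
  }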
diff --git a/sql/item_func.cc b/sql/item_func.cc
index a0ef4020aae..db51639a5af 100644
--- a/sql/item_func.cc
+++ b/sql/item_func.cc
@@ -2710,7 +2710,7 @@ my_decimal *Item_func_round::decimal_op(my_decimal *decimal_value)
       dec= INT_MIN;
   if (!(null_value= (value.is_null() || args[1]->null_value ||
-                      value.round_to(decimal_value, (uint) dec,
+                      value.round_to(decimal_value, (int) dec,
                                      truncate ? TRUNCATE : HALF_UP) > 1)))
I don't think you need to cast decimal_digits_t (aka uint16) to int explicitly. A compiler can handle it, it's not lossy.
Here dec is calculated as a longlong, so it needs a cast. (We got this from a
val_int() call). Note that in this particular place, dec can be negative and
that is ok for value.round_to(). The original code was wrong. It should never
have been cast to an int.
<cut>
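An illustration of why the sign matters here (standard ROUND() semantics, my
example):

  /* ROUND() accepts a negative number of decimals, which rounds to the
     left of the decimal point:
       ROUND(1234.567,  2) -> 1234.57
       ROUND(1234.567, -2) -> 1200
     If dec were cast to an unsigned type, a value such as -2 would wrap
     to a huge positive count of decimals instead.                      */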
--- a/sql/item_func.h
+++ b/sql/item_func.h
@@ -1383,10 +1383,10 @@ class Item_decimal_typecast :public Item_func
 {
   my_decimal decimal_value;
 public:
-  Item_decimal_typecast(THD *thd, Item *a, uint len, uint dec)
+  Item_decimal_typecast(THD *thd, Item *a, uint len, decimal_digits_t dec)
    :Item_func(thd, a)
  {
-    decimals= (uint8) dec;
+    decimals= (decimal_digits_t) dec;
not needed, dec is decimal_digits_t already
Thanks. I probably just replaced one cast, for which I got a warning, with
another.
    collation= DTCollation_numeric();
    fix_char_length(my_decimal_precision_to_length_no_truncation(len, dec,
                                                                  unsigned_flag));

diff --git a/sql/sql_type.cc b/sql/sql_type.cc
index 5a31b39c7b6..1c5e03f34bd 100644
--- a/sql/sql_type.cc
+++ b/sql/sql_type.cc
@@ -1218,9 +1218,10 @@ uint32 Type_numeric_attributes::find_max_octet_length(Item **item, uint nitems)
 }
-int Type_numeric_attributes::find_max_decimal_int_part(Item **item, uint nitems)
+decimal_digits_t Type_numeric_attributes::
+find_max_decimal_int_part(Item **item, uint nitems)
 {
-  int max_int_part= 0;
+  decimal_digits_t max_int_part= 0;
again, "int parts" = precision-scale. it's not clear from your patch whether it should be decimal_digits_t
I assume it will be clearer in the new patch.

Here is the new patch, on top of the old one.

Regards,
Monty