Hi!
Added a typedef decimal_digits_t (uint16) for the number of decimals. For fields and Items uint8 would be good enough, but after discussions with Alexander Barkov we chose uint16 (for now), as some format functions may accept more than 256 digits.
The reason for this patch was to make the storage of decimal digits similar everywhere. Before this patch, decimals were stored/used as uint8, int, and uint.
Changed most decimal variables and functions to use the new typedef.
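For reference, a minimal sketch of the idea (the exact header the typedef ends up in is not shown in this thread):

  /* One type for "number of decimal digits", replacing the mix of
     uint8, int and uint used before this patch. uint16 leaves room
     for format functions that may accept more than 256 digits. */
  typedef uint16 decimal_digits_t;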
diff --git a/include/decimal.h b/include/decimal.h
index cab18f99348..f713409ede1 100644
--- a/include/decimal.h
+++ b/include/decimal.h
@@ -53,10 +53,11 @@ int decimal2double(const decimal_t *from, double *to);
 int double2decimal(double from, decimal_t *to);
 int decimal_actual_fraction(const decimal_t *from);
 int decimal2bin(const decimal_t *from, uchar *to, int precision, int scale);
-int bin2decimal(const uchar *from, decimal_t *to, int precision, int scale);
+int bin2decimal(const uchar *from, decimal_t *to, int precision,
+                decimal_digits_t scale);
-int decimal_size(int precision, int scale);
-int decimal_bin_size(int precision, int scale);
+int decimal_size(int precision, decimal_digits_t scale);
+int decimal_bin_size(int precision, decimal_digits_t scale);
this is *very* confusing.
"decimal" in the context of an Item mean "number of digits after a dot" "decimal" in decimal.h means the whole number, and after a dot is "scale"
The intention with the patch was to have (over time) one type for all lengths associated with lengths-of-numbers. This includes precision, scale, digits before '.', digits after '.', etc. I started with 'digits after the dot' and then fixed other related things. I have now, by your request, created a full patch that fixes all of the above. <cut>
-static uint get_decimal_precision(uint len, uint8 dec, bool unsigned_val)
+static decimal_digits_t get_decimal_precision(uint len, decimal_digits_t dec,
+                                              bool unsigned_val)
 {
   uint precision= my_decimal_length_to_precision(len, dec, unsigned_val);
-  return MY_MIN(precision, DECIMAL_MAX_PRECISION);
+  return (decimal_digits_t) MY_MIN(precision, DECIMAL_MAX_PRECISION);
No, this is wrong (or, rather, inconsistent with your other changes). "precision" for DECIMAL is the total number of digits, not a number of digits after the dot. Judging by your edits in decimal.h you don't want that to be decimal_digits_t.
It's not inconsistent or a contradiction, it is an extension ;) I agree that I should have mentioned in the commit comment that the new type is also used for precision in some places. Anyway, the new patch should fix our concerns. <cut>
  { return type_handler()->Item_decimal_scale(this); }
@@ -4853,7 +4853,7 @@ class Item_temporal_literal :public Item_literal
     Item_literal(thd)
   {
     collation= DTCollation_numeric();
-    decimals= dec_arg;
+    decimals= (decimal_digits_t) dec_arg;
why do you cast? I'd expect dec_arg to be decimal_digits_t already
  Item_temporal_literal(THD *thd, uint dec_arg):
    Item_literal(thd)
  {
    collation= DTCollation_numeric();
    decimals= (decimal_digits_t) dec_arg;
  }

It was not. I will fix that. I only add a cast when the compiler requires or asks for it.
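Presumably the fix is just to change the parameter type, something like this sketch (not the actual follow-up patch):

  Item_temporal_literal(THD *thd, decimal_digits_t dec_arg):
    Item_literal(thd)
  {
    collation= DTCollation_numeric();
    decimals= dec_arg;                      /* no cast needed any more */
  }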
diff --git a/sql/item_func.cc b/sql/item_func.cc
index a0ef4020aae..db51639a5af 100644
--- a/sql/item_func.cc
+++ b/sql/item_func.cc
@@ -2710,7 +2710,7 @@ my_decimal *Item_func_round::decimal_op(my_decimal *decimal_value)
       dec= INT_MIN;

   if (!(null_value= (value.is_null() || args[1]->null_value ||
-                     value.round_to(decimal_value, (uint) dec,
+                     value.round_to(decimal_value, (int) dec,
                                     truncate ? TRUNCATE : HALF_UP) > 1)))
I don't think you need to cast decimal_digits_t (aka uint16) to int explicitly. A compiler can handle it; it's not lossy.
Here dec is calculated as a longlong, so it needs a cast. (We got this from a val_int() call.) Note that in this particular place dec can be negative, and that is ok for value.round_to(). The original code was wrong: it should never have been cast to a uint. <cut>
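(A simplified sketch of the point, condensed from Item_func_round::decimal_op above:)

  longlong dec= args[1]->val_int();    /* e.g. ROUND(x, -2) gives dec= -2 */
  /* (uint) dec would turn -2 into a huge positive value;
     (int) dec keeps the sign, and round_to() accepts a negative scale */
  value.round_to(decimal_value, (int) dec, truncate ? TRUNCATE : HALF_UP);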
--- a/sql/item_func.h
+++ b/sql/item_func.h
@@ -1383,10 +1383,10 @@ class Item_decimal_typecast :public Item_func
 {
   my_decimal decimal_value;
 public:
-  Item_decimal_typecast(THD *thd, Item *a, uint len, uint dec)
+  Item_decimal_typecast(THD *thd, Item *a, uint len, decimal_digits_t dec)
    :Item_func(thd, a)
   {
-    decimals= (uint8) dec;
+    decimals= (decimal_digits_t) dec;
not needed, dec is decimal_digits_t already
thanks. I probably just replaced one cast, for which I got a warning, with another.
    collation= DTCollation_numeric();
    fix_char_length(my_decimal_precision_to_length_no_truncation(len, dec,
                                                                 unsigned_flag));

diff --git a/sql/sql_type.cc b/sql/sql_type.cc
index 5a31b39c7b6..1c5e03f34bd 100644
--- a/sql/sql_type.cc
+++ b/sql/sql_type.cc
@@ -1218,9 +1218,10 @@ uint32 Type_numeric_attributes::find_max_octet_length(Item **item, uint nitems)
 }

-int Type_numeric_attributes::find_max_decimal_int_part(Item **item, uint nitems)
+decimal_digits_t Type_numeric_attributes::
+find_max_decimal_int_part(Item **item, uint nitems)
 {
-  int max_int_part= 0;
+  decimal_digits_t max_int_part= 0;
again, "int parts" = precision-scale. it's not clear from your patch whether it should be decimal_digits_t
I assume it will be clearer in the new patch. Here is the new patch, on top of the old one.

Regards,
Monty