Hi, Sergei! On Dec 20, Sergei Petrunia wrote:
On Fri, Dec 14, 2012 at 12:26:24PM +0100, Sergei Golubchik wrote:
=== modified file 'sql/sql_join_cache.cc'
--- sql/sql_join_cache.cc  2012-03-24 17:21:22 +0000
+++ sql/sql_join_cache.cc  2012-11-12 15:11:23 +0000
@@ -4543,7 +4546,7 @@ bool JOIN_CACHE_BKAH::prepare_look_for_m
 {
   last_matching_rec_ref_ptr= next_matching_rec_ref_ptr= 0;
   if (no_association &&
-      (curr_matching_chain= get_matching_chain_by_join_key()))
+      !(curr_matching_chain= get_matching_chain_by_join_key())) //psergey: added '!'
Is that something that should be pushed into the main branch, or is it a temporary hack for Cassandra that should be reverted?
It is not a hack. It is a fix for a typo bug in the BKA code. The bug can only be observed when BKA is used with a "no-association" storage engine, like Cassandra SE or Falcon. The 'psergey:' comment should probably be removed, though.
Yes, please remove the comment.
+ColumnDataConverter *map_field_to_validator(Field *field, const char *validator_name)
+{
+  ColumnDataConverter *res= NULL;
+
+  switch(field->type()) {
+  case MYSQL_TYPE_TINY:
+    if (!strcmp(validator_name, validator_boolean))
Why do you strcmp here, while you only check selected characters in get_cassandra_type()? You could have called get_cassandra_type() for validator_name instead and compared enum values instead of strings.
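Just to illustrate what I mean (a rough sketch only; the CASSANDRA_TYPE enum values and the exact get_cassandra_type() signature below are my assumptions, not code from the patch):

    /* Sketch: classify the validator name once, then compare enum values */
    CASSANDRA_TYPE vtype= get_cassandra_type(validator_name);

    switch (field->type()) {
    case MYSQL_TYPE_TINY:
      if (vtype == CT_BOOLEAN)  /* instead of !strcmp(validator_name, validator_boolean) */
        res= new TinyintDataConverter; /* or whatever converter the current code creates here */
      break;
    /* ... other cases would follow the same pattern ... */
    }

That would keep all knowledge about validator names in one place and turn the per-column comparisons into cheap integer compares.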
This function (written by me) was here before get_cassandra_type() was written by Sanja.
I don't particularly like the idea of checking selected characters: what if the next version of Cassandra adds a new datatype? The strcmp-based code in this function will simply refuse to work with it and produce a meaningful error message.
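That is, with the strcmp approach an unknown validator just falls through, roughly like this (a sketch only; the caller-side error call below is illustrative, not the exact code in the patch):

    /* Sketch: none of the strcmp() branches matched, so res is still NULL */
    return res;

    /* ... and the caller can then refuse the table with a clear message: */
    if (!(converter= map_field_to_validator(field, validator_name)))
    {
      my_printf_error(ER_INTERNAL_ERROR,
                      "Failed to map column %s to datatype %s", MYF(0),
                      field->field_name, validator_name);
      return true;
    }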
Agreed. I actually commented on it in my review, but then removed the comment, thinking that if it's a bottleneck and you want it as fast as possible, then let's keep it ugly and fast.
Apparently, map_field_to_validator() is not the bottleneck. If connection setup is an issue, we need a connection pool. Why optimize map_field_to_validator to the point of unreadability?
No point whatsoever :)
+void store_key_image_to_rec(Field *field, uchar *ptr, uint len);
+
+int ha_cassandra::index_read_map(uchar *buf, const uchar *key,
+                                 key_part_map keypart_map,
+                                 enum ha_rkey_function find_flag)
+{
+  int rc= 0;
+  DBUG_ENTER("ha_cassandra::index_read_map");
+
+  if (find_flag != HA_READ_KEY_EXACT)
+    DBUG_RETURN(HA_ERR_WRONG_COMMAND);
Did you verify that the server expects HA_ERR_WRONG_COMMAND from this method and handles it correctly? Or, perhaps, find_flag is always HA_READ_KEY_EXACT here because of your index_flags()?
The latter is true. I don't expect that this function is ever invoked with find_flag!=HA_READ_KEY_EXACT.
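If it helps, documenting that would be a one-line assertion at the top of the function (a sketch based on the hunk quoted above, keeping the existing fallback for release builds):

    int rc= 0;
    DBUG_ENTER("ha_cassandra::index_read_map");

    /* index_flags() only allows exact key lookups, so this must always hold */
    DBUG_ASSERT(find_flag == HA_READ_KEY_EXACT);

    if (find_flag != HA_READ_KEY_EXACT)
      DBUG_RETURN(HA_ERR_WRONG_COMMAND);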
Please add a DBUG_ASSERT to document that.

Regards,
Sergei