Re: [Maria-developers] Sachin weekly report
Hello Sergei! Sir, I am stuck at one problem. Consider this case:

  create table t1 (abc blob unique, xyz int unique);
  /* insert data */
  update t1 set abc = 33 where xyz = 23;

This does not work because xyz = 23 requires an index scan, so at the time of ha_update_row() we have inited != NONE, and if inited != NONE then ha_index_read_idx_map() fails. So how can I update a row in the case of a blob unique column? I think the only option left is a sequential scan, but that would defeat the purpose of blob unique.

Regards,
Sachin

On Thu, Jul 28, 2016 at 1:10 PM, Sergei Golubchik <serg@mariadb.org> wrote:
Hi, Sachin!
On Jul 27, Sergei Golubchik wrote:
Please review branch https://github.com/SachinSetiya/server/tree/unique_index_where
Sure. Will do.
Sorry. Correction :(
I'm on vacation for two weeks in August, the 9th to the 22nd. And I have an awful lot to do for 10.2. Naturally, I won't work on 10.2 while on vacation, but I will take my laptop with me, and I will answer your questions and review your code. Which also means that I'd better use every minute before the vacation to work on 10.2, you see...
So, I'll review your patch in a couple of weeks. Until then I can only answer questions, sorry :( And while I'm on vacation, mind you, I will only rarely be on IRC, so if I'm not there, do not wait for me; send an email, please. I will reply to your emails and I will do my reviews.
By the way, did you do a full mysql-test run? Like this:
./mtr --force --parallel 5
In fact, what I do is:
  cmake -DCMAKE_BUILD_TYPE=Debug -DWITH_EMBEDDED_SERVER=ON
  make -j5
  cd mysql-test
  script -c './mtr --force --parallel 5; ./mtr --force --parallel 5 --ps; ./mtr --force --parallel 3 --embed'
and that last command takes 4-5 hours on my laptop. I don't do that very often :) but it's thorough. You can run it overnight. Because of 'script', the complete log will be in the typescript file, so there's no need to monitor anything manually.
Regards,
Sergei
Chief Architect MariaDB
and security@mariadb.org
Hi, Sachin! On Aug 02, Sachin Setia wrote:
Hello Sergei!
Sir, I am stuck at one problem. Consider this case:

  create table t1 (abc blob unique, xyz int unique);
  /* insert data */
  update t1 set abc = 33 where xyz = 23;

This does not work because xyz = 23 requires an index scan, so at the time of ha_update_row() we have inited != NONE, and if inited != NONE then ha_index_read_idx_map() fails. So how can I update a row in the case of a blob unique column? I think the only option left is a sequential scan, but that would defeat the purpose of blob unique.
inited != NONE, I see. Right, when there's an ongoing table scan (with rnd_next) or index scan (with index_next), there's a kind of cursor inside the storage engine, and ha_index_read_idx_map() will disrupt it. So you cannot do ha_index_read_idx_map() during a table or index scan. This is a problem only for updates, not for inserts.

I can see two solutions for that.

A simpler and slower solution is not to allow that at all. Consider this test case (there's an index on idx):

  UPDATE t1 SET idx=idx+10 WHERE idx > 5;

In this case MariaDB cannot scan the index idx and update every row that it finds, because after idx=idx+10 the row moves further ahead in the same index, and the index scan might see the same row again when it does index_next(), so it would keep adding 10 to it indefinitely :) For cases like this (when the index that is used for searching is also updated), UPDATE has a special mode: it first performs the index search and collects all row ids, then uses this list of row ids to do the update. You can enable this mode when your unique columns are updated (take care not to enable it if unique hash columns are present, but not updated).

Another, maybe more complex but faster, solution is to use the handler::clone method. It is used in SELECTs to do two index scans in parallel - just what we need. The clone could be stored in the TABLE, like this:

  // early in mysql_update
  if (hash indexes are updated)
  {
    // prepare handler clone
    if (table->xxxxx == NULL)
      table->xxxxx= table->file->clone(table->s->normalized_path.str,
                                       table->mem_root);
  }

Later, you use table->xxxxx->ha_index_read_idx_map() instead of table->file->ha_index_read_idx_map(). But both of these handlers use the same TABLE and the same TABLE::record[], so you still need to take care not to overwrite TABLE::record[], even if you use handler::clone.

I would try handler::clone() first, it doesn't look that complex, actually. And if that doesn't work, use the first approach.
Btw, don't forget to ha_close the clone when the TABLE closes its main handler.

Regards,
Sergei
Chief Architect MariaDB
and security@mariadb.org
participants (2)
- Sachin Setia
- Sergei Golubchik