GSoC (week 4)

Hello everyone,

After working on the create table operation, the next thing I had to work on was insert operations. So I explored some of the functions, like row_ins_scan_index_for_duplicates and btr_pcur_get_rec, to get a clear understanding of how to implement duplicate search on a hash index.
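To get the flow straight in my head, here is a rough sketch of how such a duplicate scan could look, loosely modelled on the shape of row_ins_scan_index_for_duplicates. The helpers hash_field_matches() and record_matches_full_key() are hypothetical placeholders for illustration, not actual server functions:

  /* Rough sketch of duplicate search on the hash index; not the
     actual implementation.  Since two different keys can share one
     hash value, every record with a matching hash value has to be
     checked against the full key before reporting a duplicate. */
  static dberr_t hash_index_scan_for_duplicates(
          dict_index_t*   index,  /* hash-based secondary index */
          const dtuple_t* entry,  /* (hash value, row id) tuple */
          mtr_t*          mtr)
  {
          btr_pcur_t      pcur;
          dberr_t         err = DB_SUCCESS;

          /* Position a persistent cursor on the first record whose
          hash value is >= the one being inserted. */
          btr_pcur_open(index, entry, PAGE_CUR_GE, BTR_SEARCH_LEAF,
                        &pcur, mtr);

          for (;;) {
                  const rec_t* rec = btr_pcur_get_rec(&pcur);

                  if (page_rec_is_supremum(rec)) {
                          break;  /* simplified: end of scan */
                  }

                  /* Stop once the stored hash value differs.
                  hash_field_matches() is hypothetical. */
                  if (!hash_field_matches(rec, entry)) {
                          break;
                  }

                  /* Same hash value: only a comparison of the real key
                  columns can tell a duplicate from a collision.
                  record_matches_full_key() is hypothetical. */
                  if (record_matches_full_key(index, rec, entry)) {
                          err = DB_DUPLICATE_KEY;
                          break;
                  }

                  if (!btr_pcur_move_to_next(&pcur, mtr)) {
                          break;
                  }
          }

          btr_pcur_close(&pcur);
          return(err);
  }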
There was a problem in the hash function that I wrote: it would calculate the same hash value for two different keys if the prefix length of the blob key part was zero. After stepping through it in the debugger, it now seems to be working. I still have to modify it for data types like VARCHAR.
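To illustrate the bug and the fix, here is a minimal stand-alone sketch using FNV-1a (not the actual hash function in my patch). The point is that a prefix length of zero has to mean "hash the whole value", not "hash zero bytes"; otherwise every such key collapses to one hash value:

  #include <cstddef>
  #include <cstdint>

  /* Minimal stand-alone illustration, not the real server code. */
  static uint64_t fnv1a(const unsigned char* data, size_t len)
  {
          uint64_t h = 14695981039346656037ULL;  /* FNV offset basis */
          for (size_t i = 0; i < len; i++) {
                  h ^= data[i];
                  h *= 1099511628211ULL;          /* FNV prime */
          }
          return h;
  }

  static uint64_t hash_key_part(const unsigned char* value,
                                size_t value_len, size_t prefix_len)
  {
          /* The buggy version hashed min(prefix_len, value_len) bytes,
          so prefix_len == 0 meant hashing zero bytes, and every blob
          key part got the same hash value.  The fix is to treat
          zero as "no prefix", i.e. hash the full value. */
          size_t len;
          if (prefix_len == 0) {
                  len = value_len;
          } else {
                  len = (value_len < prefix_len) ? value_len : prefix_len;
          }
          return fnv1a(value, len);
  }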
I have added test cases for insert operations in MyISAM.
In MyISAM, I found one problem in the update operation: when updating a row, if the key conflicts, the server crashes because a pointer becomes invalid in compare_record. I haven't fixed this issue yet.

I also modified some functions in dict0load.cc to adjust some members of dict_index_t for the new index type. The main problem is that an index entry for a hash-based index contains only two fields (the hash value and the row id), while dict_index_t contains the hash field plus the user-defined fields that are used to calculate the hash value. Some operations, like ALTER TABLE (e.g. renaming a column), need access to all the fields, while functions like rec_get_offsets and row_build_index_entry_low need access only to the hash field and the row id. I am still working on finding an efficient solution to this problem.
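One idea I am considering, sketched below with invented member names (this is not the actual dict_index_t layout), is to keep the full field array but also record how many fields are physically stored in an index entry, so that each caller can pick the view it needs:

  /* Hypothetical sketch of the two views of the index fields;
     the member names are invented for illustration. */
  struct hash_index_fields {
          dict_field_t*   fields;    /* all fields: hash value, row id,
                                     and the user-defined columns the
                                     hash is calculated from */
          unsigned        n_fields;  /* total count: the view needed by
                                     ALTER TABLE, e.g. rename column */
          unsigned        n_stored;  /* always 2 (hash value + row id):
                                     the view needed by rec_get_offsets
                                     and row_build_index_entry_low */
  };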

On 16 June 2016 at 23:29, Sergei Golubchik <vuvova@gmail.com> wrote:
Hi, Shubham!

What I wanted to say on IRC was:

here's what the comment of cmp_dtuple_rec_with_match_low() says:

  ...............   If rec has an externally stored field we do not
  compare it but return with value 0 if such a comparison should be
  made.

Note that blobs are externally stored fields in InnoDB, so, I think,
this means that you cannot use cmp_dtuple_rec() to compare blobs.

Regards,
Sergei
Chief Architect MariaDB
and security@mariadb.org