
Re: Review of MDEV-19749 - MDL scalability regression after backup locks
by Michael Widenius 27 May '25
Hi!
On Sun, May 25, 2025 at 5:10 PM Sergey Vojtovich <svoj@mariadb.org> wrote:
<cut>
>> If possible, I suggest we use --metadata-locks-instances=1 to disable
>> the fast lanes. (Assuming there is no speedup or other gain by using
>> 1 instance of fast lanes). If we do this, we should allow values
>> 1-256 for the variable.
>> I think the change makes sense as what the user defines is lock instances,
>> not 'fast lanes'. Having 0 lock instances can imply to users that there
>> are no locks, which is not really the case.
>
> This is confusing indeed. I thought it'd be good to keep its name in line
> with other variables like table_open_cache_instances and
> metadata_locks_hash_instances. However fast lanes are slightly different:
> metadata_locks_instances defines how many fast lane instances have to
> be created in addition to conventional instance. Whereas the other ones
> are truly multi-instance, all instances are equal.
>
> In theory, the value of 1 can be useful. Acquiring lock via single fast lane can
> be 5-10% faster than via MDL_lock::m_granted, because it doesn't have to
> maintain bitmap and prlock complexities. Value of 0 can be used to reduce
> memory footprint and disable this feature.
>
> I'm unsure what would be right. Rename it to something like
> metadata_locks_fast_lane_instances?
>
> Kept as is until there's consensus.
I would assume that when one has a performance issue with MDL locks one would
always use a value higher than 1. The memory one saves by using 1 instead of 8
segments is probably negligible compared to the speedup.
An end user is usually not that interested in how things are done internally.
Saying that we can split the lock instances into 1 or more segments is the
easiest possible explanation and should be good enough for most users. In this
case using --metadata-locks-instances=1 is easy for the user to understand (as
it is clear that we will need at least one lock to protect the data
structures). The kind of lock used (fast lane or something else) should
preferably be hidden.
Anyway, do what you think is best for the end user. We will know something is
wrong if a lot of users start using --metadata-locks-instances=1 just to
ensure that the structures are protected.
Note that using zero in other contexts means things are insecure and can
cause crashes or lost data.
Compare to --sync-binlog=0. This does not mean that some other code will
ensure that the server is crash-safe; it means that the user makes a choice to
accept possible crashes to gain performance.
Using --metadata-locks-instances=1 for MDL locks makes it clear that there is
always protection and that using 1 cannot lead to crashes.
>> Some observations while looking at the code:
>> - Most of the locks related to MDL_BACKUP_### are used with a single
>> MDL. Should we create a fast path for these where we do not even
>> have to call fast_lane() or fast_lane_exception() as we already know
>> which type of lock we are using?
>
> It'd mean we'll have to pass a flag from
>
> MDL_context::acquire_lock(bool fast_lane) ->
> MDL_context::try_acquire_lock_impl(fast_lane) ->
> MDL_map::try_acquire_lock(fast_lane) ->
> MDL_lock::try_acquire_lock(fast_lane)
>
> Same for MDL_context::release_lock()? Or did you have some other idea how it
> should be implemented?
I was thinking about having a dedicated function for using backup locks.
Something like replacing:
  MDL_REQUEST_INIT(&mdl_request, MDL_key::BACKUP, "", "",
                   MDL_BACKUP_BLOCK_DDL, MDL_EXPLICIT);
  if (thd->mdl_context.acquire_lock(&mdl_request,
                                    thd->variables.lock_wait_timeout))
    DBUG_RETURN(1);
with
  thd->mdl_context.acquire_backup_lock(&mdl_request, MDL_BACKUP_BLOCK_DDL,
                                       MDL_EXPLICIT)
The code could then take into account that the lock is a backup lock and use
faster versions of find_ticket() (there is no key to check etc).
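To illustrate (a rough sketch only; the helper name and exact signature are my
suggestion, nothing that exists in the patch):
  /*
    Sketch of a dedicated helper for BACKUP namespace locks. The body just
    forwards to the generic path for now; later it could skip the
    fast_lane()/fast_lane_exception() checks and use a cheaper
    find_ticket() (there is no key to compare).
  */
  bool MDL_context::acquire_backup_lock(MDL_request *mdl_request,
                                        enum_mdl_type mdl_type,
                                        enum_mdl_duration duration,
                                        double lock_wait_timeout)
  {
    MDL_REQUEST_INIT(mdl_request, MDL_key::BACKUP, "", "", mdl_type, duration);
    return acquire_lock(mdl_request, lock_wait_timeout);
  }

  /* The caller side would then become: */
  if (thd->mdl_context.acquire_backup_lock(&mdl_request, MDL_BACKUP_BLOCK_DDL,
                                           MDL_EXPLICIT,
                                           thd->variables.lock_wait_timeout))
    DBUG_RETURN(1);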
> I'd rather devirtualize fast_lane() and fast_lane_exception() if you're concerned
> about virtual function calls.
I just wanted to explore the idea that we could make backup locks even
faster by having
optimized functions for these.
Don't know if this makes a lot of sense, but still probably worth
thinking a bit about it.
>>
>> - In MDL_lock you have all variable declarations in the middle of
>> function declarations.
>> It would be helpful if these could be moved to the beginning of the
>> class, like we have in MariaDB for almost all other classes.
>> This makes it easier to find the variable declarations, as one now has
>> to search forward in the class in some cases and backward in others.
>
> This is exactly what I did. With my cleanup all MDL_lock methods go after
> MDL_lock variables. What goes before MDL_lock variables are private
> subclasses, declaring types of MDL_lock variables. And they cannot be
> moved after variables.
>
> It is possible to move subclasses before MDL_lock, but then they lose
> private MDL_lock member status. And the one reading this code will
> have to keep in mind that these subclasses may be used outside of
> MDL_lock, not good either.
I did not notice these were subclasses. There were so many subclasses
with a lot of code that it was hard to find where the variables were declared.
Not a big deal.
>>
>> - It took me some time to understand the benefit of having a different
>> m_disable_count in each fast_lane instead of having a counter
>> in MDL_lock and avoiding a loop over all lanes when enabling/disabling
>> the lanes. It would be good to add a comment before m_disable_count
>> definition to explain this. Something like
>> /*
>> We have to have a distinct counter in each fast_lane to ensure that
>> enable/disable of a lane is done under the lane mutex, not under
>> a global mutex
>> */
>
> Not exactly. We won't avoid a loop over all lanes anyway, since we have
> to move tickets. It is not impossible to have it in MDL_lock, it is just that
> the code will be harder to follow if it is a member of MDL_lock. I also
> expressed my concern in a private discussion: on x86_64 we can save
> one cache line by moving m_disable_count to MDL_lock.
My point is that I think it would be very hard to have a counter in
MDL_lock the way the enable/disable loops are written.
You use the fact that each fast lane has its own counter to
check whether the lane has already been enabled/disabled by another
concurrent thread. You could not do this if the counter was in MDL_lock
(without adding another variable for the same purpose).
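To make the point concrete, this is roughly how I read the per-lane
enable/disable (a sketch of the idea only, not the actual patch code; the real
disable path also moves the lane's tickets to MDL_lock::m_granted):
  void Fast_lane::disable()
  {
    mysql_mutex_lock(&m_mutex);
    /*
      Check and update happen under the same lane mutex, so a concurrent
      enable/disable of this lane is detected here without taking any
      MDL_lock-wide lock. With a single counter in MDL_lock this check
      would need another variable or a global mutex.
    */
    m_disable_count++;
    mysql_mutex_unlock(&m_mutex);
  }

  void Fast_lane::enable()
  {
    mysql_mutex_lock(&m_mutex);
    DBUG_ASSERT(m_disable_count > 0);
    m_disable_count--;            /* Lane is usable again when this reaches 0 */
    mysql_mutex_unlock(&m_mutex);
  }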
> Added this comment:
> /**
> Counts number of granted/pending non-fast lane locks.
>
> Having it as a member of Fast_lane allows simpler and cleaner
> implementation. Otherwise it is a property of MDL_lock.
> */
<cut>
>>
>>
>> + /** MDL_lock::iterate helper. */
>> + bool iterate(mdl_iterator_callback callback, void *argument) const
>> + {
>> + mysql_mutex_lock(&m_mutex);
>> + DBUG_ASSERT(enabled() || m_list.empty());
>> + bool res= std::any_of(m_list.begin(), m_list.end(),
>> + [callback, argument](MDL_ticket &ticket) {
>> + return callback(&ticket, argument, true);
>> + });
>> + mysql_mutex_unlock(&m_mutex);
>> + return res;
>> + }
>>
>> Any chance you could code this without std:: ???
>>
>> It makes code much harder to read for C coders (like me). I also do
>> not want to see too much dependence on std:: in MariaDB but instead
>> use our list primitives. This keeps the code more uniform and easier
>> to debug! It also ensures that all memory allocations are accounted for!
>>
>> I greatly prefer inline code that is easy to debug step by step!
>> (I would not be able to debug the above if there would ever be a problem with
>> the list!)
>>
>> However if you greatly prefer you can keep the std:: code, but then I cannot
>> help you with debugging this part of the code.
>
> It was transitioned from I_P_List to ilist by d7129565262 and that revision introduced
> use of std::any_of(). I just reuse what we have (notice that I didn't add any new
> members to MDL_ticket).
Yes, I know you did not touch this code. My comment was just a general
observation and I wanted you to be aware of it if you would ever
touch this part of the code.
> If you're suggesting to make it look like:
> for (ticket= m_list; ticket= ticket->next; ticket)
>
> I don't think we have routines that allow such list handling.
I_list should be good enough for most purposes.
> If you're suggesting old I_P_List solution, then it doesn't seem to be any better
> than std::any_of():
> MDL_lock::Ticket_iterator it(m_waiting);
> MDL_ticket *ticket;
> while ((ticket= it++))
I think it is easier to read and much easier to debug.
It is also what we have in almost all other parts of the server.
> It gives the false feeling that we iterate an array of pointers. There is a possibility
> that ilist can be iterated this way too.
It is still what we are using almost everywhere else.
With std:: one has to learn new things that are not used elsewhere and are
hard to understand and debug.
> So, how exactly do you want me to transform this? Also keep in mind that it is
> more than Fast_lanes, we'll have to update all uses of MDL_lock::m_granted and
> MDL_lock::m_waiting.
Wouldn't the code that you showed above work?
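For the fast lane iterate() helper, for instance, a plain loop with the same
behaviour could look like this sketch (it still relies on ilist's
begin()/end(), which the std::any_of() version already uses):
  bool iterate(mdl_iterator_callback callback, void *argument) const
  {
    bool res= false;
    mysql_mutex_lock(&m_mutex);
    DBUG_ASSERT(enabled() || m_list.empty());
    for (MDL_ticket &ticket : m_list)
    {
      if (callback(&ticket, argument, true))
      {
        res= true;              /* Stop at the first true, as any_of() does */
        break;
      }
    }
    mysql_mutex_unlock(&m_mutex);
    return res;
  }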
> FWIW: I also have to spend extra time for reading such code, in many cases I
> have to consult cppreference. But I feel like we'll have to spend time for learning,
> these things will spread eventually. We're on C++17 already.
MariaDB is not a C++ program, it is a C program that uses some C++
functionality.
I REALLY do not want MariaDB to become a full C++ program!
<cut>
>> void downgrade(MDL_ticket *ticket, Functor update_ticket_type)
>> {
>> + DBUG_ASSERT(!m_strategy->fast_lane(ticket->get_type()));
>> + DBUG_ASSERT(!ticket->m_fast_lane.load(std::memory_order_relaxed));
>> + DBUG_ASSERT(m_strategy->fast_lane_exception(ticket->get_type()) ||
>> + fast_lanes_all_disabled());
>>
>> Isn't fast_lanes_all_disabled() always true here in the current implementation?
>> (At least the comment indicates this is the case)
<cut>
> It was supposed to be clear from the comments to upgrade()/downgrade().
> But apparently it is not, comment extended:
> In other words:
> 1. fast_lane() lock types are never upgraded
> 2. fast_lane_exception() lock types can only be upgraded to
> fast_lane_exception() lock types
> 3. non-fast_lane() and non-fast_lane_exception() lock types
> can only be upgraded to non-fast_lane() and
> non-fast_lane_exception() lock types
Looks much better. Thanks.
>>
>>
>>
>> template<typename Functor>
>> void upgrade(MDL_ticket *ticket, Functor update_ticket_type,
>> MDL_ticket *remove)
>>
>> + if (m_fast_lane && !m_strategy->fast_lane_exception(remove->get_type()))
>> + enable_fast_lanes();
>>
>> Please add an explanation under which conditions in the current codebase
>> the call to enable_fast_lanes() is happening.
<cut>
>> Answer from slack discussion:
>> "Both MDL_BACKUP_FTWRL1 and MDL_BACKUP_FTWRL2 disable fast lanes, and
>> they need to
>> re-enable them once they're gone."
>
> This is at the very core of this idea. Fast lane locks must not shoot
> themselves in the foot by disabling fast lanes. This comment was supposed to
> make it clear:
>
> Heavyweight lock types represent ongoing backup: MDL_BACKUP_START,
> MDL_BACKUP_FLUSH, MDL_BACKUP_WAIT_FLUSH, MDL_BACKUP_WAIT_DDL,
> MDL_BACKUP_WAIT_COMMIT, MDL_BACKUP_FTWRL1, MDL_BACKUP_FTWRL2,
> MDL_BACKUP_BLOCK_DDL. These locks are always served by conventional
> MDL_lock data structures. Whenever such lock is requested, fast
> lanes are disabled and all tickets registered in fast lanes are
> moved to conventional MDL_lock data structures. Until such locks
> are released or aborted, lightweight lock requests are served by
> conventional MDL_lock data structures.
Add at least a note in the comment for upgrade() so the reader can refer to
the above comment for more information.
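Something as small as this in the upgrade() comment would be enough (the
wording is only a suggestion):
  /*
    Note: MDL_BACKUP_FTWRL1 and MDL_BACKUP_FTWRL2 both disable fast lanes.
    When upgrading FTWRL1 -> FTWRL2 the FTWRL1 ticket is removed, so its
    disable has to be undone here via enable_fast_lanes(). See the
    "Heavyweight lock types represent ongoing backup" comment for details.
  */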
>> in try_acquire_lock():
>> {
>> + if (m_fast_lane)
>> + {
>> + DBUG_ASSERT(mdl_instances);
>> + if (m_strategy->fast_lane(ticket->get_type()))
>> + {
>> + uint lane= mdl_context->get_thd()->thread_id % mdl_instances;
>> + if (m_fast_lane[lane].add_ticket(ticket))
>> + return res;
>> + }
>> + else if (!m_strategy->fast_lane_exception(ticket->get_type()))
>> + need_disable_fast_lanes= true;
>> + }
>>
>> Under backup or FTWRL, when fast lanes are disabled, the above code
>> may use an extra mutex for every BACKUP lock. This may potentially
>> degrade performance notably when backup is running!
>> Is this the case?
>>
>> We may have to consider checking enabled() outside of the mutex in
>> add_ticket() or the caller to not get a notable degradation when
>> backup is running.
>>
>> If this is ONLY under the BACKUP_WAIT_COMMIT lock, then the current code is ok
>> as this lock is intended to be kept very short (less than a second).
>>
>> However, if we will start using MDL locks as soon as the backup starts
>> and as the backup can take hours, this is a problem.
>>
>> We need to do a benchmark where we run HammerDB while one connection
>> is doing 'backup stage start' to understand the consequences.
>> I have asked Steve to do this.
>
> According to Steve/Rahul benchmark, we're back to pre-fast lanes performance
> with BACKUP STAGE START active. With a slight bonus of +4%, because I made
> conventional critical section smaller.
Good to hear that it is not worse than before (which could be the case
as we are now locking one extra mutex compared to before).
I still wonder what effect the patch I suggested earlier would have on the
normal and the backup case:
  bool add_ticket(MDL_ticket *ticket)
  {
    if (!enabled())
      return 0;
    DBUG_ASSERT(!ticket->m_fast_lane.load(std::memory_order_relaxed));
    mysql_mutex_lock(&m_mutex);
    bool res= enabled();
    ... /* original code continues */
Do you think it is worth testing this?
>>
>>
>> In try_acquire_lock() we may also call disable_fast_lanes() even if
>> fast lanes are already disabled. It would be good to add a comment why this
>> is ok, for example that enable_fast_lanes() will be called when the ticket
>> is released.
>
> There're comments around m_disable_count, Fast_lane::enable(),
> Fast_lane::disable():
> ...
> Counts number of granted/pending non-fast lane locks.
> ...
> Lane can be disabled multiple times.
> ...
> Fast lane can be used again once all disablers are gone.
Yes, it is commented in other places of the code. However, well-commented code
means that one should be able to read any line of code without having to hunt
for other comments.
Please consider adding a comment in try_acquire_lock() to make this a bit more
clear.
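For example, something like this right above the need_disable_fast_lanes
assignment (wording is only a suggestion, based on the existing comments):
      /*
        Fast lanes may already be disabled (e.g. by a concurrent backup or
        FTWRL request). Disabling them again is fine: disables are counted
        per lane, and the matching enable_fast_lanes() is done when this
        ticket is released.
      */
      else if (!m_strategy->fast_lane_exception(ticket->get_type()))
        need_disable_fast_lanes= true;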
<cut>
>> @@ -1992,6 +2369,9 @@ void MDL_lock::remove_ticket(LF_PINS *pins,
>> Ticket_list MDL_lock::*list,
>> reschedule_waiters();
>> }
>> mysql_prlock_unlock(&m_rwlock);
>> + if (m_fast_lane && !m_strategy->fast_lane(ticket->get_type()) &&
>> + !m_strategy->fast_lane_exception(ticket->get_type()))
>> + enable_fast_lanes();
>>
>> You have the above exact code in two places. Moving this to an inline function
>> and checking the fast_lane and fast_lane_exception bits at the same time would
>> make a small improvement.
>
> No, the code is different. In upgrade() we don't have to check for
> fast_lane(ticket->get_type()).
The same test is in MDL_lock::remove_ticket():
    mysql_prlock_unlock(&m_rwlock);
    if (m_fast_lane && !m_strategy->fast_lane(ticket->get_type()) &&
        !m_strategy->fast_lane_exception(ticket->get_type()))
      enable_fast_lanes();
and in add_cloned_ticket():
    if (m_fast_lane && !m_strategy->fast_lane(ticket->get_type()) &&
        !m_strategy->fast_lane_exception(ticket->get_type()))
      disable_fast_lanes();
> But given so many complications that fast_lane_exception() brought to us,
> I'm now inclined to remove it and implement upgrade()/downgrade() for
> fast_lane() locks instead.
ok...
> I have updated https://github.com/MariaDB/server/pull/4048 with all agreed
> changes. The rest need further discussion.
>
> To fetch updated code use:
> git fetch refs/pull/4048/head
> now FETCH_HEAD must be pointing to f4f3e979c08
> then checkout as usual, like git checkout -b svoj-mdl FETCH_HEAD
Thanks!
Regards,
Monty

Review of MDEV-19749 - MDL scalability regression after backup locks
by Michael Widenius 22 May '25
Hi!
Review of
commit 7919442a866491d78c342c809f080c3fb006d626 (HEAD -> svoj-mdl)
MDEV-19749 - MDL scalability regression after backup locks
+ --metadata-locks-instances=#
+ Number of fast lanes to create for metadata locks. Can be
+ used to improve DML scalability by eliminating
+ MDL_lock::rwlock load. Use 0 to disable MDL fast lanes.
+ Supported MDL namespaces: BACKUP
If possible, I suggest we use --metadata-locks-instances=1 to disable
the fast lanes. (Assuming there is no speedup or other gain by using
1 instance of fast lanes). If we do this, we should allow values
1-256 for the variable.
I think the change makes sense as what the user defines is lock instances,
not 'fast lanes'. Having 0 lock instances can imply to users that there
are no locks, which is not really the case.
Some observations while looking at the code:
- Most of the locks related to MDL_BACKUP_### are used with a single
MDL. Should we create a fast path for these where we do not even
have to call fast_lane() or fast_lane_exception() as we already know
which type of lock we are using?
- In MDL_lock you have all variable declarations in the middle of
function declarations.
It would be helpful if these could be moved to the beginning of the
class, like we have in MariaDB for almost all other classes.
This makes it easier to find the variable declarations, as one now has
to search forward in the class in some cases and backward in others.
- It took me some time to understand the benefit of having a different
m_disable_count in each fast_lane instead of having a counter
in MDL_lock and avoiding a loop over all lanes when enabling/disabling
the lanes. It would be good to add a comment before m_disable_count
definition to explain this. Something like
/*
We have to have a distinct counter in each fast_lane to ensure that
enable/disable of a lane is done under the lane mutex, not under
a global mutex
*/
+ bool add_ticket(MDL_ticket *ticket)
+ {
+ DBUG_ASSERT(!ticket->m_fast_lane.load(std::memory_order_relaxed));
+ mysql_mutex_lock(&m_mutex);
+ bool res= enabled();
+ DBUG_ASSERT(res || m_list.empty());
+ if (res)
+ {
+ m_list.push_back(*ticket);
+ ticket->m_fast_lane.store(this, std::memory_order_relaxed);
+ }
+ mysql_mutex_unlock(&m_mutex);
+ return res;
+ }
As you only check for enabled() within the mutex (and no other variables),
I think you can safely avoid the mutex for the common case:
  bool add_ticket(MDL_ticket *ticket)
  {
    if (!enabled())
      return 0;
    DBUG_ASSERT(!ticket->m_fast_lane.load(std::memory_order_relaxed));
    mysql_mutex_lock(&m_mutex);
    bool res= enabled();
    ... /* Original code */
  }
This does not work in remove_ticket() as the condition is more complex.
I assume that remove_ticket() is also only used if we are using fast lanes?
If not, then we could do the following change:
+ bool remove_ticket(MDL_ticket *ticket)
+ {
+ mysql_mutex_lock(&m_mutex);
+ DBUG_ASSERT(enabled() || m_list.empty());
+ Fast_lane *lane= static_cast<Fast_lane *>(
+ ticket->m_fast_lane.load(std::memory_order_relaxed));
+ if (lane)
+ {
+ DBUG_ASSERT(enabled());
+ DBUG_ASSERT(lane == this);
+ m_list.remove(*ticket);
+ }
+ mysql_mutex_unlock(&m_mutex);
+ return lane != nullptr;
+ }
->
  bool remove_ticket(MDL_ticket *ticket)
  {
    Fast_lane *lane= static_cast<Fast_lane *>(
        ticket->m_fast_lane.load(std::memory_order_relaxed));
    if (!lane)
      return 0;
    mysql_mutex_lock(&m_mutex);
    ... /* Original code */
  }
Conclusion of the slack discussion of the above:
- Callers of add_ticket() and remove_ticket() have already checked that
  fast lanes are used, and it is only under very rare circumstances
  that the if (enabled()) and if (lane) checks are false.
- However, we should consider adding likely() to the ifs and a comment
  describing under which conditions the if() can be false (see the sketch
  below).
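Putting that together, add_ticket() could end up looking something like this
sketch (the comments describe my understanding of when the checks can fail):
  bool add_ticket(MDL_ticket *ticket)
  {
    /*
      Cheap unlocked check: covers the common case where the lane was
      already disabled by an ongoing backup/FTWRL before we got here.
    */
    if (unlikely(!enabled()))
      return false;
    DBUG_ASSERT(!ticket->m_fast_lane.load(std::memory_order_relaxed));
    mysql_mutex_lock(&m_mutex);
    bool res= enabled();          /* Re-check: a concurrent backup/FTWRL may
                                     have disabled the lane in the meantime */
    DBUG_ASSERT(res || m_list.empty());
    if (likely(res))
    {
      m_list.push_back(*ticket);
      ticket->m_fast_lane.store(this, std::memory_order_relaxed);
    }
    mysql_mutex_unlock(&m_mutex);
    return res;
  }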
/**
Disables fast lane, moves tickets to MDL_lock::m_granted.
I like how disable() converts 'fast lane' tickets to conventional MDL_lock
tickets. Great idea, as it simplifies things a lot!
+ /** MDL_lock::iterate helper. */
+ bool iterate(mdl_iterator_callback callback, void *argument) const
+ {
+ mysql_mutex_lock(&m_mutex);
+ DBUG_ASSERT(enabled() || m_list.empty());
+ bool res= std::any_of(m_list.begin(), m_list.end(),
+ [callback, argument](MDL_ticket &ticket) {
+ return callback(&ticket, argument, true);
+ });
+ mysql_mutex_unlock(&m_mutex);
+ return res;
+ }
Any chance you could code this without std:: ???
It makes code much harder to read for C coders (like me). I also do
not want to see too much dependence on std:: in MariaDB but instead
use our list primitives. This keeps the code more uniform and easier
to debug! It also ensures that all memory allocations are accounted for!
I greatly prefer inline code that is easy to debug step by step!
(I would not be able to debug the above if there would ever be a problem with
the list!)
However if you greatly prefer you can keep the std:: code, but then I cannot
help you with debugging this part of the code.
-------
@@ -678,10 +918,15 @@ class MDL_lock
{
DBUG_ASSERT(key_arg->mdl_namespace() == MDL_key::BACKUP);
mysql_prlock_init(key_MDL_lock_rwlock, &m_rwlock);
+ m_fast_lane=
+ mdl_instances ? new (std::nothrow) Fast_lane[mdl_instances] : nullptr;
}
If we use '1' to mean no fast lanes, we have to change the above line.
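That is, the construction would become something like this (a sketch; whether
the array keeps mdl_instances elements or becomes mdl_instances - 1, and the
matching lane selection in try_acquire_lock(), is a separate decision):
  m_fast_lane= mdl_instances > 1
                   ? new (std::nothrow) Fast_lane[mdl_instances] : nullptr;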
void downgrade(MDL_ticket *ticket, Functor update_ticket_type)
{
+ DBUG_ASSERT(!m_strategy->fast_lane(ticket->get_type()));
+ DBUG_ASSERT(!ticket->m_fast_lane.load(std::memory_order_relaxed));
+ DBUG_ASSERT(m_strategy->fast_lane_exception(ticket->get_type()) ||
+ fast_lanes_all_disabled());
Isn't fast_lanes_all_disabled() always true here in the current implementation?
(At least the comment indicates this is the case)
Looking at the upgrade() call, it looks like if we remove a ticket, which can
call enable_fast_lanes(), and then call downgrade again, the comment for
downgrade would not hold (it says that tickets for downgrade are never
fast_lane).
If the above assumption is true, we should update the comment for downgrade to
say that the assumption about only using conventional locks requires that one
does not do downgrade-upgrade-downgrade of a lock (which we do not do now).
template<typename Functor>
void upgrade(MDL_ticket *ticket, Functor update_ticket_type,
MDL_ticket *remove)
+ if (m_fast_lane && !m_strategy->fast_lane_exception(remove->get_type()))
+ enable_fast_lanes();
Please add an explanation under which conditions in the current codebase
the call to enable_fast_lanes() is happening.
This is needed as the move to fast lanes here is somewhat in conflict with
the function comment that says we are never using fast lane locks here.
I don't understand how we can enable fast_lanes for a ticket for which
we have !m_strategy->fast_lane(ticket->get_type()), which I assume
means that the ticket is not suitable for fast_lane.
I did some testing and found out that the enable_fast_lanes() code
happens when upgrading the global read lock from MDL_BACKUP_FTWRL1 to
MDL_BACKUP_FTWRL2.
As fast_lane(MDL_BACKUP_FTWRL2) and fast_lane_exception(MDL_BACKUP_FTWRL2)
are both false, why are we enabling fast lanes for MDL_BACKUP_FTWRL2?
Looking at try_acquire_lock(), the rule looks like:
- If m_fast_lane is set:
  - If fast_lane(ticket_type) is true, use fast lanes,
    else if !fast_lane_exception(ticket_type), disable fast lanes.
This does not explain why MDL_BACKUP_FTWRL2 would enable fast lanes.
Answer from slack discussion:
"Both MDL_BACKUP_FTWRL1 and MDL_BACKUP_FTWRL2 disable fast lanes, and
they need to
re-enable them once they're gone."
in try_acquire_lock():
{
+ if (m_fast_lane)
+ {
+ DBUG_ASSERT(mdl_instances);
+ if (m_strategy->fast_lane(ticket->get_type()))
+ {
+ uint lane= mdl_context->get_thd()->thread_id % mdl_instances;
+ if (m_fast_lane[lane].add_ticket(ticket))
+ return res;
+ }
+ else if (!m_strategy->fast_lane_exception(ticket->get_type()))
+ need_disable_fast_lanes= true;
+ }
Under backup or FTWRL, when fast lanes are disabled, the above code
may use an extra mutex for every BACKUP lock. This may potentially
degrade performance notably when backup is running!
Is this the case?
We may have to consider checking enabled() outside of the mutex in
add_ticket() or the caller to not get a notable degradation when
backup is running.
If this is ONLY under the BACKUP_WAIT_COMMIT lock, then the current code is ok
as this lock is intended to be kept very short (less than a second).
However, if we will start using MDL locks as soon as the backup starts
and as the backup can take hours, this is a problem.
We need to do a benchmark where we run HammerDB while one connection
is doing 'backup stage start' to understand the consequences.
I have asked Steve to do this.
In try_acquire_lock() we may also call disable_fast_lanes() even if fast
lanes are already disabled. It would be good to add a comment why this is ok,
for example that enable_fast_lanes() will be called when the ticket is
released.
else if (!m_strategy->fast_lane_exception(ticket->get_type()))
need_disable_fast_lanes= true;
...
if (likely(m_strategy))
Under what circumstances can m_strategy be 0?
Is it possible that when m_strategy is 0, m_fast_lane is not 0?
If yes, then you will have a crash at the above if.
I think this is impossible as m_strategy is always set for backup locks.
In any case, please add a comment before 'if (likely(m_strategy))' explaining
when it could be 0. If it can never be zero, then change the if to a
DBUG_ASSERT().
Discussion on slack showed this is ok.
Please add a comment before the if:
/*
m_strategy can be 0 if a concurrent thread is deleting the MDL lock.
In this case the caller will do a retry
*/
+ Lock requests that were served by fast lanes are redirected to fast
+ lanes. Another thread can be disabling fast lanes concurrently and
+ resetting tiket->m_fast_lane. To handle such scenario properly
tiket -> ticket
@@ -1992,6 +2369,9 @@ void MDL_lock::remove_ticket(LF_PINS *pins,
Ticket_list MDL_lock::*list,
reschedule_waiters();
}
mysql_prlock_unlock(&m_rwlock);
+ if (m_fast_lane && !m_strategy->fast_lane(ticket->get_type()) &&
+ !m_strategy->fast_lane_exception(ticket->get_type()))
+ enable_fast_lanes();
You have the above exact code in two places. Moving this to an inline function
and checking the fast_lane and fast_lane_exception bits at the same time would
make a small improvement.
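For example (a sketch; the helper name is mine, not from the patch):
  /*
    True when this ticket is served by the conventional MDL_lock structures
    and fast lanes were disabled on its behalf when it was acquired.
  */
  bool needs_fast_lane_reenable(const MDL_ticket *ticket) const
  {
    return m_fast_lane && !m_strategy->fast_lane(ticket->get_type()) &&
           !m_strategy->fast_lane_exception(ticket->get_type());
  }

  ...
  mysql_prlock_unlock(&m_rwlock);
  if (needs_fast_lane_reenable(ticket))
    enable_fast_lanes();
If fast_lane() and fast_lane_exception() are bitmap tests, they could also be
combined into a single mask test inside such a helper.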
Regards,
Monty