[Maria-developers] Updated (by Guest): index_merge: fair choice between index_merge union and range access (24)
by worklog-noreply@askmonty.org 27 May '09
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: index_merge: fair choice between index_merge union and range access
CREATION DATE..: Tue, 26 May 2009, 12:10
SUPERVISOR.....: Monty
IMPLEMENTOR....: Psergey
COPIES TO......: Psergey
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 24 (http://askmonty.org/worklog/?tid=24)
VERSION........: 9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Guest - Wed, 27 May 2009, 14:41)=-=-
Category updated.
--- /tmp/wklog.24.old.8414 2009-05-27 14:41:43.000000000 +0300
+++ /tmp/wklog.24.new.8414 2009-05-27 14:41:43.000000000 +0300
@@ -1 +1 @@
-Client-BackLog
+Server-RawIdeaBin
-=-=(Guest - Wed, 27 May 2009, 14:41)=-=-
Version updated.
--- /tmp/wklog.24.old.8414 2009-05-27 14:41:43.000000000 +0300
+++ /tmp/wklog.24.new.8414 2009-05-27 14:41:43.000000000 +0300
@@ -1 +1 @@
-Server-9.x
+9.x
-=-=(Guest - Wed, 27 May 2009, 13:59)=-=-
Title modified.
--- /tmp/wklog.24.old.9498 2009-05-27 13:59:23.000000000 +0300
+++ /tmp/wklog.24.new.9498 2009-05-27 13:59:23.000000000 +0300
@@ -1 +1 @@
-index_merge optimizer: dont discard index_merge union strategies when range is available
+index_merge: fair choice between index_merge union and range access
-=-=(Guest - Wed, 27 May 2009, 13:59)=-=-
Version updated.
--- /tmp/wklog.24.old.9498 2009-05-27 13:59:23.000000000 +0300
+++ /tmp/wklog.24.new.9498 2009-05-27 13:59:23.000000000 +0300
@@ -1 +1 @@
-Benchmarks-3.0
+Server-9.x
-=-=(Guest - Tue, 26 May 2009, 13:27)=-=-
High-Level Specification modified.
--- /tmp/wklog.24.old.305 2009-05-26 13:27:32.000000000 +0300
+++ /tmp/wklog.24.new.305 2009-05-26 13:27:32.000000000 +0300
@@ -1 +1,70 @@
+(Not a ready HLS but draft)
+<contents>
+Solution overview
+Limitations
+TODO
+
+</contents>
+
+Solution overview
+=================
+The idea is to delay discarding potential index_merge plans until the point
+where it is really necessary.
+
+This way, we won't have to do much changes in the range analyzer, but will be
+able to keep potential index_merge plan just enough so that it's possible to
+take it into consideration together with range access plans.
+
+Since there are no changes in the optimizer, the ability to consider both
+range and index_merge options will be limited to WHERE clauses of this form:
+
+ WHERE := range_cond(key1_1) AND
+ range_cond(key2_1) AND
+ other_cond AND
+ index_merge_OR_cond1(key3_1, key3_2, ...)
+ index_merge_OR_cond2(key4_1, key4_2, ...)
+
+where
+
+ index_merge_OR_cond{N} := (range_cond(keyN_1) OR
+ range_cond(keyN_2) OR ...)
+
+
+ range_cond(keyX) := condition that allows to construct range access of keyX
+ and doesn't allow to construct range/index_merge accesses
+ for any keys of the table in question.
+
+
+For such WHERE clauses, the range analyzer will produce SEL_TREE of this form:
+
+ SEL_TREE(
+ range(key1_1),
+ ...
+ range(key2_1),
+ SEL_IMERGE( (1)
+ SEL_TREE(key3_1})
+ SEL_TREE(key3_2})
+ ...
+ )
+ ...
+ )
+
+which can be used to make a cost-based choice between range and index_merge.
+
+Limitations
+-----------
+This will not be a full solution in a sense that the range analyzer will not
+be able to produce sel_tree (1) if the WHERE clause is specified in other form
+(e.g. brackets were opened).
+
+TODO
+----
+* is it a problem if there are keys that are referred to both from
+ index_merge and from range access?
+
+* How strict is the limitation on the form of the WHERE?
+
+* TODO: The optimizer didn't compare costs of index_merge and range before (ok
+ it did but that was done for accesses to different tables). Will there be any
+ possible gotchas here?
DESCRIPTION:
The current range optimizer will discard possible index_merge/[sort]union
strategies when there is a possible range plan. This is part of the
measures we take to avoid a combinatorial explosion of possible range/
index_merge strategies.
A bad side effect of this is that for WHERE clauses of the form
t.key1='very-frequent-value' AND (t.key2='rare-value1' OR t.key3='rare-value2')
the optimizer will
- discard union(key2,key3) in favor of range(key1), and then
- consider the cost of using range(key1) and discard that plan as well,
so the overall effect is that a possibly poor range access causes a possibly
good index_merge access not to be considered at all.
This WL entry is about lifting this limitation, at least for some subset of
WHERE clauses.
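As a hypothetical illustration (the table and index names here are made up,
not taken from the worklog), a query hitting this limitation could look like:

CREATE TABLE t (
  key1 VARCHAR(32), key2 VARCHAR(32), key3 VARCHAR(32),
  INDEX i1 (key1), INDEX i2 (key2), INDEX i3 (key3)
);

SELECT * FROM t
WHERE t.key1 = 'very-frequent-value'
  AND (t.key2 = 'rare-value1' OR t.key3 = 'rare-value2');

With the current behaviour only range access on i1 is considered for this
query; with this WL done, the optimizer could also weigh the cost of an
index_merge union over i2 and i3 and pick whichever is cheaper.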
HIGH-LEVEL SPECIFICATION:
(Not a finished HLS, just a draft)
<contents>
Solution overview
Limitations
TODO
</contents>
Solution overview
=================
The idea is to delay discarding potential index_merge plans until the point
where it is really necessary.
This way, we won't have to make many changes in the range analyzer, but will
be able to keep potential index_merge plans around just long enough that they
can be considered together with range access plans.
Since there are no changes in the optimizer, the ability to consider both
range and index_merge options will be limited to WHERE clauses of this form:
  WHERE := range_cond(key1_1) AND
           range_cond(key2_1) AND
           other_cond AND
           index_merge_OR_cond1(key3_1, key3_2, ...) AND
           index_merge_OR_cond2(key4_1, key4_2, ...)

where

  index_merge_OR_cond{N} := (range_cond(keyN_1) OR
                             range_cond(keyN_2) OR ...)

  range_cond(keyX) := a condition that allows constructing range access over
                      keyX and doesn't allow constructing range/index_merge
                      accesses over any other keys of the table in question.
For such WHERE clauses, the range analyzer will produce a SEL_TREE of this
form:

  SEL_TREE(
    range(key1_1),
    ...
    range(key2_1),
    SEL_IMERGE(                  (1)
      SEL_TREE(key3_1)
      SEL_TREE(key3_2)
      ...
    )
    ...
  )
which can be used to make a cost-based choice between range and index_merge.
Limitations
-----------
This will not be a full solution, in the sense that the range analyzer will
not be able to produce SEL_TREE (1) if the WHERE clause is specified in
another form (e.g. if the brackets have been expanded).
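To make this concrete, here is a hypothetical pair of logically equivalent
WHERE clauses (table and column names are made up): only the first matches
the restricted form above, while the second has the brackets "opened" and
would therefore not get SEL_TREE (1):

-- Restricted form: a range condition AND an OR-of-ranges over other keys
SELECT * FROM t
WHERE t.key1 = 10 AND (t.key2 = 1 OR t.key3 = 2);

-- Equivalent, but with the brackets expanded; the range analyzer would not
-- build the SEL_IMERGE part for this formulation
SELECT * FROM t
WHERE (t.key1 = 10 AND t.key2 = 1) OR (t.key1 = 10 AND t.key3 = 2);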
TODO
----
* Is it a problem if there are keys that are referred to both from
  index_merge and from range access?
* How strict is the limitation on the form of the WHERE clause?
* The optimizer didn't compare costs of index_merge and range before (well,
  it did, but only for accesses to different tables). Will there be any
  possible gotchas here?
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Updated (by Guest): index_merge: fair choice between index_merge union and range access (24)
by worklog-noreply@askmonty.org 27 May '09
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: index_merge: fair choice between index_merge union and range access
CREATION DATE..: Tue, 26 May 2009, 12:10
SUPERVISOR.....: Monty
IMPLEMENTOR....: Psergey
COPIES TO......: Psergey
CATEGORY.......: Client-BackLog
TASK ID........: 24 (http://askmonty.org/worklog/?tid=24)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Guest - Wed, 27 May 2009, 13:59)=-=-
Title modified.
--- /tmp/wklog.24.old.9498 2009-05-27 13:59:23.000000000 +0300
+++ /tmp/wklog.24.new.9498 2009-05-27 13:59:23.000000000 +0300
@@ -1 +1 @@
-index_merge optimizer: dont discard index_merge union strategies when range is available
+index_merge: fair choice between index_merge union and range access
-=-=(Guest - Wed, 27 May 2009, 13:59)=-=-
Version updated.
--- /tmp/wklog.24.old.9498 2009-05-27 13:59:23.000000000 +0300
+++ /tmp/wklog.24.new.9498 2009-05-27 13:59:23.000000000 +0300
@@ -1 +1 @@
-Benchmarks-3.0
+Server-9.x
-=-=(Guest - Tue, 26 May 2009, 13:27)=-=-
High-Level Specification modified.
--- /tmp/wklog.24.old.305 2009-05-26 13:27:32.000000000 +0300
+++ /tmp/wklog.24.new.305 2009-05-26 13:27:32.000000000 +0300
@@ -1 +1,70 @@
+(Not a ready HLS but draft)
+<contents>
+Solution overview
+Limitations
+TODO
+
+</contents>
+
+Solution overview
+=================
+The idea is to delay discarding potential index_merge plans until the point
+where it is really necessary.
+
+This way, we won't have to do much changes in the range analyzer, but will be
+able to keep potential index_merge plan just enough so that it's possible to
+take it into consideration together with range access plans.
+
+Since there are no changes in the optimizer, the ability to consider both
+range and index_merge options will be limited to WHERE clauses of this form:
+
+ WHERE := range_cond(key1_1) AND
+ range_cond(key2_1) AND
+ other_cond AND
+ index_merge_OR_cond1(key3_1, key3_2, ...)
+ index_merge_OR_cond2(key4_1, key4_2, ...)
+
+where
+
+ index_merge_OR_cond{N} := (range_cond(keyN_1) OR
+ range_cond(keyN_2) OR ...)
+
+
+ range_cond(keyX) := condition that allows to construct range access of keyX
+ and doesn't allow to construct range/index_merge accesses
+ for any keys of the table in question.
+
+
+For such WHERE clauses, the range analyzer will produce SEL_TREE of this form:
+
+ SEL_TREE(
+ range(key1_1),
+ ...
+ range(key2_1),
+ SEL_IMERGE( (1)
+ SEL_TREE(key3_1})
+ SEL_TREE(key3_2})
+ ...
+ )
+ ...
+ )
+
+which can be used to make a cost-based choice between range and index_merge.
+
+Limitations
+-----------
+This will not be a full solution in a sense that the range analyzer will not
+be able to produce sel_tree (1) if the WHERE clause is specified in other form
+(e.g. brackets were opened).
+
+TODO
+----
+* is it a problem if there are keys that are referred to both from
+ index_merge and from range access?
+
+* How strict is the limitation on the form of the WHERE?
+
+* TODO: The optimizer didn't compare costs of index_merge and range before (ok
+ it did but that was done for accesses to different tables). Will there be any
+ possible gotchas here?
DESCRIPTION:
The current range optimizer will discard possible index_merge/[sort]union
strategies when there is a possible range plan. This is part of the
measures we take to avoid a combinatorial explosion of possible range/
index_merge strategies.
A bad side effect of this is that for WHERE clauses of the form
t.key1='very-frequent-value' AND (t.key2='rare-value1' OR t.key3='rare-value2')
the optimizer will
- discard union(key2,key3) in favor of range(key1), and then
- consider the cost of using range(key1) and discard that plan as well,
so the overall effect is that a possibly poor range access causes a possibly
good index_merge access not to be considered at all.
This WL entry is about lifting this limitation, at least for some subset of
WHERE clauses.
HIGH-LEVEL SPECIFICATION:
(Not a finished HLS, just a draft)
<contents>
Solution overview
Limitations
TODO
</contents>
Solution overview
=================
The idea is to delay discarding potential index_merge plans until the point
where it is really necessary.
This way, we won't have to make many changes in the range analyzer, but will
be able to keep potential index_merge plans around just long enough that they
can be considered together with range access plans.
Since there are no changes in the optimizer, the ability to consider both
range and index_merge options will be limited to WHERE clauses of this form:
  WHERE := range_cond(key1_1) AND
           range_cond(key2_1) AND
           other_cond AND
           index_merge_OR_cond1(key3_1, key3_2, ...) AND
           index_merge_OR_cond2(key4_1, key4_2, ...)

where

  index_merge_OR_cond{N} := (range_cond(keyN_1) OR
                             range_cond(keyN_2) OR ...)

  range_cond(keyX) := a condition that allows constructing range access over
                      keyX and doesn't allow constructing range/index_merge
                      accesses over any other keys of the table in question.
For such WHERE clauses, the range analyzer will produce a SEL_TREE of this
form:

  SEL_TREE(
    range(key1_1),
    ...
    range(key2_1),
    SEL_IMERGE(                  (1)
      SEL_TREE(key3_1)
      SEL_TREE(key3_2)
      ...
    )
    ...
  )
which can be used to make a cost-based choice between range and index_merge.
Limitations
-----------
This will not be a full solution, in the sense that the range analyzer will
not be able to produce SEL_TREE (1) if the WHERE clause is specified in
another form (e.g. if the brackets have been expanded).
TODO
----
* Is it a problem if there are keys that are referred to both from
  index_merge and from range access?
* How strict is the limitation on the form of the WHERE clause?
* The optimizer didn't compare costs of index_merge and range before (well,
  it did, but only for accesses to different tables). Will there be any
  possible gotchas here?
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Updated (by Guest): Table elimination (17)
by worklog-noreply@askmonty.org 26 May '09
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Table elimination
CREATION DATE..: Sun, 10 May 2009, 19:57
SUPERVISOR.....: Monty
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 17 (http://askmonty.org/worklog/?tid=17)
VERSION........: Server-5.1
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Guest - Tue, 26 May 2009, 21:52)=-=-
Low Level Design modified.
--- /tmp/wklog.17.old.14120 2009-05-26 21:52:06.000000000 +0300
+++ /tmp/wklog.17.new.14120 2009-05-26 21:52:06.000000000 +0300
@@ -1,11 +1,14 @@
<contents>
1. Conditions for removal
+1.1 Quick check if there are candidates
2. Removal operation properties
3. Removal operation
4. User interface
-5. Todo, issues to resolve
-5.1 To resolve
-5.2 Resolved
+5. Tests and benchmarks
+6. Todo, issues to resolve
+6.1 To resolve
+6.2 Resolved
+
</contents>
It's not really about elimination of tables, it's about elimination of inner
@@ -29,6 +32,18 @@
GROUP BY and HAVING do not refer to the inner tables of the outer join
nest.
+1.1 Quick check if there are candidates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Before we start to enumerate join nests, here is a quick way to check if
+there *can be* something to be removed:
+
+ if ((tables used in select_list |
+ tables used in group/order by UNION |
+ tables used in where) != bitmap_of_all_tables)
+ {
+ attempt table elimination;
+ }
+
2. Removal operation properties
-------------------------------
* There is always one way to remove (no choice to remove either this or that)
@@ -56,22 +71,24 @@
-----------------
* We'll add an @@optimizer switch flag for table elimination. Tentative
name: 'table_elimination'.
+ (Note ^^ utility of the above questioned ^, as table elimination can never
+ be worse than no elimination. We're leaning towards not adding the flag)
-* With EXPLAIN, there are two options:
- - Show removed tables in a way similar to const tables, with some
- indication that they are removed.
- - Do not show them altogether.
-(the second one seems to be better? We're targeting a situation with VIEWs,
-where the user would not care about what tables were added into his query
-and then discarded from it?)
+* EXPLAIN will not show the removed tables at all. This will allow to check
+ if tables were removed, and also will behave nicely with anchor model and
+ VIEWs: stuff that user doesn't care about just won't be there.
+
+5. Tests and benchmarks
+-----------------------
+Should create a benchmark in sql-bench which checks if the dbms has table
+elimination.
+TODO elaborate
-5. Todo, issues to resolve
+6. Todo, issues to resolve
--------------------------
-5.1 To resolve
+6.1 To resolve
~~~~~~~~~~~~~~
-- See EXPLAIN question in section #4.
-
- Re-check how this works with equality propagation.
- Relationship with prepared statements.
@@ -87,7 +104,7 @@
that we'll meet outer joins which have N inner tables of which some are 1-row
MyISAM tables that do not have primary key.
-5.2 Resolved
+6.2 Resolved
~~~~~~~~~~~~
- outer->inner join conversion is not a problem for table elimination.
We make outer->inner conversions based on predicates in WHERE. If the WHERE
-=-=(Guest - Fri, 22 May 2009, 17:23)=-=-
High-Level Specification modified.
--- /tmp/wklog.17.old.30851 2009-05-22 17:23:38.000000000 +0300
+++ /tmp/wklog.17.new.30851 2009-05-22 17:23:38.000000000 +0300
@@ -6,7 +6,7 @@
elimination but not to the same extent.
Basically, what table elimination does, is to remove tables from the
-execution plan when it is unneccessary to include them. This can, of
+execution plan when it is unnecessary to include them. This can, of
course, only happen if the right circumstances arise. Let us for example
look at the following query:
@@ -22,30 +22,26 @@
When using A as the left table we ensure that the query will return at
least as many rows as there are in that table. For rows where the join
condition (B.id = A.id) is not met the selected column (A.colA) will
-contain a NULL value.
+still contain it's original value. The not seen B.* row would contain all NULL:s.
However, the result set could actually contain more rows than what is
found in tableA if there are duplicates of the column B.id in tableB. If
-A
-contains a row [1, "val1"] and B the rows [1, "other1a"],[1, "other1b"]
+A contains a row [1, "val1"] and B the rows [1, "other1a"],[1, "other1b"]
then two rows will match in the join condition. The only way to know
-what
-the result will look like is to actually touch both tables during
+what the result will look like is to actually touch both tables during
execution.
Instead, let's say that tableB contains rows that make it possible to
place a unique constraint on the column B.id, for example and often the
case a primary key. In this situation we know that we will get exactly
-as
-many rows as there are in tableA, since joining with tableB cannot
+as many rows as there are in tableA, since joining with tableB cannot
introduce any duplicates. If further, as in the example query, we do not
select any columns from tableB, touching that table during execution is
-unneccessary. We can remove the whole join operation from the execution
+unnecessary. We can remove the whole join operation from the execution
plan.
Both SQL Server 2005/2008 and Oracle 11g will deploy table elimination
-in
-the case described above. Let us look at a more advanced query, where
+in the case described above. Let us look at a more advanced query, where
Oracle fails.
select
-=-=(Guest - Fri, 22 May 2009, 17:00)=-=-
Version updated.
--- /tmp/wklog.17.old.30176 2009-05-22 17:00:35.000000000 +0300
+++ /tmp/wklog.17.new.30176 2009-05-22 17:00:35.000000000 +0300
@@ -1 +1 @@
-Maria-2.0
+Server-5.1
-=-=(Guest - Fri, 22 May 2009, 17:00)=-=-
Category updated.
--- /tmp/wklog.17.old.30162 2009-05-22 17:00:28.000000000 +0300
+++ /tmp/wklog.17.new.30162 2009-05-22 17:00:28.000000000 +0300
@@ -1 +1 @@
-Maria-Sprint
+Server-Sprint
-=-=(Guest - Fri, 22 May 2009, 17:00)=-=-
Version updated.
--- /tmp/wklog.17.old.30162 2009-05-22 17:00:28.000000000 +0300
+++ /tmp/wklog.17.new.30162 2009-05-22 17:00:28.000000000 +0300
@@ -1 +1 @@
-2.0
+9.x
-=-=(Guest - Fri, 22 May 2009, 17:00)=-=-
Version updated.
--- /tmp/wklog.17.old.30162 2009-05-22 17:00:28.000000000 +0300
+++ /tmp/wklog.17.new.30162 2009-05-22 17:00:28.000000000 +0300
@@ -1 +1 @@
-9.x
+Maria-2.0
-=-=(Guest - Fri, 22 May 2009, 17:00)=-=-
Category updated.
--- /tmp/wklog.17.old.30122 2009-05-22 17:00:02.000000000 +0300
+++ /tmp/wklog.17.new.30122 2009-05-22 17:00:02.000000000 +0300
@@ -1 +1 @@
-Server-RawIdeaBin
+Maria-Sprint
-=-=(Guest - Fri, 22 May 2009, 17:00)=-=-
Version updated.
--- /tmp/wklog.17.old.30122 2009-05-22 17:00:02.000000000 +0300
+++ /tmp/wklog.17.new.30122 2009-05-22 17:00:02.000000000 +0300
@@ -1 +1 @@
-Server-5.2
+2.0
-=-=(Guest - Fri, 22 May 2009, 17:00)=-=-
Status updated.
--- /tmp/wklog.17.old.30122 2009-05-22 17:00:02.000000000 +0300
+++ /tmp/wklog.17.new.30122 2009-05-22 17:00:02.000000000 +0300
@@ -1 +1 @@
-Un-Assigned
+Assigned
-=-=(Guest - Thu, 21 May 2009, 22:54)=-=-
Low Level Design modified.
--- /tmp/wklog.17.old.28468 2009-05-21 22:54:28.000000000 +0300
+++ /tmp/wklog.17.new.28468 2009-05-21 22:54:28.000000000 +0300
@@ -77,7 +77,7 @@
- Relationship with prepared statements.
On one hand, it's natural to desire to make table elimination a
once-per-statement operation, like outer->inner join conversion. We'll have
- to limit the applicablity by removing [MARK1] as that can change during
+ to limit the applicability by removing [MARK1] as that can change during
lifetime of the statement.
The other option is to do table elimination every time. This will require to
------------------------------------------------------------
-=-=(View All Progress Notes, 14 total)=-=-
http://askmonty.org/worklog/index.pl?tid=17&nolimit=1
DESCRIPTION:
Eliminate tables that are not needed from SELECT queries.
This will speed up some views and automatically generated queries.
Example:
CREATE TABLE tableB (id int primary key);
select
A.colA
from
tableA A
left outer join
tableB B
on
B.id = A.id;
In this case we can remove table B and the join from the query.
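For illustration, with the join eliminated the query above could be executed
as if it had been written simply as:

select
A.colA
from
tableA A;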
HIGH-LEVEL SPECIFICATION:
Here is an extended explanation of table elimination.
Table elimination is a feature found in some modern query optimizers, of
which Microsoft SQL Server 2005/2008 seems to have the most advanced
implementation. Oracle 11g has also been confirmed to use table
elimination but not to the same extent.
Basically, what table elimination does is remove tables from the
execution plan when it is unnecessary to include them. This can, of
course, only happen if the right circumstances arise. Let us, for example,
look at the following query:
select
A.colA
from
tableA A
left outer join
tableB B
on
B.id = A.id;
When using A as the left table we ensure that the query will return at
least as many rows as there are in that table. For rows where the join
condition (B.id = A.id) is not met, the selected column (A.colA) will
still contain its original value; the unmatched B.* columns would all be NULL.
However, the result set could actually contain more rows than are found in
tableA if there are duplicates of the column B.id in tableB. If
A contains a row [1, "val1"] and B the rows [1, "other1a"], [1, "other1b"],
then two rows will match the join condition. The only way to know
what the result will look like is to actually touch both tables during
execution.
Instead, let's say that tableB contains rows that make it possible to
place a unique constraint on the column B.id, for example (and most often
the case) a primary key. In this situation we know that we will get exactly
as many rows as there are in tableA, since joining with tableB cannot
introduce any duplicates. If, further, as in the example query, we do not
select any columns from tableB, touching that table during execution is
unnecessary. We can remove the whole join operation from the execution
plan.
Both SQL Server 2005/2008 and Oracle 11g will deploy table elimination
in the case described above. Let us look at a more advanced query, where
Oracle fails.
select
A.colA
from
tableA A
left outer join
tableB B
on
B.id = A.id
and
B.fromDate = (
select
max(sub.fromDate)
from
tableB sub
where
sub.id = A.id
);
In this example we have added another join condition, which ensures
that we only pick the matching row from tableB having the latest
fromDate. In this case tableB will contain duplicates of the column
B.id, so in order to ensure uniqueness the primary key has to contain
the fromDate column as well. In other words the primary key of tableB
is (B.id, B.fromDate).
Furthermore, since the subselect ensures that we only pick the latest
B.fromDate for a given B.id we know that at most one row will match
the join condition. We will again have the situation where joining
with tableB cannot affect the number of rows in the result set. Since
we do not select any columns from tableB, the whole join operation can
be eliminated from the execution plan.
SQL Server 2005/2008 will deploy table elimination in this situation as
well. We have not found a way to make Oracle 11g use it for this type of
query. Queries like these arise in two situations: either when you have a
denormalized model consisting of a fact table with several related
dimension tables, or when you have a highly normalized model where each
attribute is stored in its own table. The example with the subselect is
common whenever you store historized/versioned data.
LOW-LEVEL DESIGN:
<contents>
1. Conditions for removal
1.1 Quick check if there are candidates
2. Removal operation properties
3. Removal operation
4. User interface
5. Tests and benchmarks
6. Todo, issues to resolve
6.1 To resolve
6.2 Resolved
</contents>
It's not really about elimination of tables, it's about elimination of inner
sides of outer joins.
1. Conditions for removal
-------------------------
We can eliminate the inner side of an outer join if:
1. For each record combination of the outer tables, it will always produce
   exactly one record.
2. There are no references to columns of the inner tables anywhere else in
   the query.
#1 means that every table inside the outer join nest either:
- is a constant table:
  = because it can be accessed via eq_ref(const) access, or
  = because it is a zero-rows or one-row MyISAM-like table [MARK1]
- or has an eq_ref access method candidate.
#2 means that the WHERE clause, the ON clauses of embedding outer joins,
ORDER BY, GROUP BY and HAVING do not refer to the inner tables of the outer
join nest.
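As a hypothetical illustration (reusing the tableA/tableB schema from the
description, with tableB.id unique; colB is a made-up column), the first
query below satisfies both conditions while the second violates condition #2:

-- Inner side can be eliminated: B.id is unique and no B columns are
-- referenced outside the ON clause.
SELECT A.colA
FROM tableA A LEFT OUTER JOIN tableB B ON B.id = A.id;

-- Inner side cannot be eliminated: B.colB is referenced in the select list.
SELECT A.colA, B.colB
FROM tableA A LEFT OUTER JOIN tableB B ON B.id = A.id;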
1.1 Quick check if there are candidates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before we start to enumerate join nests, here is a quick way to check if
there *can be* something to be removed:
  if ((tables used in select_list |
       tables used in group/order by UNION |
       tables used in where) != bitmap_of_all_tables)
  {
    attempt table elimination;
  }
2. Removal operation properties
-------------------------------
* There is always one way to remove (no choice to remove either this or that)
* It is always better to remove as many tables as possible (at least within
  our cost model).
Thus, there is no need for any cost calculations/etc. It's an unconditional
rewrite.
3. Removal operation
--------------------
* Remove the outer join nest's nested join structure (i.e. get the
  outer join's TABLE_LIST object $OJ and remove it from $OJ->embedding,
  $OJ->embedding->nested_join. Update the table_map's of all ancestor nested
  joins). [MARK2]
* Move the tables and their JOIN_TABs to the front like it is done with const
  tables, with the exception that if the eliminated outer join nest was within
  another outer join nest, that shouldn't prevent us from moving away the
  eliminated tables.
* Update join->table_count and the all-join-tables bitmap.
* That's it. Nothing else?
4. User interface
-----------------
* We'll add an @@optimizer switch flag for table elimination. Tentative
name: 'table_elimination'.
  (Note: the utility of the above is questioned, as table elimination can
  never be worse than no elimination. We're leaning towards not adding the
  flag.)
* EXPLAIN will not show the removed tables at all. This will allow one to
  check whether tables were removed, and will also behave nicely with the
  anchor model and VIEWs: stuff that the user doesn't care about just won't
  be there, as sketched below.
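A minimal sketch of how this could be observed (the view name and colB column
are made up; the exact EXPLAIN output format is not specified here):

CREATE VIEW v AS
SELECT A.colA, B.colB
FROM tableA A LEFT OUTER JOIN tableB B ON B.id = A.id;

-- Only tableA columns are selected through the view; per the design above,
-- tableB would be eliminated and would not appear in the EXPLAIN output.
EXPLAIN SELECT colA FROM v;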
5. Tests and benchmarks
-----------------------
We should create a benchmark in sql-bench which checks whether the DBMS has
table elimination.
TODO: elaborate.
6. Todo, issues to resolve
--------------------------
6.1 To resolve
~~~~~~~~~~~~~~
- Re-check how this works with equality propagation.
- Relationship with prepared statements.
  On one hand, it's natural to want to make table elimination a
  once-per-statement operation, like outer->inner join conversion. We'll have
  to limit the applicability by removing [MARK1] as that can change during
  the lifetime of the statement.
  The other option is to do table elimination every time. This will require
  reworking operation [MARK2] to be undoable.
  I'm leaning towards doing the former. With anchor modeling, it is unlikely
  that we'll meet outer joins which have N inner tables of which some are
  1-row MyISAM tables that do not have a primary key.
6.2 Resolved
~~~~~~~~~~~~
- outer->inner join conversion is not a problem for table elimination.
  We make outer->inner conversions based on predicates in the WHERE clause.
  If the WHERE clause referred to an inner table (a requirement for OJ->IJ
  conversion) then table elimination would not be applicable anyway.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Guest): Table elimination (17)
by worklog-noreply@askmonty.org 26 May '09
by worklog-noreply@askmonty.org 26 May '09
26 May '09
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Table elimination
CREATION DATE..: Sun, 10 May 2009, 19:57
SUPERVISOR.....: Monty
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 17 (http://askmonty.org/worklog/?tid=17)
VERSION........: Server-5.1
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Guest - Tue, 26 May 2009, 21:52)=-=-
Low Level Design modified.
--- /tmp/wklog.17.old.14120 2009-05-26 21:52:06.000000000 +0300
+++ /tmp/wklog.17.new.14120 2009-05-26 21:52:06.000000000 +0300
@@ -1,11 +1,14 @@
<contents>
1. Conditions for removal
+1.1 Quick check if there are candidates
2. Removal operation properties
3. Removal operation
4. User interface
-5. Todo, issues to resolve
-5.1 To resolve
-5.2 Resolved
+5. Tests and benchmarks
+6. Todo, issues to resolve
+6.1 To resolve
+6.2 Resolved
+
</contents>
It's not really about elimination of tables, it's about elimination of inner
@@ -29,6 +32,18 @@
GROUP BY and HAVING do not refer to the inner tables of the outer join
nest.
+1.1 Quick check if there are candidates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Before we start to enumerate join nests, here is a quick way to check if
+there *can be* something to be removed:
+
+ if ((tables used in select_list |
+ tables used in group/order by UNION |
+ tables used in where) != bitmap_of_all_tables)
+ {
+ attempt table elimination;
+ }
+
2. Removal operation properties
-------------------------------
* There is always one way to remove (no choice to remove either this or that)
@@ -56,22 +71,24 @@
-----------------
* We'll add an @@optimizer switch flag for table elimination. Tentative
name: 'table_elimination'.
+ (Note ^^ utility of the above questioned ^, as table elimination can never
+ be worse than no elimination. We're leaning towards not adding the flag)
-* With EXPLAIN, there are two options:
- - Show removed tables in a way similar to const tables, with some
- indication that they are removed.
- - Do not show them altogether.
-(the second one seems to be better? We're targeting a situation with VIEWs,
-where the user would not care about what tables were added into his query
-and then discarded from it?)
+* EXPLAIN will not show the removed tables at all. This will allow to check
+ if tables were removed, and also will behave nicely with anchor model and
+ VIEWs: stuff that user doesn't care about just won't be there.
+
+5. Tests and benchmarks
+-----------------------
+Should create a benchmark in sql-bench which checks if the dbms has table
+elimination.
+TODO elaborate
-5. Todo, issues to resolve
+6. Todo, issues to resolve
--------------------------
-5.1 To resolve
+6.1 To resolve
~~~~~~~~~~~~~~
-- See EXPLAIN question in section #4.
-
- Re-check how this works with equality propagation.
- Relationship with prepared statements.
@@ -87,7 +104,7 @@
that we'll meet outer joins which have N inner tables of which some are 1-row
MyISAM tables that do not have primary key.
-5.2 Resolved
+6.2 Resolved
~~~~~~~~~~~~
- outer->inner join conversion is not a problem for table elimination.
We make outer->inner conversions based on predicates in WHERE. If the WHERE
-=-=(Guest - Fri, 22 May 2009, 17:23)=-=-
High-Level Specification modified.
--- /tmp/wklog.17.old.30851 2009-05-22 17:23:38.000000000 +0300
+++ /tmp/wklog.17.new.30851 2009-05-22 17:23:38.000000000 +0300
@@ -6,7 +6,7 @@
elimination but not to the same extent.
Basically, what table elimination does, is to remove tables from the
-execution plan when it is unneccessary to include them. This can, of
+execution plan when it is unnecessary to include them. This can, of
course, only happen if the right circumstances arise. Let us for example
look at the following query:
@@ -22,30 +22,26 @@
When using A as the left table we ensure that the query will return at
least as many rows as there are in that table. For rows where the join
condition (B.id = A.id) is not met the selected column (A.colA) will
-contain a NULL value.
+still contain it's original value. The not seen B.* row would contain all NULL:s.
However, the result set could actually contain more rows than what is
found in tableA if there are duplicates of the column B.id in tableB. If
-A
-contains a row [1, "val1"] and B the rows [1, "other1a"],[1, "other1b"]
+A contains a row [1, "val1"] and B the rows [1, "other1a"],[1, "other1b"]
then two rows will match in the join condition. The only way to know
-what
-the result will look like is to actually touch both tables during
+what the result will look like is to actually touch both tables during
execution.
Instead, let's say that tableB contains rows that make it possible to
place a unique constraint on the column B.id, for example and often the
case a primary key. In this situation we know that we will get exactly
-as
-many rows as there are in tableA, since joining with tableB cannot
+as many rows as there are in tableA, since joining with tableB cannot
introduce any duplicates. If further, as in the example query, we do not
select any columns from tableB, touching that table during execution is
-unneccessary. We can remove the whole join operation from the execution
+unnecessary. We can remove the whole join operation from the execution
plan.
Both SQL Server 2005/2008 and Oracle 11g will deploy table elimination
-in
-the case described above. Let us look at a more advanced query, where
+in the case described above. Let us look at a more advanced query, where
Oracle fails.
select
-=-=(Guest - Fri, 22 May 2009, 17:00)=-=-
Version updated.
--- /tmp/wklog.17.old.30176 2009-05-22 17:00:35.000000000 +0300
+++ /tmp/wklog.17.new.30176 2009-05-22 17:00:35.000000000 +0300
@@ -1 +1 @@
-Maria-2.0
+Server-5.1
-=-=(Guest - Fri, 22 May 2009, 17:00)=-=-
Category updated.
--- /tmp/wklog.17.old.30162 2009-05-22 17:00:28.000000000 +0300
+++ /tmp/wklog.17.new.30162 2009-05-22 17:00:28.000000000 +0300
@@ -1 +1 @@
-Maria-Sprint
+Server-Sprint
-=-=(Guest - Fri, 22 May 2009, 17:00)=-=-
Version updated.
--- /tmp/wklog.17.old.30162 2009-05-22 17:00:28.000000000 +0300
+++ /tmp/wklog.17.new.30162 2009-05-22 17:00:28.000000000 +0300
@@ -1 +1 @@
-2.0
+9.x
-=-=(Guest - Fri, 22 May 2009, 17:00)=-=-
Version updated.
--- /tmp/wklog.17.old.30162 2009-05-22 17:00:28.000000000 +0300
+++ /tmp/wklog.17.new.30162 2009-05-22 17:00:28.000000000 +0300
@@ -1 +1 @@
-9.x
+Maria-2.0
-=-=(Guest - Fri, 22 May 2009, 17:00)=-=-
Category updated.
--- /tmp/wklog.17.old.30122 2009-05-22 17:00:02.000000000 +0300
+++ /tmp/wklog.17.new.30122 2009-05-22 17:00:02.000000000 +0300
@@ -1 +1 @@
-Server-RawIdeaBin
+Maria-Sprint
-=-=(Guest - Fri, 22 May 2009, 17:00)=-=-
Version updated.
--- /tmp/wklog.17.old.30122 2009-05-22 17:00:02.000000000 +0300
+++ /tmp/wklog.17.new.30122 2009-05-22 17:00:02.000000000 +0300
@@ -1 +1 @@
-Server-5.2
+2.0
-=-=(Guest - Fri, 22 May 2009, 17:00)=-=-
Status updated.
--- /tmp/wklog.17.old.30122 2009-05-22 17:00:02.000000000 +0300
+++ /tmp/wklog.17.new.30122 2009-05-22 17:00:02.000000000 +0300
@@ -1 +1 @@
-Un-Assigned
+Assigned
-=-=(Guest - Thu, 21 May 2009, 22:54)=-=-
Low Level Design modified.
--- /tmp/wklog.17.old.28468 2009-05-21 22:54:28.000000000 +0300
+++ /tmp/wklog.17.new.28468 2009-05-21 22:54:28.000000000 +0300
@@ -77,7 +77,7 @@
- Relationship with prepared statements.
On one hand, it's natural to desire to make table elimination a
once-per-statement operation, like outer->inner join conversion. We'll have
- to limit the applicablity by removing [MARK1] as that can change during
+ to limit the applicability by removing [MARK1] as that can change during
lifetime of the statement.
The other option is to do table elimination every time. This will require to
------------------------------------------------------------
-=-=(View All Progress Notes, 14 total)=-=-
http://askmonty.org/worklog/index.pl?tid=17&nolimit=1
DESCRIPTION:
Eliminate not needed tables from SELECT queries..
This will speed up some views and automatically generated queries.
Example:
CREATE TABLE B (id int primary key);
select
A.colA
from
tableA A
left outer join
tableB B
on
B.id = A.id;
In this case we can remove table B and the join from the query.
HIGH-LEVEL SPECIFICATION:
Here is an extended explanation of table elimination.
Table elimination is a feature found in some modern query optimizers, of
which Microsoft SQL Server 2005/2008 seems to have the most advanced
implementation. Oracle 11g has also been confirmed to use table
elimination but not to the same extent.
Basically, what table elimination does, is to remove tables from the
execution plan when it is unnecessary to include them. This can, of
course, only happen if the right circumstances arise. Let us for example
look at the following query:
select
A.colA
from
tableA A
left outer join
tableB B
on
B.id = A.id;
When using A as the left table we ensure that the query will return at
least as many rows as there are in that table. For rows where the join
condition (B.id = A.id) is not met the selected column (A.colA) will
still contain it's original value. The not seen B.* row would contain all NULL:s.
However, the result set could actually contain more rows than what is
found in tableA if there are duplicates of the column B.id in tableB. If
A contains a row [1, "val1"] and B the rows [1, "other1a"],[1, "other1b"]
then two rows will match in the join condition. The only way to know
what the result will look like is to actually touch both tables during
execution.
Instead, let's say that tableB contains rows that make it possible to
place a unique constraint on the column B.id, for example and often the
case a primary key. In this situation we know that we will get exactly
as many rows as there are in tableA, since joining with tableB cannot
introduce any duplicates. If further, as in the example query, we do not
select any columns from tableB, touching that table during execution is
unnecessary. We can remove the whole join operation from the execution
plan.
Both SQL Server 2005/2008 and Oracle 11g will deploy table elimination
in the case described above. Let us look at a more advanced query, where
Oracle fails.
select
A.colA
from
tableA A
left outer join
tableB B
on
B.id = A.id
and
B.fromDate = (
select
max(sub.fromDate)
from
tableB sub
where
sub.id = A.id
);
In this example we have added another join condition, which ensures
that we only pick the matching row from tableB having the latest
fromDate. In this case tableB will contain duplicates of the column
B.id, so in order to ensure uniqueness the primary key has to contain
the fromDate column as well. In other words the primary key of tableB
is (B.id, B.fromDate).
Furthermore, since the subselect ensures that we only pick the latest
B.fromDate for a given B.id we know that at most one row will match
the join condition. We will again have the situation where joining
with tableB cannot affect the number of rows in the result set. Since
we do not select any columns from tableB, the whole join operation can
be eliminated from the execution plan.
SQL Server 2005/2008 will deploy table elimination in this situation as
well. We have not found a way to make Oracle 11g use it for this type of
query. Queries like these arise in two situations. Either when you have
denormalized model consisting of a fact table with several related
dimension tables, or when you have a highly normalized model where each
attribute is stored in its own table. The example with the subselect is
common whenever you store historized/versioned data.
LOW-LEVEL DESIGN:
<contents>
1. Conditions for removal
1.1 Quick check if there are candidates
2. Removal operation properties
3. Removal operation
4. User interface
5. Tests and benchmarks
6. Todo, issues to resolve
6.1 To resolve
6.2 Resolved
</contents>
It's not really about elimination of tables, it's about elimination of inner
sides of outer joins.
1. Conditions for removal
-------------------------
We can eliminate an inner side of outer join if:
1. For each record combination of outer tables, it will always produce
exactly one record.
2. There are no references to columns of the inner tables anywhere else in
the query.
#1 means that every table inside the outer join nest either:
- is a constant table:
  = because it can be accessed via eq_ref(const) access, or
  = because it is a zero-rows or one-row MyISAM-like table [MARK1]
- or has an eq_ref access method candidate.
#2 means that the WHERE clause, the ON clauses of embedding outer joins,
ORDER BY, GROUP BY and HAVING do not refer to the inner tables of the outer
join nest.
1.1 Quick check if there are candidates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before we start to enumerate join nests, here is a quick way to check if
there *can be* something to be removed:
if ((tables used in select_list |
     tables used in group/order by UNION |
     tables used in where) != bitmap_of_all_tables)
{
  attempt table elimination;
}
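For illustration, here is a minimal C++ sketch of this pre-check, assuming the
used-table sets have already been collected into table_map-style bitmaps (the
server's table_map is a ulonglong bitmap with one bit per table); the function
and parameter names below are made up for the sketch, not actual server
symbols:

typedef unsigned long long table_map;   /* one bit per table */

/* Sketch: true if some table is not referenced in the select list,
   GROUP/ORDER BY or WHERE, i.e. it is worth attempting table elimination. */
static bool may_have_elimination_candidates(table_map select_list_tables,
                                            table_map group_order_tables,
                                            table_map where_tables,
                                            table_map all_tables)
{
  table_map referenced= select_list_tables | group_order_tables | where_tables;
  return referenced != all_tables;
}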
2. Removal operation properties
-------------------------------
* There is always one way to remove (no choice of removing either this or that).
* It is always better to remove as many tables as possible (at least within
  our cost model).
Thus, there is no need for any cost calculations/etc. It's an unconditional rewrite.
3. Removal operation
--------------------
* Remove the outer join nest's nested join structure (i.e. get the
outer join's TABLE_LIST object $OJ and remove it from $OJ->embedding,
$OJ->embedding->nested_join. Update table_map's of all ancestor nested
joins). [MARK2]
* Move the tables and their JOIN_TABs to the front, like it is done with
  const tables, with the exception that if the eliminated outer join nest was
  within another outer join nest, that shouldn't prevent us from moving the
  eliminated tables away.
* Update join->table_count and all-join-tables bitmap.
* That's it. Nothing else?
4. User interface
-----------------
* We'll add an @@optimizer switch flag for table elimination. Tentative
  name: 'table_elimination'.
  (Note: the utility of the above is questionable, as table elimination can
  never be worse than no elimination. We're leaning towards not adding the flag.)
* EXPLAIN will not show the removed tables at all. This will make it possible
  to check whether tables were removed, and will also behave nicely with the
  anchor model and VIEWs: stuff that the user doesn't care about just won't
  be there.
5. Tests and benchmarks
-----------------------
We should create a benchmark in sql-bench which checks whether the DBMS has
table elimination.
TODO: elaborate.
6. Todo, issues to resolve
--------------------------
6.1 To resolve
~~~~~~~~~~~~~~
- Re-check how this works with equality propagation.
- Relationship with prepared statements.
On one hand, it's natural to want to make table elimination a
once-per-statement operation, like outer->inner join conversion. We'll have
to limit the applicability by removing [MARK1], as that can change during the
lifetime of the statement.
The other option is to do table elimination every time. This will require
reworking operation [MARK2] to be undoable.
I'm leaning towards doing the former. With anchor modeling, it is unlikely
that we'll meet outer joins which have N inner tables of which some are 1-row
MyISAM tables that do not have a primary key.
6.2 Resolved
~~~~~~~~~~~~
- outer->inner join conversion is not a problem for table elimination.
We make outer->inner conversions based on predicates in the WHERE clause. If
the WHERE referred to an inner table (a requirement for OJ->IJ conversion),
then table elimination would not be applicable anyway.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] New (by Monty): Remove all warnings (25)
by worklog-noreply@askmonty.org 26 May '09
by worklog-noreply@askmonty.org 26 May '09
26 May '09
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Remove all warnings
CREATION DATE..: Tue, 26 May 2009, 19:22
SUPERVISOR.....: Monty
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Server-BackLog
TASK ID........: 25 (http://askmonty.org/worklog/?tid=25)
VERSION........: Server-5.1
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 20 (hours remain)
ORIG. ESTIMATE.: 20
PROGRESS NOTES:
DESCRIPTION:
Remove all compiler warnings from MariaDB:
- Remove all current warnings
- Add flag 'abort on compiler error'
- Add flags to get more compiler errors
Drizzle has already done this, so we should be able to get hints from their code
for some of the fixes and the compiler warning flags
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Guest): index_merge: non-ROR intersection (21)
by worklog-noreply@askmonty.org 26 May '09
by worklog-noreply@askmonty.org 26 May '09
26 May '09
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: index_merge: non-ROR intersection
CREATION DATE..: Thu, 21 May 2009, 21:32
SUPERVISOR.....: Knielsen
IMPLEMENTOR....:
COPIES TO......: Psergey
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 21 (http://askmonty.org/worklog/?tid=21)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Guest - Tue, 26 May 2009, 14:04)=-=-
High-Level Specification modified.
--- /tmp/wklog.21.old.1802 2009-05-26 14:04:57.000000000 +0300
+++ /tmp/wklog.21.new.1802 2009-05-26 14:04:57.000000000 +0300
@@ -1,4 +1,3 @@
-
<contents>
1. Execution
1.1 Temptable
@@ -30,6 +29,8 @@
1.1 Temptable
-------------
+[ This is our strategy of choice at the moment]
+
Use a temporary heap-grow-out-to-myisam table with a primary key:
create table temp_table (
@@ -168,3 +169,8 @@
a subset of columns covered by all other indexes.
= (TODO any other rules?)
+- Correlation across selectivities. If there is a condition
+
+ "cond(key1) AND cond(key2) AND ... AND cond(keyN)",
+
+ can we consider satisfaction of AND-parts to be independent?
-=-=(Psergey - Thu, 21 May 2009, 21:33)=-=-
High-Level Specification modified.
--- /tmp/wklog.21.old.25705 2009-05-21 21:33:02.000000000 +0300
+++ /tmp/wklog.21.new.25705 2009-05-21 21:33:02.000000000 +0300
@@ -1 +1,170 @@
+<contents>
+1. Execution
+1.1 Temptable
+1.1.1 Improvement
+1.2 Produce/merge sorted streams
+1.3 Extend Unique class to handle intersection
+1.4 Strategies that do not seem to be useful
+1.4.1 Remove matches after having produced an ordered stream
+1.4.2 Sparse rowid bitmaps
+2. Optimization
+
+</contents>
+
+1. Execution
+============
+
+The primary task is to find means to compute an intersection of N unordered
+streams. Besides general memory/cpu cost of computation, we consider:
+
+- whether the produced rowid stream is ordered. If it is, it can be piped
+ into index_merge/intersect (as opposed to sort-intersect)
+
+- whether the strategy can take advantage of the fact that some input streams
+ are already rowid-ordered
+
+- startup cost (cost of producing the first output record)
+
+We see the following possible strategies:
+
+1.1 Temptable
+-------------
+Use a temporary heap-grow-out-to-myisam table with a primary key:
+
+create table temp_table (
+ rowid binary($rowid_size),
+ count n,
+ primary key(rowid);
+);
+
+Then use this algorithm:
+
+ i1= {index with the least E(#records)};
+
+ for each record R in range_scan(i1)
+ temp_table.insert(R.rowid, count=1);
+
+ for each index idx except i1
+ {
+ for each R record in scan(idx) // (INNER-LOOP)
+ {
+ if (temp_table has R)
+ temptable[R].count++;
+ }
+ }
+
+ // The following loop can do ordered or unordered scan
+ // if we want it to be ordered scan, we probably better arrange so that
+ // 'count' column is part of the index.
+ for each record R in temp_table
+ {
+ if (R.count == number_of_streams)
+ emit(R.rowid);
+ }
+
+The algorithm has an option to emit an ordered rowid stream.
+
+In the above form, the cost to produce the first record is high. It's easy to
+adjust the algorithm to make it low - we'll need to just start scanning all
+indexes at once, and finish as soon as we got a full match, i.e. the
+
+ temptable[R].count++
+
+operation resulted in the counter being equal to the number of merged scans.
+
+1.1.1 Improvement
+~~~~~~~~~~~~~~~~~
+When running INNER-LOOP, we could count how many times we've done the
+"count++" operation. If it has been done #records-in-temptable times, that
+means that all further records will not have matches and we can finish the
+scan, i.e. break out of the INNER-LOOP.
+
+1.2 Produce/merge sorted streams
+--------------------------------
+For each of the merged scan, use filesort-like action to end up with an
+ordered stream of rowids. Then merge the ordered streams.
+
+By filesort-like action we mean
+ - Run over index, collect rowids in a buffer.
+ - When the buffer is full, sort it and dump into a temporary file.
+After the above we'll end up with a number of sorted buffers on disk. We can
+use mergebuff() function (it is part of filesort's functions) to produce one
+ordered sequence (i.e. array, which may be partially on disk) of rowids.
+
+Merging of ordered streams with help of priority queue is already implemented
+in QUICK_ROR_INTERSECT_SELECT. We'll need to substitute the
+
+ child_quick->get_next()
+
+call with a call to read rowid from an ordered sequence.
+
+1.3 Extend Unique class to handle intersection
+----------------------------------------------
+There is no point to use Unique object as a device that accumulates rowids of
+a single scan then produces them in sorted order. One could do the same faster
+with accumulating an array of rowids and then sorting it.
+
+It's possible to use Unique object to collect/merge data from all scans though.
+The idea is as follows:
+
+- Unique should store <rowid, n_scans> pairs
+- Duplicates are pairs with the same rowid
+- Unique should try to avoid creating duplicates:
+ - don't add a duplicate into the in-memory part, instead combine two elements
+ together by adding their n_scans elements.
+ - combine duplicates when it sees them in Unique.get() call
+- The data we get from Unique.get() should be filtered, all records that have
+ n_scans != number_of_scans_being_merged should be discarded.
+
+If we're lucky to have started and finished a scan on some index (denote it
+as S) without flushing the Unique in the process, then:
+- there is no point in adding any new records into the Unique because their
+ absence in the Unique means that they don't have match in S and hence will
+ not get into the result of intersection.
+- we need to only update the counters to be able to tell if the elements that
+ are already in the Unique will have matches in all scans.
+
+1.4 Strategies that do not seem to be useful
+--------------------------------------------
+
+keeping them here so we don't consider them over and over
+
+1.4.1 Remove matches after having produced an ordered stream
+------------------------------------------------------------
+We can dump everything into a rowid stream and get it sorted. Then we read it,
+and if we see a rowid repeated $n_merged_scans times, it belongs to the
+intersection (pass to output), otherwise it doesn't (skip).
+This doesn't have any advantages over the produce/merge sorted streams
+approach.
+
+1.4.2 Sparse rowid bitmaps
+--------------------------
+Use Falcon-style rowid bitmaps. The problem with that is that Falcon's
+bitmaps assume there will always be enough memory to accommodate them.
+
+PostgreSQL makes bitmaps "loose" when they exceed certain size by remembering
+disk pages, not ids of individual records. It's hard for us to do something
+similar because our rowids are opaque entities whose meaning depends on the
+storage engines.
+
+This seems to require too much change to be worth it.
+
+2. Optimization
+===============
+
+SEL_TREE objects already represent intersections. The problems with
+optimizations are:
+
+- Cost formula(s)
+- When N keys/conditions are present:
+
+ "cond(key1) AND cond(key2) AND ... AND cond(keyN)",
+
+ somehow avoid considering (2^n - n) possible options.
+
+- Avoid producing (or even considering) apparently suboptimal plans:
+ = Don't generate a merge of indexes (I_1, ... I_n) where columns of I_n are
+ a subset of columns covered by all other indexes.
+ = (TODO any other rules?)
+
DESCRIPTION:
At the moment index_merge supports intersection only for rowid-ordered streams.
This translates into a limitation that index_merge/intersect can only be
constructed for equality conditions (t.keypart1=const1 AND t.keypart2=const2
AND ... ) and the equalities should cover all index components.
This WL entry is to lift this limitation by developing algorithms that do
intersection on non-ROR scans.
HIGH-LEVEL SPECIFICATION:
<contents>
1. Execution
1.1 Temptable
1.1.1 Improvement
1.2 Produce/merge sorted streams
1.3 Extend Unique class to handle intersection
1.4 Strategies that do not seem to be useful
1.4.1 Remove matches after having produced an ordered stream
1.4.2 Sparse rowid bitmaps
2. Optimization
</contents>
1. Execution
============
The primary task is to find means to compute an intersection of N unordered
streams. Besides general memory/cpu cost of computation, we consider:
- whether the produced rowid stream is ordered. If it is, it can be piped
into index_merge/intersect (as opposed to sort-intersect)
- whether the strategy can take advantage of the fact that some input streams
are already rowid-ordered
- startup cost (cost of producing the first output record)
We see the following possible strategies:
1.1 Temptable
-------------
[ This is our strategy of choice at the moment]
Use a temporary heap-grow-out-to-myisam table with a primary key:
create table temp_table (
  rowid binary($rowid_size),
  count n,
  primary key(rowid)
);
Then use this algorithm:
  i1= {index with the least E(#records)};

  for each record R in range_scan(i1)
    temp_table.insert(R.rowid, count=1);

  for each index idx except i1
  {
    for each record R in scan(idx) // (INNER-LOOP)
    {
      if (temp_table has R)
        temptable[R].count++;
    }
  }

  // The following loop can do an ordered or unordered scan;
  // if we want an ordered scan, we had better arrange for the
  // 'count' column to be part of the index.
  for each record R in temp_table
  {
    if (R.count == number_of_streams)
      emit(R.rowid);
  }
The algorithm has an option to emit an ordered rowid stream.
In the above form, the cost to produce the first record is high. It's easy to
adjust the algorithm to make it low: we just need to start scanning all
indexes at once, and finish as soon as we get a full match, i.e. as soon as a

  temptable[R].count++

operation leaves the counter equal to the number of merged scans.
1.1.1 Improvement
~~~~~~~~~~~~~~~~~
When running INNER-LOOP, we could count how many times we've done the
"count++" operation. If it has been done #records-in-temptable times, that
means that all further records will not have matches and we can finish the
scan, i.e. break out of the INNER-LOOP.
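To make the scheme above concrete, here is a minimal standalone C++ sketch of
the counting intersection, including the early-out from this section. An
std::unordered_map stands in for the heap/MyISAM temp table and the rowid
streams are plain in-memory vectors; all names are illustrative only:

#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

typedef uint64_t rowid_t;

/* Sketch only: intersect N unordered rowid streams by counting matches
   in a hash table (a stand-in for the temp table above). */
std::vector<rowid_t>
temptable_intersect(const std::vector<std::vector<rowid_t> > &streams)
{
  std::vector<rowid_t> result;
  if (streams.empty())
    return result;

  std::unordered_map<rowid_t, size_t> counts;
  for (rowid_t r : streams[0])            /* i1: stream with least E(#records) */
    counts[r]= 1;

  for (size_t i= 1; i < streams.size(); i++)
  {
    size_t bumped= 0;
    for (rowid_t r : streams[i])          /* INNER-LOOP */
    {
      auto it= counts.find(r);
      if (it != counts.end() && it->second == i)  /* matched all earlier streams */
      {
        it->second++;
        /* 1.1.1 improvement: once every temp-table row has been bumped in
           this pass, later records of this stream cannot match anything new. */
        if (++bumped == counts.size())
          break;
      }
    }
  }

  for (auto &p : counts)
    if (p.second == streams.size())       /* matched in every stream */
      result.push_back(p.first);
  return result;
}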
1.2 Produce/merge sorted streams
--------------------------------
For each of the merged scans, use a filesort-like action to end up with an
ordered stream of rowids. Then merge the ordered streams.
By filesort-like action we mean:
 - Run over the index, collect rowids in a buffer.
 - When the buffer is full, sort it and dump it into a temporary file.
After the above we'll end up with a number of sorted buffers on disk. We can
use the mergebuff() function (it is part of filesort's code) to produce one
ordered sequence (i.e. an array, which may be partially on disk) of rowids.
Merging of ordered streams with the help of a priority queue is already
implemented in QUICK_ROR_INTERSECT_SELECT. We'll need to substitute the

  child_quick->get_next()

call with a call that reads a rowid from an ordered sequence.
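As an illustration of the merge step, here is a small C++ sketch that merges
already-sorted rowid runs with a priority queue and keeps only rowids present
in every run, in the spirit of the priority-queue merge described above. It
assumes the runs are duplicate-free and held in memory as vectors (the real
code would read from the sort buffers/files instead), and all names are made
up for the sketch:

#include <cstddef>
#include <cstdint>
#include <queue>
#include <vector>

typedef uint64_t rowid_t;

/* Sketch only: k-way merge of rowid-sorted runs; emit the intersection. */
std::vector<rowid_t>
merge_sorted_intersect(const std::vector<std::vector<rowid_t> > &runs)
{
  struct Cursor { rowid_t rowid; size_t run, pos; };
  struct Cmp {
    bool operator()(const Cursor &a, const Cursor &b) const
    { return a.rowid > b.rowid; }              /* min-heap on rowid */
  };
  std::priority_queue<Cursor, std::vector<Cursor>, Cmp> pq;

  for (size_t i= 0; i < runs.size(); i++)
    if (!runs[i].empty())
      pq.push(Cursor{runs[i][0], i, 0});

  std::vector<rowid_t> result;
  rowid_t current= 0;
  size_t seen= 0;
  while (!pq.empty())
  {
    Cursor c= pq.top(); pq.pop();
    if (seen == 0 || c.rowid != current)
    { current= c.rowid; seen= 1; }              /* start a new rowid group */
    else
      seen++;
    if (seen == runs.size())
      result.push_back(current);                /* present in every run */
    if (c.pos + 1 < runs[c.run].size())
      pq.push(Cursor{runs[c.run][c.pos + 1], c.run, c.pos + 1});
  }
  return result;
}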
1.3 Extend Unique class to handle intersection
----------------------------------------------
There is no point in using a Unique object as a device that accumulates the
rowids of a single scan and then produces them in sorted order. One could do
the same faster by accumulating an array of rowids and then sorting it.
It's possible to use a Unique object to collect/merge data from all scans,
though. The idea is as follows:
- Unique should store <rowid, n_scans> pairs
- Duplicates are pairs with the same rowid
- Unique should try to avoid creating duplicates:
  - don't add a duplicate into the in-memory part; instead, combine the two
    elements by adding their n_scans values.
  - combine duplicates when it sees them in the Unique.get() call
- The data we get from Unique.get() should be filtered: all records that have
  n_scans != number_of_scans_being_merged should be discarded.
If we are lucky enough to have started and finished a scan on some index
(denote it as S) without flushing the Unique in the process, then:
- there is no point in adding any new records into the Unique, because their
  absence from the Unique means that they have no match in S and hence will
  not get into the result of the intersection.
- we only need to update the counters to be able to tell whether the elements
  that are already in the Unique will have matches in all scans.
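A small C++ sketch of the combine-and-filter step described above, assuming
the pairs come back from the Unique in rowid order; the pair type and function
name are made up for the example:

#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

typedef uint64_t rowid_t;
typedef std::pair<rowid_t, size_t> RowidCount;   /* <rowid, n_scans> */

/* Sketch only: combine duplicate pairs (same rowid) by adding their n_scans
   values, then keep only rowids matched by every merged scan. */
std::vector<rowid_t>
filter_unique_output(const std::vector<RowidCount> &sorted_pairs,
                     size_t n_merged_scans)
{
  std::vector<rowid_t> result;
  size_t i= 0;
  while (i < sorted_pairs.size())
  {
    rowid_t rowid= sorted_pairs[i].first;
    size_t total= 0;
    while (i < sorted_pairs.size() && sorted_pairs[i].first == rowid)
      total+= sorted_pairs[i++].second;          /* combine duplicates */
    if (total == n_merged_scans)
      result.push_back(rowid);
  }
  return result;
}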
1.4 Strategies that do not seem to be useful
--------------------------------------------
We keep them here so we don't reconsider them over and over.
1.4.1 Remove matches after having produced an ordered stream
------------------------------------------------------------
We can dump everything into a rowid stream and get it sorted. Then we read it,
and if we see a rowid repeated $n_merged_scans times, it belongs to the
intersection (pass to output), otherwise it doesn't (skip).
This doesn't have any advantages over the produce/merge sorted streams
approach.
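Though rejected, the idea is simple enough to show in a few lines; again only
a sketch over in-memory vectors, with illustrative names:

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

typedef uint64_t rowid_t;

/* Sketch only: concatenate all scans' rowids, sort, and keep rowids that
   repeat exactly n_merged_scans times. */
std::vector<rowid_t>
sort_then_filter(std::vector<rowid_t> all_rowids, size_t n_merged_scans)
{
  std::sort(all_rowids.begin(), all_rowids.end());
  std::vector<rowid_t> result;
  size_t i= 0;
  while (i < all_rowids.size())
  {
    size_t j= i;
    while (j < all_rowids.size() && all_rowids[j] == all_rowids[i])
      j++;                                       /* run of equal rowids */
    if (j - i == n_merged_scans)
      result.push_back(all_rowids[i]);
    i= j;
  }
  return result;
}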
1.4.2 Sparse rowid bitmaps
--------------------------
Use Falcon-style rowid bitmaps. The problem with that is that Falcon's
bitmaps assume there will always be enough memory to accommodate them.
PostgreSQL makes bitmaps "lossy" when they exceed a certain size by
remembering disk pages rather than ids of individual records. It's hard for
us to do something similar because our rowids are opaque entities whose
meaning depends on the storage engine.
This seems to require too much change to be worth it.
2. Optimization
===============
SEL_TREE objects already represent intersections. The problems with
optimizations are:
- Cost formula(s)
- When N keys/conditions are present:
"cond(key1) AND cond(key2) AND ... AND cond(keyN)",
somehow avoid considering (2^n - n) possible options.
- Avoid producing (or even considering) apparently suboptimal plans:
= Don't generate a merge of indexes (I_1, ... I_n) where columns of I_n are
a subset of columns covered by all other indexes.
= (TODO any other rules?)
- Correlation across selectivities. If there is a condition
"cond(key1) AND cond(key2) AND ... AND cond(keyN)",
can we consider satisfaction of AND-parts to be independent?
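To make the last question concrete: under the independence assumption, the
combined selectivity would simply be the product of the per-condition
selectivities, which can be badly off when the AND-parts are correlated. A
trivial C++ sketch of that assumption (illustrative only):

#include <cstddef>
#include <vector>

/* Sketch only: estimated fraction of rows satisfying
   cond(key1) AND ... AND cond(keyN), if the parts were independent. */
double combined_selectivity(const std::vector<double> &sel)
{
  double s= 1.0;
  for (std::size_t i= 0; i < sel.size(); i++)
    s*= sel[i];      /* correlation makes this too optimistic or pessimistic */
  return s;
}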
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Guest): index_merge optimizer: dont discard index_merge union strategies when range is available (24)
by worklog-noreply@askmonty.org 26 May '09
by worklog-noreply@askmonty.org 26 May '09
26 May '09
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: index_merge optimizer: dont discard index_merge union strategies when
range is available
CREATION DATE..: Tue, 26 May 2009, 12:10
SUPERVISOR.....: Monty
IMPLEMENTOR....: Psergey
COPIES TO......: Psergey
CATEGORY.......: Client-BackLog
TASK ID........: 24 (http://askmonty.org/worklog/?tid=24)
VERSION........: Benchmarks-3.0
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Guest - Tue, 26 May 2009, 13:27)=-=-
High-Level Specification modified.
--- /tmp/wklog.24.old.305 2009-05-26 13:27:32.000000000 +0300
+++ /tmp/wklog.24.new.305 2009-05-26 13:27:32.000000000 +0300
@@ -1 +1,70 @@
+(Not a ready HLS but draft)
+<contents>
+Solution overview
+Limitations
+TODO
+
+</contents>
+
+Solution overview
+=================
+The idea is to delay discarding potential index_merge plans until the point
+where it is really necessary.
+
+This way, we won't have to do much changes in the range analyzer, but will be
+able to keep potential index_merge plan just enough so that it's possible to
+take it into consideration together with range access plans.
+
+Since there are no changes in the optimizer, the ability to consider both
+range and index_merge options will be limited to WHERE clauses of this form:
+
+ WHERE := range_cond(key1_1) AND
+ range_cond(key2_1) AND
+ other_cond AND
+ index_merge_OR_cond1(key3_1, key3_2, ...)
+ index_merge_OR_cond2(key4_1, key4_2, ...)
+
+where
+
+ index_merge_OR_cond{N} := (range_cond(keyN_1) OR
+ range_cond(keyN_2) OR ...)
+
+
+ range_cond(keyX) := condition that allows to construct range access of keyX
+ and doesn't allow to construct range/index_merge accesses
+ for any keys of the table in question.
+
+
+For such WHERE clauses, the range analyzer will produce SEL_TREE of this form:
+
+ SEL_TREE(
+ range(key1_1),
+ ...
+ range(key2_1),
+ SEL_IMERGE( (1)
+ SEL_TREE(key3_1})
+ SEL_TREE(key3_2})
+ ...
+ )
+ ...
+ )
+
+which can be used to make a cost-based choice between range and index_merge.
+
+Limitations
+-----------
+This will not be a full solution in a sense that the range analyzer will not
+be able to produce sel_tree (1) if the WHERE clause is specified in other form
+(e.g. brackets were opened).
+
+TODO
+----
+* is it a problem if there are keys that are referred to both from
+ index_merge and from range access?
+
+* How strict is the limitation on the form of the WHERE?
+
+* TODO: The optimizer didn't compare costs of index_merge and range before (ok
+ it did but that was done for accesses to different tables). Will there be any
+ possible gotchas here?
DESCRIPTION:
The current range optimizer will discard possible index_merge/[sort]union
strategies when there is a possible range plan. This action is part of the
measures we take to avoid a combinatorial explosion of possible range/
index_merge strategies.
A bad side effect of this is that for WHERE clauses of the form
  t.key1= 'very-frequent-value' AND (t.key2='rare-value1' OR t.key3='rare-value2')
the optimizer will
 - discard union(key2,key3) in favor of range(key1)
 - consider the cost of using range(key1) and discard that plan as well
and the overall effect is that a possibly poor range access can cause a
possibly good index_merge access not to be considered.
This WL is about lifting this limitation, at least for some subset of WHERE
clauses.
HIGH-LEVEL SPECIFICATION:
(Not a finished HLS, just a draft)
<contents>
Solution overview
Limitations
TODO
</contents>
Solution overview
=================
The idea is to delay discarding potential index_merge plans until the point
where it is really necessary.
This way, we won't have to make many changes in the range analyzer, but will
be able to keep the potential index_merge plans around just long enough to
take them into consideration together with range access plans.
Since there are no changes in the optimizer, the ability to consider both
range and index_merge options will be limited to WHERE clauses of this form:
  WHERE := range_cond(key1_1) AND
           range_cond(key2_1) AND
           other_cond AND
           index_merge_OR_cond1(key3_1, key3_2, ...) AND
           index_merge_OR_cond2(key4_1, key4_2, ...)

where

  index_merge_OR_cond{N} := (range_cond(keyN_1) OR
                             range_cond(keyN_2) OR ...)

  range_cond(keyX) := a condition that allows constructing range access on
                      keyX and doesn't allow constructing range/index_merge
                      accesses for any other keys of the table in question.
For such WHERE clauses, the range analyzer will produce a SEL_TREE of this form:

  SEL_TREE(
    range(key1_1),
    ...
    range(key2_1),
    SEL_IMERGE(          (1)
      SEL_TREE(key3_1)
      SEL_TREE(key3_2)
      ...
    )
    ...
  )
which can be used to make a cost-based choice between range and index_merge.
Limitations
-----------
This will not be a full solution, in the sense that the range analyzer will
not be able to produce sel_tree (1) if the WHERE clause is written in another
form (e.g. with the brackets expanded).
TODO
----
* Is it a problem if there are keys that are referred to both from
  index_merge and from range access?
* How strict is the limitation on the form of the WHERE?
* TODO: The optimizer didn't compare costs of index_merge and range before
  (OK, it did, but that was done for accesses to different tables). Will
  there be any possible gotchas here?
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0