I wrote a statement that takes almost an hour to run, so I am asking for help to make it faster. So here we go:
I am making an inner join of two tables:
I have many time intervals represented in intervals, and I want to get measure data from measures only within those intervals.
intervals: has two columns, one with the starting time and the other with the ending time of the interval (number of rows = 1295)
measures: has two columns, one with the measure and the other with the time the measure was taken (number of rows = one million)
The result I want is a table with the measure in the first column, then the time the measure was taken, then the begin/end times of the containing interval (the interval would be repeated for every row whose time falls within that range).
Here is my code:
select measures.measure as measure,
       measures.time as time,
       intervals.entry_time as entry_time,
       intervals.exit_time as exit_time
from intervals
inner join measures
  on intervals.entry_time <= measures.time
 and measures.time <= intervals.exit_time
order by time asc
Thanks
You're pretty much going to get most of the rows from both tables in this case, plus you've got a sort.
The question is, does the calling process really need all the rows, or just the first few? This would change how I'd go about optimising the query.
I'll assume your calling process wants ALL the rows. Since the join predicate is not on an equality, I'd say a MERGE JOIN may be the best approach to aim for. A merge join requires its data sources to be sorted, so if we can avoid a sort the query should run as fast as it possibly can (barring more interesting approaches such as specialised indexes or materialized views).
To avoid the SORT operations on intervals and measures, you could add indexes on (measures.time, measures.measure) and (intervals.entry_time, intervals.exit_time). The database can use the index to avoid a sort, and it'll be faster because it doesn't have to visit any table blocks.

Alternatively, if you only have an index on measures.time, the query may still run OK without adding another big index - it'll run slower, though, because it'll probably have to read many table blocks to get the measures.measure values for the SELECT clause.
To summarise: your query is running against the full set of MEASURES. It matches the time of each MEASURES record to an INTERVALS record. If the window of times spanned by INTERVALS is roughly similar to the window spanned by MEASURES, then your query is also running against the full set of INTERVALS; otherwise it is running against a subset.
Why that matters is that it reduces your scope for tuning, as a full table scan is likely to be the fastest way of getting all the rows. So, unless your real MEASURES or INTERVALS tables have a lot more columns than you describe, it is unlikely that any indexes will give much advantage.
The possible strategies are different indexing schemes on MEASURES and INTERVALS, and parallel query.
I'm not going to present test cases for all the permutations, because the results are pretty much as we would expect.
Here is the test data. As you can see, I'm using slightly larger data sets. The INTERVALS window is bigger than the MEASURES window, but not by much. The intervals are 10000 seconds wide, and the measures are taken every 15 seconds.
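The original script was truncated, so here is a sketch of the kind of set-up described: non-overlapping 10000-second intervals and one measure every 15 seconds, assuming times are held as plain numbers of seconds. Row counts and any names not mentioned in the post are guesses.

create table intervals (entry_time number, exit_time number);
create table measures  (ts number, measure number);

-- 1600 non-overlapping intervals, each 10000 seconds wide
insert into intervals (entry_time, exit_time)
select (level - 1) * 10000, level * 10000 - 1
from dual
connect by level <= 1600;

-- one measure every 15 seconds, one million rows
insert into measures (ts, measure)
select (level - 1) * 15, dbms_random.value(0, 100)
from dual
connect by level <= 1000000;

commit;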
NB: In my test data I have presumed that INTERVAL records do not overlap. This has an important corollary: a MEASURES record joins to only one INTERVAL record.
Benchmark
Here is the benchmark with no indexes.
MEASURES tests
Now let's build a unique index on INTERVALS (ENTRY_TIME, EXIT_TIME) and try out the various indexing strategies for MEASURES. First up, an index on the MEASURES time column only.
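A sketch of the DDL meant here (names are made up; since a primary key on both INTERVALS columns is dropped later in the answer, a primary key constraint, which is backed by a unique index, is assumed):

alter table intervals add constraint intervals_pk primary key (entry_time, exit_time);
create index measures_ts_i on measures (ts);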
Now, let us index the MEASURES TIME and MEASURE columns together.
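Hypothetical DDL for this step, continuing with the made-up names above:

drop index measures_ts_i;
create index measures_ts_measure_i on measures (ts, measure);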
Now with no index on MEASURES (but still an index on INTERVALS)
So what difference does parallel query make?
MEASURES Conclusion
Not much difference in the elapsed time for the different indexes. I was slightly surprised that building an index on MEASURES (TS, MEASURE) resulted in a full table scan and a somewhat slower execution time. On the other hand, it is no surprise that running in parallel query is much faster. So if you have Enterprise Edition and you have the CPUs to spare, using PQ will definitely reduce the elapsed time, although it won't change the resource costs much (and actually does a lot more sorting).
INTERVALS tests
So what difference might the various indexes on INTERVALS make? In the following tests we will retain an index on MEASURES (TS). First of all we will drop the primary key on both INTERVALS columns and replace it with a constraint on INTERVALS (ENTRY_TIME) only.
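Again a sketch; the original statements were lost, and whether the replacement constraint was a unique or primary key constraint is not recoverable (a primary key is assumed here, with made-up names):

alter table intervals drop primary key;
alter table intervals add constraint intervals_entry_pk primary key (entry_time);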
Lastly with no index on INTERVALS at all
INTERVALS conclusion
The index on INTERVALS makes a slight difference. That is, indexing (ENTRY_TIME, EXIT_TIME) results in a faster execution. This is because it permits a fast full index scan rather than a full table scan. This would be more significant if the time window delineated by INTERVALS was considerably wider than that of MEASURES.
Overall Conclusions
Because we are doing full table queries, none of the indexes substantially changed the execution time. So if you have Enterprise Edition and multiple CPUs, Parallel Query will give you the best results. Otherwise the best indexes would be INTERVALS(ENTRY_TIME, EXIT_TIME) and MEASURES(TS). The Nested Loops solution is definitely faster than Parallel Query - see Edit 4 below. If you were running against a subset of MEASURES (say a week's worth) then the presence of indexes would have a bigger impact; it is likely that the two I recommended in the previous paragraph would remain the most effective.
Last observation: I ran this on a bog standard dual core laptop with an SGA of just 512M. Yet all of my queries took less than six minutes. If your query really takes an hour then your database has some serious problems, although the long running time could be an artefact of overlapping INTERVALS, which could result in a Cartesian product.
Edit
Originally I included the output from the execution plans and statistics, but alas SO severely truncated my post. So I have rewritten it without them. Those who wish to validate my findings will have to run the queries themselves.
Edit 4 (previous edits removed for reasons of space)
At the third attempt I have been able to reproduce the performance improvement for Quassnoi's solution.
So Nested Loops are definitely the way to go.
Useful lessons from the exercise
Not knowing what database system and version you are on, I'd say that (lack of) indexing and the join clause could be causing the problem.
For every record in the measures table, you can have multiple records in the intervals table (intervals.entry_time <= measures.time), and for every record in the intervals table, you can have multiple records in measures (measures.time <= intervals.exit_time). The resulting one-to-many and many-to-one relationships caused by the join mean multiple table scans for each record. I doubt that Cartesian product is the correct term, but it's pretty close.

Indexing would definitely help, but it would help even more if you could find a better key to join the two tables. Having the one-to-many relationships going in one direction only would definitely speed up the processing, as it wouldn't have to scan each table/index twice for each record.
Try a parallel query.
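The hint this answer refers to was lost in extraction; a sketch of what is presumably meant (the degree of 4 is arbitrary):

select /*+ parallel(m 4) parallel(i 4) */
       m.measure, m.time, i.entry_time, i.exit_time
from intervals i
inner join measures m
  on m.time between i.entry_time and i.exit_time
order by m.time;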
You could also create a materialized view, perhaps with the parallel hint above. It may take a long time to create the MV but once created it can be queried repeatedly.
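A possible shape for such a materialized view - an illustration, not the original poster's code; the name and refresh options are assumptions:

create materialized view measure_intervals_mv
build immediate
as
select /*+ parallel(m 4) parallel(i 4) */
       m.measure, m.time, i.entry_time, i.exit_time
from intervals i
inner join measures m
  on m.time between i.entry_time and i.exit_time;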
You can't really optimize your statement - it's pretty simple as it is.
What you could do is investigate if some indices would help you.
You're selecting on intervals.entry_time, intervals.exit_time, and measures.time - are those columns indexed?

This is quite a common problem.
Plain B-Tree indexes are not good for queries like this. An index is good for searching the values within given bounds, but not for searching the bounds containing a given value, as the sketch below illustrates.
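An illustration of the two predicate shapes (the bind-variable names are arbitrary):

-- an index on measures(time) works well for values within given bounds:
select * from measures
where time between :lower_bound and :upper_bound;   -- index range scan

-- but not for finding the bounds that contain a given value:
select * from intervals
where :some_time between entry_time and exit_time;  -- only one side of the
                                                    -- predicate can use the index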
This article in my blog explains the problem in more detail (the nested sets model deals with a similar type of predicate).
You can make the index on time; this way intervals will be leading in the join, and the ranged time will be used inside the nested loops. This will require sorting on time.

You can create a spatial index on intervals (available in MySQL using MyISAM storage) that would include start and end in one geometry column. This way, measures can lead in the join and no sorting will be needed. Spatial indexes, however, are slower, so this will only be efficient if you have few measures but many intervals.
Since you have few intervals but many measures, just make sure you have an index on measures.time:
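The statement that originally followed was lost; presumably something like this (the index name is illustrative):

create index ix_measures_time on measures (time);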
Update:
Here's a sample script to test:
One query uses NESTED LOOPS and returns in 1.7 seconds; the other uses a MERGE JOIN and I had to stop it after 5 minutes.

Update 2:
You will most probably need to force the engine to use the correct table order in the join using a hint like this:
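The hint itself was lost; here is a sketch of the kind of hint presumably meant, using Oracle's LEADING and USE_NL hints to drive the join from intervals (the aliases are assumptions):

select /*+ leading(i m) use_nl(m) */
       m.measure, m.time, i.entry_time, i.exit_time
from intervals i
inner join measures m
  on m.time between i.entry_time and i.exit_time
order by m.time;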
Oracle's optimizer is not smart enough to see that the intervals do not intersect. That's why it will most probably use measures as the leading table (which would be a wise decision should the intervals intersect).

Update 3:
This query splits the time axis into ranges and uses a HASH JOIN to join the measures and the intervals on the range values, with fine filtering later.
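The query itself was lost; the sketch below illustrates the general idea under assumptions not in the original: times stored as numeric seconds, a bucket width of 10000 seconds, and no interval wider than one bucket.

with buckets as (
  -- expand each interval over the (at most two) 10000-second buckets it touches
  select i.entry_time,
         i.exit_time,
         floor(i.entry_time / 10000) + (lvl.n - 1) as bucket
  from intervals i
       cross join (select level as n from dual connect by level <= 2) lvl
  where floor(i.entry_time / 10000) + (lvl.n - 1) <= floor(i.exit_time / 10000)
)
select /*+ use_hash(m b) */
       m.measure, m.time, b.entry_time, b.exit_time
from measures m
     join buckets b
       on floor(m.time / 10000) = b.bucket                -- equality predicate for the hash join
      and m.time between b.entry_time and b.exit_time     -- fine filtering
order by m.time;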
See this article in my blog for more detailed explanations on how it works.