This is a follow-up based on this question about query optimization.
To make the selection fast, as suggested, I tried to pre-compute some data at insertion time using a trigger.
Basically, I want to keep track of the number of occurrences of a given column's value in a given table.
The following schema is used to store the occurrences for each of the values:
CREATE TABLE valuecount (value text, count int);
CREATE INDEX countidx ON valuecount (count DESC);
CREATE UNIQUE INDEX valueidx ON valuecount (value);
And here is the trigger I've been testing with:
CREATE TRIGGER counttrigger AFTER INSERT ON collection
FOR EACH ROW
BEGIN
    INSERT OR REPLACE INTO valuecount VALUES
        (NEW.value,
         coalesce((SELECT count + 1 FROM valuecount WHERE value = NEW.value), 1));
END;
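For reference, here is a minimal, self-contained sketch of the counting behavior I'm after, using Python's sqlite3 to drive an in-memory database (the table and column names mirror my schema above; the test data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE collection (value text);
CREATE TABLE valuecount (value text, count int);
CREATE UNIQUE INDEX valueidx ON valuecount (value);
CREATE TRIGGER counttrigger AFTER INSERT ON collection
FOR EACH ROW
BEGIN
    -- The unique index on value makes OR REPLACE overwrite the old count row.
    INSERT OR REPLACE INTO valuecount VALUES
        (NEW.value,
         coalesce((SELECT count + 1 FROM valuecount WHERE value = NEW.value), 1));
END;
""")
# Insert three rows; the trigger fires once per row.
conn.executemany("INSERT INTO collection VALUES (?)", [("a",), ("b",), ("a",)])
print(dict(conn.execute("SELECT value, count FROM valuecount")))  # {'a': 2, 'b': 1}
```

With the unique index in place, the `INSERT OR REPLACE` replaces the existing counter row for a value rather than adding a duplicate.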
A typical insertion into the collection
table contains a few hundred thousand rows, in a single INSERT INTO ... SELECT ...
statement (I'm merging data from one database into another).
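The merge itself looks roughly like this sketch (again via sqlite3, with assumed file and table names; the real databases are on disk and much larger):

```python
import sqlite3, os, tempfile

# Build a throwaway "source" database to merge from (names are assumptions).
src_path = os.path.join(tempfile.mkdtemp(), "source.db")
src = sqlite3.connect(src_path)
src.execute("CREATE TABLE collection (value text)")
src.executemany("INSERT INTO collection VALUES (?)", [("x",), ("y",)])
src.commit()
src.close()

# Destination database: attach the source and copy everything in a single
# INSERT INTO ... SELECT ... statement, as described above.
dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE collection (value text)")
dst.execute("ATTACH DATABASE ? AS src", (src_path,))
dst.execute("INSERT INTO collection SELECT value FROM src.collection")
print(dst.execute("SELECT count(*) FROM collection").fetchone()[0])  # 2
```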
The problem is that with this trigger, inserting 300K rows takes 4 hours instead of 300 ms.
Is there a way to achieve this without sacrificing performance?
Thanks in advance for your help.