I want to write a query using Postgres and PostGIS. I'm also using Rails with rgeo, rgeo-activerecord and activerecord-postgis-adapter, but the Rails stuff is rather unimportant.
The table structure:
measurement
- int id
- int anchor_id
- Point groundtruth
- data (not important for the query)
Example data:
id | anchor_id | groundtruth | data
-----------------------------------
1 | 1 | POINT(1 4) | ...
2 | 3 | POINT(1 4) | ...
3 | 2 | POINT(1 4) | ...
4 | 3 | POINT(1 4) | ...
-----------------------------------
5 | 2 | POINT(3 2) | ...
6 | 4 | POINT(3 2) | ...
-----------------------------------
7 | 1 | POINT(4 3) | ...
8 | 1 | POINT(4 3) | ...
9 | 1 | POINT(4 3) | ...
10 | 5 | POINT(4 3) | ...
11 | 3 | POINT(4 3) | ...
This table is some kind of manually created view for faster lookups (with millions of rows). Otherwise we'd have to join 8 tables and it would get even slower. But that's not part of the problem.
Simple version:
Parameters:
- Point p
- int d
What the query should do:
1. The query looks for all groundtruth Points which have a distance < d from Point p. SQL for that is pretty easy: WHERE st_distance(groundtruth, p) < d
2. Now we have a list of groundtruth points with their anchor_ids. As you can see in the table above, it is possible to have multiple identical groundtruth-anchor_id tuples. For example: anchor_id=3 and groundtruth=POINT(1 4).
3. Next I'd like to eliminate the identical tuples, by choosing one of them randomly(!). Why not simply take the first? Because the data column is different.
Choosing a random row in SQL: SELECT ... ORDER BY RANDOM() LIMIT 1
My problem with all of this is: I can imagine a solution using SQL LOOPs and lots of subqueries, but there's for sure a solution using GROUP BY or some other methods which will make it faster.
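To illustrate, a rough sketch of the behaviour I'm after for the simple version (p and d stand for the input parameters; this is just a sketch, probably not the fastest way):

SELECT DISTINCT ON (anchor_id, groundtruth)
       *                                     -- keep one row per identical tuple
FROM   measurement
WHERE  st_distance(groundtruth, p) < d       -- step 1: distance filter
ORDER  BY anchor_id, groundtruth, random();  -- step 3: pick that one row randomly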
Full version:
Basically the same as above with one difference: the input parameters change:
- lots of Points p1 ... p312456345
- still one d
If the simple query is working, this could be done using a LOOP in SQL. But maybe there is a better (and faster) solution, because the database is really huge!
Solution
WITH ps AS (SELECT unnest(p_array) AS p)
SELECT DISTINCT ON (anchor_id, groundtruth)
*
FROM measurement m, ps
WHERE EXISTS (
SELECT 1
FROM ps
WHERE st_distance(m.groundtruth, ps.p) < d
)
ORDER BY anchor_id, groundtruth, random();
Thanks to Erwin Brandstetter!
I now cracked it, but the query is pretty slow...
My test database contains 22,000 rows, and with two input values it takes about 700 ms. In the end there can be hundreds of input values :-/
NEW: Actual result:
So at the moment the result looks like this. I get:
- the anchor_id
- the corresponding groundtruth part of the tuple
- the ids corresponding to the groundtruth-anchor_id relation

Remember:
- one groundtruth value can have multiple identical anchor_ids
- every groundtruth-anchor_id tuple has a distinct id

So what's missing for completion?:
- the relation of the results to the input points ps.p
- any anchor_id in the array that appears more than once: keep a random one and delete all others. This also means removing the corresponding id from the id-array for every deleted anchor_id.
To eliminate duplicates, this might be the most efficient query in PostgreSQL:
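Along these lines (a sketch for the single-point case; p and d stand for your input parameters):

SELECT DISTINCT ON (anchor_id, groundtruth)
       *
FROM   measurement m
WHERE  st_distance(m.groundtruth, p) < d
ORDER  BY anchor_id, groundtruth;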
More about this query style:
As mentioned in the comments, this gives you an arbitrary pick. If you need random, somewhat more expensive:
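For instance (same sketch, with random() as an additional sort key to randomize the pick per group):

SELECT DISTINCT ON (anchor_id, groundtruth)
       *
FROM   measurement m
WHERE  st_distance(m.groundtruth, p) < d
ORDER  BY anchor_id, groundtruth, random();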
The second part is harder to optimize. An EXISTS semi-join will probably be the fastest choice. For a given table ps (p point), it can stop evaluating as soon as one p is close enough, and it keeps the rest of the query simple. Be sure to back that up with a matching GiST index.
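A sketch of that shape, plus the index (the index name is just an example):

SELECT DISTINCT ON (m.anchor_id, m.groundtruth)
       m.*
FROM   measurement m
WHERE  EXISTS (
   SELECT 1
   FROM   ps
   WHERE  st_distance(m.groundtruth, ps.p) < d
   )
ORDER  BY m.anchor_id, m.groundtruth, random();  -- drop random() for a cheaper, arbitrary pick

-- matching GiST index on the geometry column (used once you switch to ST_DWithin() below)
CREATE INDEX measurement_groundtruth_gix ON measurement USING gist (groundtruth);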
If you have an array as input, create a CTE with unnest() on the fly:
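For example (p_array being the array of input points, as in the question):

WITH ps AS (SELECT unnest(p_array) AS p)
SELECT DISTINCT ON (m.anchor_id, m.groundtruth)
       m.*
FROM   measurement m
WHERE  EXISTS (
   SELECT 1
   FROM   ps
   WHERE  st_distance(m.groundtruth, ps.p) < d
   )
ORDER  BY m.anchor_id, m.groundtruth, random();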
Update according to comment
If you only need a single row as answer, you can simplify:
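Roughly like this (a sketch; returns one arbitrary matching row):

WITH ps AS (SELECT unnest(p_array) AS p)
SELECT m.*
FROM   measurement m
WHERE  EXISTS (
   SELECT 1
   FROM   ps
   WHERE  st_distance(m.groundtruth, ps.p) < d
   )
LIMIT  1;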
Faster with ST_DWithin()
Probably more efficient with the function ST_DWithin() (and a matching GiST index!). To get one row (using a sub-select instead of a CTE here):
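Something like (a sketch):

SELECT m.*
FROM   measurement m
WHERE  EXISTS (
   SELECT 1
   FROM  (SELECT unnest(p_array) AS p) ps   -- sub-select instead of a CTE
   WHERE  ST_DWithin(m.groundtruth, ps.p, d)
   )
LIMIT  1;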
To get one row for every point p within distance d:
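A sketch (one random matching row per input point; drop random() for a cheaper, arbitrary pick):

SELECT DISTINCT ON (ps.p)
       ps.p, m.*
FROM  (SELECT unnest(p_array) AS p) ps
JOIN   measurement m ON ST_DWithin(m.groundtruth, ps.p, d)
ORDER  BY ps.p, random();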
Adding ORDER BY random() will make this query more expensive. Without random(), Postgres can just pick the first matching row from the GiST index. Otherwise all possible matches have to be retrieved and ordered randomly.
BTW, LIMIT 1 inside EXISTS is pointless. Read the manual at the link I provided or this related question.