Correct me if I'm wrong.
There are three approaches to get the nearest homes that users have created on my website:
- To create a table with two float columns (latitude, longitude) and run a query like this:
$latitude = 50;
$longitude = 60;
SELECT * FROM my_table
WHERE (latitude <= $latitude+10 AND latitude >= $latitude-10)
AND (longitude <= $longitude+10 AND longitude >= $longitude-10)
where 10 stands for some distance, e.g. 1 km. In this approach we can also use the haversine formula.
- To merge those two columns (latitude, longitude) into one column named point, of type POINT, and again search the rows one by one.
- To categorize multiple points (the coordinates of homes users have created) into a category for one section of a country, e.g. a city. When a query comes with $latitude and $longitude to find the nearest homes, I check which category that coordinate belongs to, IN ORDER NOT TO search all rows but only the rows of that section.
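For reference, the haversine distance mentioned in approach 1 can be computed directly in SQL. This is only a sketch; the bound parameters `:lat`/`:lng` (the user's position in degrees) are assumptions, and 6371 km is the mean Earth radius:

```sql
-- Haversine distance in km between the user (:lat, :lng) and each row.
SELECT t.*,
       2 * 6371 * ASIN(SQRT(
           POW(SIN(RADIANS(t.latitude - :lat) / 2), 2)
         + COS(RADIANS(:lat)) * COS(RADIANS(t.latitude))
           * POW(SIN(RADIANS(t.longitude - :lng) / 2), 2)
       )) AS distance_km
FROM my_table t
ORDER BY distance_km
LIMIT 5;
```

Note that, as written, this still evaluates the formula for every row of the table.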
I guess approach number 1 is slow because of the conditions evaluated for each row of the table, and still slow if I use the haversine formula.
If I use ST_Distance it seems slow again, because it also performs lots of calculations.
But if I use approach number 3, it seems faster to check only the section a specific point belongs to than to check all rows. I know how to set a point for each home; however, I don't know how to group multiple home positions into a section, maybe in another table.
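One simple way to build the "sections" of approach 3 is a grid cell number derived from the coordinates. A sketch, under the assumption of one cell per whole degree and a generated column (MySQL 5.7+ / recent MariaDB); near a cell border the neighbouring cells would also have to be searched:

```sql
-- Hypothetical grid "category": one cell per whole degree.
-- FLOOR(latitude) * 360 + FLOOR(longitude) is unique because
-- longitude spans fewer than 360 whole degrees.
ALTER TABLE my_table
    ADD COLUMN grid_cell INT AS (FLOOR(latitude) * 360 + FLOOR(longitude)) STORED,
    ADD INDEX idx_grid_cell (grid_cell);

-- Search only the cell the user's coordinate falls into:
SELECT *
FROM my_table
WHERE grid_cell = FLOOR(:lat) * 360 + FLOOR(:lng);
```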
BTW in new versions of MySQL and MariaDB Spatial Indexes are supported in InnoDB.
My questions:
Is approach number 1 really slow, or do the ST_* functions work the same way, checking all rows one by one with the formulas mentioned above? Which one is faster?
Does approach number 2 do anything beyond simple conditions to make it faster? I mean, does anything change when using the POINT type instead of floats, and ST_* functions instead of doing the math myself? I want to know whether the algorithm is different.
If approach number 3 is the fastest of these three, how can I categorize points so that I don't have to search all rows in a table?
How can I use Spatial Indexes to make it as fast as possible?
If any other approach exists that I didn't mention, could you please tell me how to get the nearest homes just from coordinates in MySQL/MariaDB with PHP/Laravel?
Thanks All
Which formula you use for the distance doesn't matter much. What matters much more is the number of rows which you have to read, process and sort. In the best case you can use an index for a condition in the WHERE clause to limit the number of processed rows. You can try to categorize your locations - but it depends on the nature of your data whether that is going to work well. You would also need to find out which "category" to use. A more general solution would be to use a SPATIAL INDEX and the ST_Within() function.
Now let's run some tests.
In my DB (MySQL 5.7.18) I have the following table:
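The original table definition is not reproduced here; a plausible reconstruction, with column names inferred from the surrounding text (`latitude`, `longitude`, `geoPoint`, the `cities` table), might look like:

```sql
-- Sketch only: the answer's actual column list is unknown.
CREATE TABLE cities (
    id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    city      VARCHAR(100) NOT NULL,
    latitude  DOUBLE NOT NULL,
    longitude DOUBLE NOT NULL,
    geoPoint  POINT NOT NULL,       -- redundant: POINT(longitude, latitude)
    SPATIAL INDEX (geoPoint)
) ENGINE=InnoDB;
```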
The data comes from Free World Cities Database and contains 3173958 (3.1M) rows.
Note that `geoPoint` is redundant and equal to `POINT(longitude, latitude)`.

Consider the user is located somewhere in London, and you want to find the nearest location from the `cities` table. A "trivial" query would be:
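The query itself did not survive in this copy; a sketch of such a "trivial" query, assuming a user position of roughly London (@lat = 51.5, @lng = 0.0):

```sql
SET @lat = 51.5, @lng = 0.0;   -- assumed user position near London

SELECT c.*,
       ST_Distance_Sphere(POINT(@lng, @lat),
                          POINT(c.longitude, c.latitude)) AS distance
FROM cities c
ORDER BY distance
LIMIT 1;
```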
The result is
Execution time: ~ 4.970 sec
If you use the less complex function `ST_Distance()`, you get the same result with an execution time of ~4.580 sec, which is not that much of a difference.

Note that you don't need to store a geo point in the table. You can just as well use `POINT(c.longitude, c.latitude)` instead of `c.geoPoint`. To my surprise it is even faster (~3.6 sec for `ST_Distance` and ~4.0 sec for `ST_Distance_Sphere`). It might be even faster if I didn't have a `geoPoint` column at all. But that still doesn't matter much, since you don't want the user to wait so long for a response if you can do better.

Now let's look at how we can use the SPATIAL INDEX with `ST_Within()`. You need to define a polygon which will contain the nearest location. A simple way is to use `ST_Buffer()`, which will generate a polygon with 32 points that is nearly a circle*.
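A sketch of such a query; the user position and the `@radius` of 0.1 degrees are assumptions, not values from the original test:

```sql
SET @lat = 51.5, @lng = 0.0, @radius = 0.1;   -- assumed values (degrees)

SELECT c.*
FROM cities c
WHERE ST_Within(c.geoPoint,
                ST_Buffer(POINT(@lng, @lat), @radius))
ORDER BY ST_Distance_Sphere(POINT(@lng, @lat), c.geoPoint)
LIMIT 1;
```

The `ST_Within()` condition lets the SPATIAL index narrow the candidates before any distance is computed.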
The result is the same. The execution time is ~ 0.000 sec (that's what my client (HeidiSQL) says).
* Note that the `@radius` is given in degrees, and thus the polygon will be more like an ellipse than a circle. But in my tests I always got the same result as with the simple and slow solution. I would, though, investigate more edge cases before using it in production code.

Now you need to find the optimal radius for your application/data. If it's too small, you might get no results or miss the nearest point. If it's too big, you might need to process too many rows.
Here are some numbers for the given test case:
Bounding Box and Haversine
In your brief `SELECT`, you are using the "bounding box" approach, wherein a crude square is drawn on the map. It, however, has a couple of flaws: a degree of longitude spans fewer kilometres the farther you get from the equator, so `cos()` is needed to fix this.

Having an index on each of latitude and longitude helps the bounding box, which filters the rows significantly; then the optional haversine test refines the crude square to a true circular distance.
This approach has "medium" performance -- One of the indexes will be used with the bounding box, thereby quickly limiting the candidates to an east-west (or north-south) stripe around the globe. But that may still be a lot of candidates.
By having filtered out most of the rows, the number of Haversine calls is not too bad; don't worry about the performance of the function.
If you have one million homes, the final bounding box that contains 5 homes (plus a few that fail the haversine check) will probably involve touching a few thousand rows -- due to using only one of the two indexes. This is still much better than fetching all million rows and checking each one with the distance function.
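Put together, the bounding-box-plus-haversine approach described above might look like this sketch (the `homes` table name, `@radius_km`, and the 111.195 km-per-degree-of-latitude constant are assumptions):

```sql
SET @lat = 51.5, @lng = 0.0, @radius_km = 10;   -- assumed inputs
SET @dlat = @radius_km / 111.195;               -- km per degree of latitude
SET @dlng = @dlat / COS(RADIANS(@lat));         -- cos() correction for longitude

SELECT h.*,
       2 * 6371 * ASIN(SQRT(                    -- optional haversine refinement
           POW(SIN(RADIANS(h.latitude - @lat) / 2), 2)
         + COS(RADIANS(@lat)) * COS(RADIANS(h.latitude))
           * POW(SIN(RADIANS(h.longitude - @lng) / 2), 2)
       )) AS distance_km
FROM homes h
WHERE h.latitude  BETWEEN @lat - @dlat AND @lat + @dlat   -- can use INDEX(latitude)
  AND h.longitude BETWEEN @lng - @dlng AND @lng + @dlng   -- or INDEX(longitude)
HAVING distance_km <= @radius_km
ORDER BY distance_km
LIMIT 5;
```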
POINT and SPATIAL index
Switching to
POINT
requires switching to aSPATIAL
index. In this mode,ST_Distance_Sphere()
is available instead of the haversine. (Caution: that function exists only in very recent versions.)By having filtered out most of the rows, the number of calls to
ST_Distance
orST_Distance_Sphere
is not too bad; don't worry about the performance of the function.SPATIAL
searches use R-Trees. I do not have a good feel for their performance in your query.Approach 3
By starting with another categorization of points, you add complexity. You also add the need to check adjacent regions to see if there are nearby points. I can't judge the relative performance without more details.
My Approach
I have some complex code that scales to arbitrarily many points. Since your dataset is probably small enough to be cached in RAM, it may be overkill for you. http://mysql.rjweb.org/doc.php/latlng
For only a million homes, the pair of indexes above might be "good enough" so that you don't need to resort to "my algorithm". My algorithm will touch only about 20 rows to get the desired 5 -- regardless of the total number of rows.
Other Notes
If you store both lat/lng and `POINT`, the table will be bulky; keep this in mind if trying to mix bounding boxes and `ST` functions.