Given a web app (Java, Spring, Hibernate, and Sybase as the DB) with several (say 5) different search screens, I first want to check whether the result of a user's search criteria would exceed a limit, say 1000 rows. Huge results, going past 1000 rows, can happen even when the user provides reasonable filters and criteria.
Is doing it this way recommended:
- select count(*) from table -- clauses, etc. here
- then, if the count is > 1000, skip the actual search and show a limit error (tell the user to refine the search)
- otherwise, run the actual search and return the result set to the user
Or is there a better solution to handle this?
If this is the way to go, my follow-up question would be: how can I avoid duplicating the SQL query? As I understand it, this approach requires declaring the same search SQL twice, with only the select clause changed to count(*).
UPDATES
Additionally, I want to avoid two things: 1. the processing cost of executing the actual SQL, and 2. the loading/mapping of the domain objects by the ORM (Hibernate in this case). Both 1 and 2 are avoided when I detect that the count is > 1000.
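For concreteness, here is a rough sketch of the count-then-fetch flow described above, using Hibernate's classic Criteria API so the same restrictions are reused for both the count and the real query (the class, method, and limit below are placeholders for illustration, not code from the actual application):

```java
import java.util.List;

import org.hibernate.Criteria;
import org.hibernate.Session;
import org.hibernate.criterion.Criterion;
import org.hibernate.criterion.Projections;

public class LimitedSearch {

    private static final int MAX_RESULTS = 1000;

    public <T> List<T> search(Session session, Class<T> entityType, Criterion... filters) {
        Criteria criteria = session.createCriteria(entityType);
        for (Criterion filter : filters) {
            criteria.add(filter); // the same user criteria drive both queries
        }

        // Step 1: count only -- no entities are loaded or mapped by Hibernate.
        long count = ((Number) criteria
                .setProjection(Projections.rowCount())
                .uniqueResult()).longValue();

        if (count > MAX_RESULTS) {
            // Abort before the expensive query; the UI asks the user to refine.
            throw new IllegalStateException("Search matches " + count
                    + " rows, limit is " + MAX_RESULTS + "; please refine the criteria.");
        }

        // Step 2: clear the projection and run the real query on the same Criteria.
        criteria.setProjection(null);
        criteria.setResultTransformer(Criteria.ROOT_ENTITY);
        @SuppressWarnings("unchecked")
        List<T> results = criteria.list();
        return results;
    }
}
```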
Depending on how you are retrieving the rows from the row set, you could simply filter the results at that level, i.e. stop reading once you hit the limit.
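For example, with plain JDBC it might look like the following (the query, column name, and limit are made-up placeholders, just to show the idea):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class TrimmedSearch {

    private static final int LIMIT = 1000;

    public List<String> searchNames(Connection connection, String city) throws SQLException {
        List<String> names = new ArrayList<>();
        boolean trimmed = false;

        try (PreparedStatement ps = connection.prepareStatement(
                "select name from customer where city = ?")) {
            ps.setString(1, city);
            ps.setMaxRows(LIMIT + 1); // hint to the driver; not every driver enforces it
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    if (names.size() >= LIMIT) {
                        trimmed = true; // more rows exist than we are willing to show
                        break;
                    }
                    names.add(rs.getString("name"));
                }
            }
        }

        if (trimmed) {
            // Surface a warning in the UI that the result set was trimmed.
            System.out.println("Showing first " + LIMIT + " rows only; please refine the search.");
        }
        return names;
    }
}
```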
You may want to warn the user that their result set has been trimmed, though ;)
I wouldn't run a COUNT(*) at all; I'd just run the query with a LIMIT of 1001. It's likely you are generating the exact same result set for the count (i.e., to do the COUNT, the database has to build the result set anyway), so the next hit will come from the cache, or at worst be recalculated. You're just doing the same work twice.

We followed the same procedure for our application as well, and yes, the only difference will be placing count(1) instead of * in the SQL.
However, you might need to be aware that on occasion the count query is the one that takes more time than fetching a subset of the results.
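For reference, here is a hedged sketch of the "cap the query instead of counting" idea with Hibernate: setMaxResults lets Hibernate (or, failing that, the JDBC driver) apply the row limit, so at most limit+1 entities are fetched and mapped. The class and parameters are illustrative, not taken from the answers above:

```java
import java.util.List;

import org.hibernate.Criteria;
import org.hibernate.Session;
import org.hibernate.criterion.Criterion;

public class CappedSearch {

    private static final int LIMIT = 1000;

    public <T> List<T> searchCapped(Session session, Class<T> entityType, Criterion... filters) {
        Criteria criteria = session.createCriteria(entityType);
        for (Criterion filter : filters) {
            criteria.add(filter);
        }
        criteria.setMaxResults(LIMIT + 1); // fetch one extra row to detect overflow

        @SuppressWarnings("unchecked")
        List<T> results = criteria.list();

        if (results.size() > LIMIT) {
            // More rows exist than the cap; trim the sentinel row here and have
            // the caller warn the user that the result set was truncated.
            return results.subList(0, LIMIT);
        }
        return results;
    }
}
```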