I've got a SQL Server table with about 50,000 rows in it. I want to select about 5,000 of those rows at random. I've thought of a complicated way: creating a temp table with a "random number" column, copying my table into that, looping through the temp table and updating each row with RAND(), and then selecting from that table where the random number column < 0.1. I'm looking for a simpler way to do it, in a single statement if possible.
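For reference, a sketch of that complicated approach (dbo.MyTable and its integer key id are placeholder names):

    -- Copy the keys into a temp table with a placeholder random column.
    SELECT id, CAST(0 AS float) AS rnd
    INTO #sampled
    FROM dbo.MyTable;

    -- RAND() is evaluated once per statement, so a single set-based UPDATE
    -- would give every row the same value; hence the row-by-row loop.
    DECLARE @id int;
    DECLARE c CURSOR LOCAL STATIC FOR SELECT id FROM #sampled;
    OPEN c;
    FETCH NEXT FROM c INTO @id;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        UPDATE #sampled SET rnd = RAND() WHERE id = @id;
        FETCH NEXT FROM c INTO @id;
    END;
    CLOSE c;
    DEALLOCATE c;

    -- Keep roughly 10 percent of the rows.
    SELECT t.*
    FROM dbo.MyTable t
    JOIN #sampled s ON s.id = t.id
    WHERE s.rnd < 0.1;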
This article suggests using the NEWID() function. That looks promising, but I can't see how I could reliably select a certain percentage of rows.
Anybody ever do this before? Any ideas?
This is a combination of the initial seed idea and a checksum, which looks to me to give properly random results without the cost of NEWID():
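A minimal sketch of that combination, assuming a 10 percent sample and a placeholder table name. RAND() supplies a fresh seed per query run, while BINARY_CHECKSUM(*) differs per row, so each row lands on an effectively random value:

    SELECT *
    FROM dbo.MyTable
    -- BINARY_CHECKSUM(*) * RAND() yields a per-row pseudo-random value;
    -- % 100 maps it to 0..99, and < 10 keeps roughly 10 percent of rows.
    WHERE (ABS(CAST((BINARY_CHECKSUM(*) * RAND()) AS int)) % 100) < 10;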
newid()/order by will work, but will be very expensive for large result sets because it has to generate an id for every row, and then sort them.
TABLESAMPLE() is good from a performance standpoint, but you will get clumping of results (all rows on a page will be returned).
For a better-performing true random sample, the best approach is to filter out rows randomly. I found the following code sample in the SQL Server Books Online article Limiting Result Sets by Using TABLESAMPLE:
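The sample runs against the AdventureWorks Sales.SalesOrderDetail table and looks roughly like this: CHECKSUM(NEWID(), SalesOrderID) gives each row a value whose low 31 bits are scaled into the range 0 to 1 and compared against the desired fraction (here 1 percent):

    SELECT *
    FROM Sales.SalesOrderDetail
    -- Mask to the low 31 bits, scale into [0, 1], and keep rows
    -- whose value falls below the sampling fraction.
    WHERE 0.01 >= CAST(CHECKSUM(NEWID(), SalesOrderID) & 0x7fffffff AS float)
                  / CAST(0x7fffffff AS int);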
When run against a table with 1,000,000 rows, here are my results:
If you can get away with using TABLESAMPLE, it will give you the best performance. Otherwise use the newid()/filter method. newid()/order by should be a last resort if you have a large result set.
In response to the "pure trash" comment concerning large tables: you could do it like this to improve performance.
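A sketch of that improvement, with [yourtable] and [yourPk] standing in for the table and its primary key: randomize and limit only the narrow key column, then join back for the full rows:

    SELECT *
    FROM [yourtable]
    WHERE [yourPk] IN
        -- Sort only the key column by NEWID(), not the whole rows.
        (SELECT TOP 10 PERCENT [yourPk]
         FROM [yourtable]
         ORDER BY NEWID());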
The cost of this will be the key scan of values plus the join cost, which on a large table with a small percentage selection should be reasonable.
Depending on your needs, TABLESAMPLE will get you a nearly-as-random sample with better performance. It is available in MS SQL Server 2005 and later.

TABLESAMPLE returns data from random pages instead of random rows, and therefore does not even retrieve the data that it will not return.

On a very large table I tested, the newid() query took more than 20 minutes, while the TABLESAMPLE query took 2 minutes (both are sketched below).

Performance will also improve on smaller samples with TABLESAMPLE, whereas it will not with newid().

Please keep in mind that this is not as random as the newid() method, but it will give you a decent sampling.

See the MSDN page.
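Roughly what the two queries being compared look like (the table name and the 1 percent sample size are placeholders):

    -- Row-level randomness: generates a GUID for every row, then sorts them all.
    SELECT TOP 1 PERCENT *
    FROM [tablename]
    ORDER BY NEWID();

    -- Page-level sampling: reads only the sampled pages, so it is much faster,
    -- but rows arrive in page-sized clumps.
    SELECT *
    FROM [tablename] TABLESAMPLE (1 PERCENT);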
This link has an interesting comparison between ORDER BY NEWID() and other methods for tables with 1, 7, and 13 million rows.
Often, when questions about how to select random rows are asked in discussion groups, the NEWID query is proposed; it is simple and works very well for small tables.
However, the NEWID query has a big drawback when you use it for large tables. The ORDER BY clause causes all of the rows in the table to be copied into the tempdb database, where they are sorted. This causes two problems:

1. The sorting operation has a high cost associated with it. Sorting can use a lot of disk I/O and can run for a long time.
2. In the worst-case scenario, tempdb can run out of space. In the best-case scenario, tempdb can take up a large amount of disk space that will not be reclaimed without a manual shrink command.
What you need is a way to select rows randomly that will not use tempdb and will not get much slower as the table gets larger. Here is a new idea on how to do that:
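The article's query is essentially the same checksum filter shown earlier; roughly (Table1 is the article's placeholder table name):

    SELECT *
    FROM Table1
    -- BINARY_CHECKSUM(*) varies per row; RAND() seeds the run as a whole.
    -- The modulo maps each row to 0..99, and < 10 keeps ~10 percent.
    WHERE (ABS(CAST((BINARY_CHECKSUM(*) * RAND()) AS int)) % 100) < 10;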
The basic idea behind this query is that we want to generate a random number between 0 and 99 for each row in the table, and then choose all of those rows whose random number is less than the value of the specified percent. In this example, we want approximately 10 percent of the rows selected randomly; therefore, we choose all of the rows whose random number is less than 10.
Please read the full article on MSDN.
Didn't quite see this variation in the answers yet. I had an additional constraint where I needed, given an initial seed, to select the same set of rows each time.
For MS SQL:
Minimum example:
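Presumably along these lines (table_name is a placeholder, and the 10 percent figure is illustrative):

    SELECT TOP 10 PERCENT *
    FROM table_name
    -- RAND(CHECKSUM(*)) seeds RAND with each row's checksum; for fixed data
    -- this ordering is deterministic across runs.
    ORDER BY RAND(CHECKSUM(*));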
Normalized execution time: 1.00
NewId() example:
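Presumably:

    SELECT TOP 10 PERCENT *
    FROM table_name
    ORDER BY NEWID();  -- fresh GUID per row, so a different sample every run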
Normalized execution time: 1.02
NewId() is insignificantly slower than rand(checksum(*)), so you may not want to use it against large record sets.

Selection with Initial Seed:
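A sketch of the seeded variant (the seed expression and the modulo are illustrative; any deterministic math over the checksum should work):

    DECLARE @seed int;
    SET @seed = YEAR(GETDATE()) * MONTH(GETDATE());  -- any other initial seed here

    SELECT TOP 10 PERCENT *
    FROM table_name
    -- Mixing the per-row checksum with @seed keeps the selection reproducible
    -- for a given seed, and different across seeds.
    ORDER BY RAND(CHECKSUM(*) % @seed);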
If you need to select the same set given a seed, this seems to work.