Random record in ActiveRecord

Posted 2019-01-02 19:55

I need to get a random record from a table via ActiveRecord. I've been following Jamis Buck's example from 2006.

However, I've also come across another way via a Google search (can't attribute with a link due to new user restrictions):

 rand_id = rand(Model.count)
 rand_record = Model.first(:conditions => ["id >= ?", rand_id])

I'm curious how others on here have done it or if anyone knows what way would be more efficient.
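
For reference, the same offset-style idea in current ActiveRecord syntax would presumably look like this (still assuming ids start near 1 and have few gaps):

rand_id = rand(Model.count)
rand_record = Model.where("id >= ?", rand_id).first   # first orders by primary key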

22 answers
伤终究还是伤i · 2019-01-02 20:25

I benchmarked Sam's example in my app on Rails 4.2.8 with Benchmark. (I used rand(1..Category.count) because if rand returns 0 it raises ActiveRecord::RecordNotFound: Couldn't find Category with 'id'=0.) My results were:

require "benchmark"   # not needed inside a Rails console

def random1
  # look up a record by a random id (assumes ids are contiguous and start at 1)
  Category.find(rand(1..Category.count))
end

def random2
  # build a relation with a random offset (not executed inside the benchmark loop)
  Category.offset(rand(1..Category.count))
end

def random3
  # random offset plus a random limit of 1..3 (also not executed here)
  Category.offset(rand(1..Category.count)).limit(rand(1..3))
end

def random4
  # note: as written, pluck receives a number rather than a column name
  Category.pluck(rand(1..Category.count))
end

n = 100
Benchmark.bm(7) do |x|
  x.report("find")         { n.times { random1 } }
  x.report("offset")       { n.times { random2 } }
  x.report("offset_limit") { n.times { random3 } }
  x.report("pluck")        { n.times { random4 } }
end

                  user      system      total     real
find            0.070000   0.010000   0.080000 (0.118553)
offset          0.040000   0.010000   0.050000 (0.059276)
offset_limit    0.050000   0.000000   0.050000 (0.060849)
pluck           0.070000   0.020000   0.090000 (0.099065)
时光乱了年华 · 2019-01-02 20:26

In Rails 4 and 5, with PostgreSQL or SQLite, use RANDOM():

Model.order("RANDOM()").first

Presumably the same works for MySQL with RAND():

Model.order("RAND()").first

This is about 2.5 times faster than the approach in the accepted answer.

Caveat: This is slow for large datasets with millions of records, so you might want to add a limit clause.
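
If you need several random records at once, a sketch of the same query with a limit clause (10 is just an arbitrary count):

Model.order("RANDOM()").limit(10)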

荒废的爱情 · 2019-01-02 20:26

If you're using PostgreSQL 9.5+, you can take advantage of TABLESAMPLE to select a random record.

The two default sampling methods (SYSTEM and BERNOULLI) require that you specify the number of rows to return as a percentage of the total number of rows in the table.

-- Fetch 10% of the rows in the customers table.
SELECT * FROM customers TABLESAMPLE BERNOULLI(10);

This requires knowing the number of records in the table to pick an appropriate percentage, which may not be easy to determine quickly. Fortunately, the tsm_system_rows module lets you specify the number of rows to return directly.

CREATE EXTENSION tsm_system_rows;

-- Fetch a single row from the customers table.
SELECT * FROM customers TABLESAMPLE SYSTEM_ROWS(1);

To use this within ActiveRecord, first enable the extension within a migration:

class EnableTsmSystemRowsExtension < ActiveRecord::Migration[5.0]
  def change
    enable_extension "tsm_system_rows"
  end
end

Then modify the from clause of the query:

customer = Customer.from("customers TABLESAMPLE SYSTEM_ROWS(1)").first
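
If you use this in more than one place, it could be wrapped in a small class method; a minimal sketch (the random_sample name is just an assumption, and it relies on the extension enabled above):

class Customer < ApplicationRecord
  # Sample n rows via the tsm_system_rows extension.
  def self.random_sample(n = 1)
    from("#{table_name} TABLESAMPLE SYSTEM_ROWS(#{n.to_i})")
  end
end

customer = Customer.random_sample(1).first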

I don't know if the SYSTEM_ROWS sampling method will be entirely random or if it just returns the first row from a random page.

Most of this information was taken from a 2ndQuadrant blog post written by Gulcin Yildirim.

皆成旧梦 · 2019-01-02 20:27

I strongly recommend this gem for random records; it's designed specifically for tables with a lot of rows:

https://github.com/haopingfan/quick_random_records

All the other answers perform badly on a large database, except this gem:

  1. quick_random_records took only 4.6 ms in total.
  2. User.order('RAND()').limit(10) took 733.0 ms.
  3. The offset approach from the accepted answer took 245.4 ms in total.
  4. User.all.sample(10) took 573.4 ms.


Note: my table has only 120,000 users. The more records you have, the bigger the performance difference will be.

步步皆殇っ · 2019-01-02 20:28

After seeing so many answers I decided to benchmark them all on my PostgreSQL (9.6.3) database. I also used a smaller 100,000-row table, and I dropped Model.order("RANDOM()").first since it was already two orders of magnitude slower than the rest.

Using a table with 2,500,000 entries and 10 columns, the hands-down winner was the pluck method, almost 8 times faster than the runner-up (offset). I only ran this on a local server, so that number might be inflated, but the gap is big enough that pluck is what I'll end up using. It's also worth noting that this might cause issues if you pluck more than one result at a time, since each of those will be unique, i.e. less random.

Pluck wins running 100 times on my 25,000,000-row table. Edit: this time actually includes the pluck call inside the loop; if I take it out, it runs about as fast as simple iteration over the ids. However, it does take up a fair amount of RAM.

RandomModel                 user     system      total        real
Model.find_by(id: i)       0.050000   0.010000   0.060000 (  0.059878)
Model.offset(rand(offset)) 0.030000   0.000000   0.030000 ( 55.282410)
Model.find(ids.sample)     6.450000   0.050000   6.500000 (  7.902458)
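
For context, the pluck variant in that last row presumably looks something like this (Model and ids are placeholders):

ids = Model.pluck(:id)                  # loads every id into memory, hence the RAM usage noted above
random_record = Model.find(ids.sample)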

Here is the data from 2,000 runs on my 100,000-row table, to rule out randomness:

RandomModel       user     system      total        real
find_by:iterate  0.010000   0.000000   0.010000 (  0.006973)
offset           0.000000   0.000000   0.000000 (  0.132614)
"RANDOM()"       0.000000   0.000000   0.000000 ( 24.645371)
pluck            0.110000   0.020000   0.130000 (  0.175932)
心情的温度 · 2019-01-02 20:29

.order('RANDOM()').limit(limit) looks neat but is slow for large tables, because the database has to fetch and sort all rows even if limit is 1 (this happens inside the database, not in Rails). I'm not sure about MySQL, but this is the case in Postgres. More explanation here and here.

One solution for large tables is .from("products TABLESAMPLE SYSTEM(0.5)"), where 0.5 means 0.5%. However, I find this solution is still slow if you have WHERE conditions that filter out a lot of rows; I guess that's because TABLESAMPLE SYSTEM(0.5) takes its sample before the WHERE conditions apply.
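
For illustration, the combination being described looks roughly like this (the Product model and its in_stock condition are hypothetical):

product = Product.from("products TABLESAMPLE SYSTEM(0.5)").where(in_stock: true).first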

Another solution for large tables (but not very random) is:

products_scope.limit(sample_size).sample(limit)

where sample_size can be 100 (but not too large, otherwise it's slow and consumes a lot of memory), and limit can be 1. Note that although this is fast, it's not really random; it only samples from the first sample_size records.
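
Spelled out with concrete values (the Product scope here is only an example):

products_scope = Product.where(category: "books")
random_product = products_scope.limit(100).sample(1).first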

PS: The benchmark results in the answers above are not reliable (at least in Postgres), because a query run a second time can be significantly faster than its first run thanks to the database cache, and unfortunately there is no easy way to disable that cache in Postgres to make these benchmarks reliable.
