I need to randomly select values from one table, e.g. tableA.a_id which is a VARCHAR2, and use the value to insert into another table. For instance, assume 100 rows need to be inserted into tableX, each with three columns (a sequence number, a random number between 100 and 999, and a value from tableA.a_id):
insert into tableX
select
    rownum,
    round(dbms_random.value(100,999), 0),
    (select a_id from
        (
            SELECT a_id
            FROM tableA
            ORDER BY dbms_random.value
        )
        where rownum = 1)
from
    (select level from dual connect by level <= 100);
However, rather than picking a random row from tableA.a_id for each row, it selects the same value for all the rows, e.g.:
1 129 A-ID-48
2 849 A-ID-48
3 367 A-ID-48
However, if I execute the subquery repeatedly on its own, I get a new value each time (for obvious reasons), e.g.:
select a_id from
(
    SELECT a_id
    FROM tableA
    ORDER BY dbms_random.value
)
where rownum = 1;
The result after each execution would be:
A-ID-7
A-ID-48
A-ID-74
How do I alter the original query, or come up with a new one for that matter, so that it inserts a random value from tableA's a_id column for each row inserted into the destination table? Desired outcome:
1 129 A-ID-7
2 849 A-ID-48
3 367 A-ID-74
Update 1
Based on mathguy's answer, I updated the query for a single-table selection:
insert into tableX
select
    rownum,
    round(dbms_random.value(100,999), 0),
    a_id
from
(
    -- one random row number (1..count of tableA) per existing row of tableX
    select
        round(dbms_random.value(1, (select count(*) from tableA)), 0) tableX_rand_num
    from tableX
) x
join
(
    -- number every row of tableA (rownum is assigned before the order by)
    select
        a_id,
        dbms_random.value() rnd,
        rownum tableA_rownum
    from tableA
    order by rnd
) a
on x.tableX_rand_num = a.tableA_rownum
where rownum <= 100;
LIMITATION: The number of rows that can be inserted with this method is not independent of the number of records already in tableX (which the x subquery selects from). In other words, you can only insert as many records as there are rows in tableX; e.g. if tableX has 200 records and you wish to insert 1000, the query above would only allow up to 200 rows to be inserted.
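One way around this limitation might be to generate the driving rows with connect by level (as in the original query) instead of selecting from tableX, so the number of rows to insert is chosen explicitly. A rough, untested sketch along those lines (the alias names rand_num and tableA_rownum are only illustrative), inserting 1000 rows:

insert into tableX
select
    rownum,
    round(dbms_random.value(100,999), 0),
    a_id
from
(
    -- generate 1000 driving rows from dual instead of reading tableX
    select round(dbms_random.value(1, (select count(*) from tableA)), 0) rand_num
    from dual
    connect by level <= 1000
) x
join
(
    -- number the rows of tableA so each random index matches exactly one row
    select a_id, rownum tableA_rownum
    from tableA
) a
on x.rand_num = a.tableA_rownum;

Each generated row then joins to exactly one row of tableA, so the insert is no longer capped by the row count of tableX.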