How can I quickly create a large file on a Linux (Red Hat Linux) system?
dd will do the job, but reading from /dev/zero
and writing to the drive can take a long time when you need a file several hundred GB in size for testing... If you need to do that repeatedly, the time really adds up.
I don't care about the contents of the file, I just want it to be created quickly. How can this be done?
Using a sparse file won't work for this. I need the file to be allocated disk space.
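For reference, the kind of dd invocation described above (file name and size are hypothetical):

```
# slow baseline: actually reads and writes every byte
dd if=/dev/zero of=/tmp/testfile bs=1M count=102400   # 100 GiB
```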
truncate -s 10M output.file
will create a 10 M file instantaneously (M stands for 1024*1024 bytes, MB stands for 1000*1000 - same with K, KB, G, GB...)
EDIT: as many have pointed out, this will not physically allocate the file on your device. With this you could actually create an arbitrarily large file, regardless of the available space on the device.
So, when doing this, you will be deferring physical allocation until the file is accessed. If you're mapping this file to memory, you may not have the expected performance.
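A quick way to see this, assuming a file created with the truncate command above: the apparent size and the actual disk usage will differ.

```
ls -lh output.file   # apparent size, e.g. 10M
du -h output.file    # actual disk usage, typically 0 until blocks are written
```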
But this is still a useful command to know
Shameless plug: OTFFS is a file system that provides arbitrarily large (well, almost; exabytes is the current limit) files of generated content. It is Linux-only, plain C, and in early alpha.
See https://github.com/s5k6/otffs.
One approach: if you can guarantee unrelated applications won't use the files in a conflicting manner, just create a pool of files of varying sizes in a specific directory, then create links to them when needed.
For example, have a pool of files called:
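For instance (these names are illustrative, following the size-letter pattern used in the link command below):

```
/home/bigfiles/512M-A
/home/bigfiles/512M-B
/home/bigfiles/1024M-A
/home/bigfiles/1024M-B
```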
Then, if you have an application that needs a 1G file called /home/oracle/logfile, execute:
ln /home/bigfiles/1024M-A /home/oracle/logfile
If it's on a separate filesystem, you will have to use a symbolic link.
The A/B/etc files can be used to ensure there's no conflicting use between unrelated applications.
The link operation is about as fast as you can get.
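Putting it together, a minimal sketch; the pool creation is the slow one-time step, and dd here is just one way to pre-fill the pool files:

```
# one-time, slow: build the pool
mkdir -p /home/bigfiles
dd if=/dev/zero of=/home/bigfiles/1024M-A bs=1M count=1024

# per-use, effectively instant: hard-link a pool file into place
ln /home/bigfiles/1024M-A /home/oracle/logfile
# or, across filesystems, a symbolic link:
ln -s /home/bigfiles/1024M-A /home/oracle/logfile
```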
You can use "yes" command also. The syntax is fairly simple:
Press "Ctrl + C" to stop this, else it will eat up all your space available.
To clean this file afterwards, truncate it back to zero bytes.
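For example (the path is a placeholder); either of these empties the file without deleting it:

```
: > /tmp/large.file
truncate -s 0 /tmp/large.file
```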
This is the fastest I could do (which is not fast) under the constraints described below.
This is the gist of it:
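A hedged sketch of one approach that fits these constraints, not necessarily the original commands (paths and sizes are assumptions): generate a small block of random data once, then replicate it to the target size.

```
# one small, slow-ish read of /dev/urandom (1 MiB)
dd if=/dev/urandom of=/tmp/seed.bin bs=1024 count=1024
# concatenate the seed block 1024 times to produce a 1 GiB file
for i in $(seq 1024); do cat /tmp/seed.bin; done > /mnt/data/bigfile
```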
In our case this is for an embedded Linux system; it works well enough, but we would prefer something faster.
FYI, the command "dd if=/dev/urandom of=outputfile bs=1024 count=XX" was so slow as to be unusable.
Examples, where seek is the size of the file you want in bytes:
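For instance (the target size here is hypothetical):

```
# extends the file to 1 GiB without writing any data blocks (the result is sparse)
dd if=/dev/zero of=largefile bs=1 count=0 seek=1073741824
```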
From the dd manpage:
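Presumably the passage meant is the seek entry; in GNU coreutils dd(1) it reads:

```
seek=N   skip N obs-sized blocks at start of output
```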