I need to create relatively big (1-8 GB) files. What is the fastest way to do so on Windows using C or C++? I need to create them on the fly, and speed is really an issue. The file will be used for storage emulation, i.e. it will be accessed randomly at different offsets, and I need all of the storage to be preallocated but not initialized. Currently we are writing all of the storage with dummy data, and it's taking too long.
Thanks.
I am aware that your question is tagged with Windows, and Brian R. Bondy gave you the best answer to your question if you know for certain you will not have to port your application to other platforms. However, if you might have to port your application to other platforms, you might want to do something more like what Adrian Cornish proposed in his answer to the question "How to create file of 'x' size?".
Of course, there is an added twist. The answer proposed by Adrian Cornish makes use of the fseek function which has the following signature.
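```c
/* Standard C, declared in <stdio.h>. The offset parameter is a long,
 * which is 32 bits on Windows, so it cannot address past 2 GB. */
int fseek(FILE *stream, long offset, int origin);
```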
The problem is that you want to create a very large file with a size beyond the range of a 32-bit integer, so you need to use the 64-bit equivalent of fseek. Unfortunately, it has different names on different platforms (for example, _fseeki64 with MSVC and fseeko with a 64-bit off_t on POSIX systems).
The header file LargeFileSupport.h found at http://mosaik-aligner.googlecode.com/svn-history/r2/trunk/src/CommonSource/Utilities/LargeFileSupport.h offers a solution to this problem.
This would allow you to write the following function.
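A sketch of what that function might look like, assuming the off_type typedef and fseek64 macro that LargeFileSupport.h provides (those names come from that header, not from the standard library):

```c
#include <stdbool.h>
#include <stdio.h>
#include "LargeFileSupport.h"  /* provides off_type and fseek64 */

/* Create a file of the given size by seeking to where the last byte
   should go and writing a single byte there. */
bool createFile(const char *filename, off_type fileSize)
{
    FILE *fp = fopen(filename, "wb");
    if (fp == NULL)
        return false;

    bool ok = fseek64(fp, fileSize - 1, SEEK_SET) == 0
           && fputc('\0', fp) != EOF;
    fclose(fp);
    return ok;
}
```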
I thought I would add this just in case the information would be of use to you.
Use the Win32 API: CreateFile, SetFilePointerEx, SetEndOfFile, and CloseHandle, in that order.
The trick is in the SetFilePointerEx function. From MSDN: it is not an error to set the file pointer to a position beyond the end of the file; the size of the file does not increase until you call the SetEndOfFile, WriteFile, or WriteFileEx function.
Windows Explorer actually does this same thing when copying a file from one location to another: it preallocates the destination file so the file system does not have to keep reallocating it on a fragmented disk.
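A minimal sketch of that sequence (the helper name and parameters are my own; error handling is kept short):

```c
#include <windows.h>

/* Preallocate a file of the given size without writing any data.
   On NTFS, regions that are never written to read back as zeros. */
BOOL PreallocateFile(LPCWSTR path, LONGLONG size)
{
    HANDLE h = CreateFileW(path, GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return FALSE;

    LARGE_INTEGER li;
    li.QuadPart = size;
    /* Moving the pointer past the end is not an error; the size
       only changes once SetEndOfFile is called. */
    BOOL ok = SetFilePointerEx(h, li, NULL, FILE_BEGIN)
           && SetEndOfFile(h);
    CloseHandle(h);
    return ok;
}
```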
Use "fsutil" command:
```
E:\VirtualMachines>fsutil file createnew
Usage : fsutil file createnew <filename> <length>
   Eg : fsutil file createnew C:\testfile.txt 1000
```
Regards
P.S. It works on Windows 2000/XP/7.
If you're using NTFS, then sparse files are the way to go.
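A sketch of how a sparse file can be created (the helper name and path handling are mine; FSCTL_SET_SPARSE is the documented NTFS control code):

```c
#include <windows.h>
#include <winioctl.h>

/* Mark a freshly created file as sparse, then set its logical size.
   Ranges that are never written occupy no disk space and read as zeros. */
BOOL CreateSparseFile(LPCWSTR path, LONGLONG size)
{
    HANDLE h = CreateFileW(path, GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return FALSE;

    DWORD bytesReturned;
    BOOL ok = DeviceIoControl(h, FSCTL_SET_SPARSE,
                              NULL, 0, NULL, 0, &bytesReturned, NULL);

    LARGE_INTEGER li;
    li.QuadPart = size;
    ok = ok && SetFilePointerEx(h, li, NULL, FILE_BEGIN)
            && SetEndOfFile(h);
    CloseHandle(h);
    return ok;
}
```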
Well, this solution is not bad, but the thing you are really looking for is SetFileValidData. As MSDN says, it sets the valid data length of the specified file, which lets you extend a file without the system zero-filling it first; note that the calling process must have the SE_MANAGE_VOLUME_NAME privilege enabled.
So this always leaves the data on the disk as it is, whereas SetFilePointerEx with SetEndOfFile has to present all of the extended data as zeros, so a big allocation takes some time.
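A sketch of that approach (the helper name is mine, and I am assuming the privilege has already been enabled, e.g. via AdjustTokenPrivileges):

```c
#include <windows.h>

/* Extend a file without zero-filling it. The process must have the
   SE_MANAGE_VOLUME_NAME privilege enabled, or SetFileValidData fails. */
BOOL CreateUninitializedFile(LPCWSTR path, LONGLONG size)
{
    HANDLE h = CreateFileW(path, GENERIC_READ | GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return FALSE;

    LARGE_INTEGER li;
    li.QuadPart = size;
    BOOL ok = SetFilePointerEx(h, li, NULL, FILE_BEGIN)
           && SetEndOfFile(h)
           /* Skip the zero-fill; whatever was on disk becomes readable,
              which is why the privilege is required. */
           && SetFileValidData(h, size);
    CloseHandle(h);
    return ok;
}
```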
Check out memory-mapped files.
They very much match the use case you describe: high performance and random access.
I believe they don't need to be created as large files. You just set a large maximum size on them, and they will be expanded when you write to parts you haven't touched before.
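A minimal sketch of that approach (the file name is a placeholder; CreateFileMapping grows the file to the mapping size if the file is smaller):

```c
#include <windows.h>

int main(void)
{
    const LONGLONG size = 8LL * 1024 * 1024 * 1024;  /* 8 GB backing file */

    HANDLE file = CreateFileW(L"storage.bin", GENERIC_READ | GENERIC_WRITE,
                              0, NULL, CREATE_ALWAYS,
                              FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE)
        return 1;

    /* Creating the mapping extends the empty file to the full size. */
    HANDLE mapping = CreateFileMappingW(file, NULL, PAGE_READWRITE,
                                        (DWORD)(size >> 32), (DWORD)size,
                                        NULL);
    if (mapping == NULL)
    {
        CloseHandle(file);
        return 1;
    }

    /* Map a small window rather than the whole file; map views at
       whatever offsets the random accesses require. */
    unsigned char *view = (unsigned char *)MapViewOfFile(
        mapping, FILE_MAP_WRITE, 0, 0, 1024 * 1024);
    if (view != NULL)
    {
        view[0] = 0x42;  /* random-access write through the view */
        UnmapViewOfFile(view);
    }

    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```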