I'm trying to find a definitive answer and can't, so I'm hoping someone might know.
I'm developing a C++ app using GCC 4.x on Linux (32-bit OS). This app needs to be able to read files > 2GB in size.
I would really like to use iostream stuff vs. FILE pointers, but I can't find out whether the large-file #defines (`_LARGEFILE_SOURCE`, `_LARGEFILE64_SOURCE`, `_FILE_OFFSET_BITS=64`) have any effect on the iostream headers.
I'm compiling on a 32-bit system. Any pointers would be helpful.
If you are using GCC, you can take advantage of a GCC extension called `__gnu_cxx::stdio_filebuf`, which ties an IOStream to a C stdio stream (`FILE*`) or file descriptor.
You need to define two things before any system header is included: `_LARGEFILE_SOURCE` and `_FILE_OFFSET_BITS=64` (both from the question; on glibc, the latter is what switches the underlying C I/O to 64-bit offsets).
This has already been decided for you when `libstdc++` was compiled, and normally depends on whether or not `_GLIBCXX_USE_LFS` was defined in `c++config.h`.
If in doubt, pass your executable (or `libstdc++.so`, if linking against it dynamically) through `readelf -r` (or through `strings`) and see whether your binary/`libstdc++` linked against `fopen`/`fseek`/etc. or `fopen64`/`fseeko64`/etc.

UPDATE
You don't have to worry about the 2GB limit as long as you don't need/attempt to `fseek` or `ftell`; you just read from or write to the stream.