I want to perform disk I/O for a program that uses too much RAM. I work with matrices of doubles, and I think writing them to disk as raw bytes is the fastest way (I need to preserve the full double precision).
How can I do this portably?
I found this code (here), but the author says it's not portable:
#include <iostream>
#include <fstream>

int main()
{
    using namespace std;
    ofstream ofs( "atest.txt", ios::binary );
    if ( ofs ) {
        double pi = 3.14;
        ofs.write( reinterpret_cast<char*>( &pi ), sizeof pi );
        // Close the file to unlock it
        ofs.close();

        // Use a new object so we don't have to worry
        // about error states in the old object
        ifstream ifs( "atest.txt", ios::binary );
        double read;
        if ( ifs ) {
            ifs.read( reinterpret_cast<char*>( &read ), sizeof read );
            cout << read << '\n';
        }
    }
    return 0;
}
He may have been referring to binary representation in general, not just sizeof.
Related question: C - Serialization of the floating point numbers (floats, doubles)
The C++ standard doesn't specify the binary representation of floating-point numbers at all. Most compilers follow IEEE 754, though, so a little unit testing should ensure the behavior you want if you know your target platforms.
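If you go that route, a compile-time check is even cheaper than a unit test. A minimal sketch, assuming C++11 or later (is_iec559 reports whether double conforms to IEC 559/IEEE 754):

#include <limits>

// Refuse to compile on targets where double is not IEEE 754,
// or where it is not 8 bytes wide.
static_assert(std::numeric_limits<double>::is_iec559,
              "this code assumes IEEE 754 doubles");
static_assert(sizeof(double) == 8,
              "this code assumes 64-bit doubles");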
I think the portability issue only arises when you write a file on one machine and read it on a different one. Since you said you want to use a file because of RAM limitations, I assume you will read and write on the same machine, so this should work fine.
Usually I access the bytes with a union; see the sketch below.
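A minimal sketch of the union approach (note: reading a union member other than the one last written is technically undefined behavior in standard C++, although major compilers support it; std::memcpy or the stream write shown above is the strictly conforming alternative):

#include <cstdio>

union DoubleBytes {
    double        value;
    unsigned char bytes[sizeof(double)];
};

int main()
{
    DoubleBytes db;
    db.value = 3.14;

    // Write the object representation of the double byte by byte.
    std::FILE* f = std::fopen("atest.bin", "wb");
    if (f) {
        std::fwrite(db.bytes, 1, sizeof db.bytes, f);
        std::fclose(f);
    }
    return 0;
}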
There are different definitions/levels of portability. If all you ever do is write these files on one machine and read them back on the same one, the only portability you need to worry about is whether this code is well-defined. (It is.)
If you want the files to be portable across several different platforms, you need to write string values rather than binary ones; a sketch follows.
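A minimal sketch of text serialization, assuming C++11 or later; max_digits10 is the number of decimal digits guaranteed to round-trip any double exactly:

#include <fstream>
#include <iomanip>
#include <iostream>
#include <limits>

int main()
{
    {
        std::ofstream ofs("atest.txt");
        // max_digits10 decimal digits are enough to recover the
        // exact same double on read-back.
        ofs << std::setprecision(std::numeric_limits<double>::max_digits10)
            << 3.14 << '\n';
    }

    std::ifstream ifs("atest.txt");
    double d;
    if (ifs >> d)
        std::cout << d << '\n';
    return 0;
}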
However, note that the code you have lacks proper error handling: it checks whether the streams opened, but not whether the write and read operations actually succeeded.
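A sketch of the same round trip with those checks added (file name kept from the original; the error messages are placeholders):

#include <fstream>
#include <iostream>

int main()
{
    double pi = 3.14;

    std::ofstream ofs("atest.txt", std::ios::binary);
    // write() returns the stream, so this catches both a failed
    // open and a failed write.
    if (!ofs.write(reinterpret_cast<char*>(&pi), sizeof pi)) {
        std::cerr << "write failed\n";
        return 1;
    }
    ofs.close();

    std::ifstream ifs("atest.txt", std::ios::binary);
    double read;
    if (!ifs.read(reinterpret_cast<char*>(&read), sizeof read)) {
        std::cerr << "read failed\n";
        return 1;
    }
    std::cout << read << '\n';
    return 0;
}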