I'm doing something really simple: slurping an entire text file from disk into a std::string. My current code basically does this:
std::ifstream f(filename);
return std::string(std::istreambuf_iterator<char>(f), std::istreambuf_iterator<char>());
It's very unlikely that this will ever have any kind of performance impact on the program, but I still got curious whether this is a slow way of doing it.
Is there a risk that the construction of the string will involve a lot of reallocations? Would it be better (that is, faster) to use seekg()/tellg() to calculate the size of the file and reserve() that much space in the string before doing the reading?
I benchmarked your implementation (1), mine (2), and two others (3 and 4) that I found on Stack Overflow.
Results (average of 100 runs, timed using gettimeofday(); the file was 40 paragraphs of lorem ipsum):
- readFile1: 764
- readFile2: 104
- readFile3: 129
- readFile4: 402
The implementations:
string readFile1(const string &fileName)
{
    ifstream f(fileName.c_str());
    return string(std::istreambuf_iterator<char>(f),
                  std::istreambuf_iterator<char>());
}

string readFile2(const string &fileName)
{
    ifstream ifs(fileName.c_str(), ios::in | ios::binary | ios::ate);
    ifstream::pos_type fileSize = ifs.tellg();
    ifs.seekg(0, ios::beg);

    vector<char> bytes(fileSize);
    ifs.read(&bytes[0], fileSize);

    return string(&bytes[0], fileSize);
}

string readFile3(const string &fileName)
{
    string data;
    ifstream in(fileName.c_str());
    getline(in, data, string::traits_type::to_char_type(
                          string::traits_type::eof()));
    return data;
}

string readFile4(const std::string& filename)
{
    ifstream file(filename.c_str(), ios::in | ios::binary | ios::ate);

    string data;
    data.reserve(file.tellg());
    file.seekg(0, ios::beg);
    data.append(istreambuf_iterator<char>(file.rdbuf()),
                istreambuf_iterator<char>());
    return data;
}
What happens to the performance if you try doing that? Instead of asking "which way is faster?", think "hey, I can measure this." Set up a loop that reads a file of a given size 10,000 times or so, and time it. Then do the same with the reserve() method and time that. Try it with a few different file sizes (from small to enormous) and see what you get.
To be honest I am not certain, but from what I have read it really depends on the iterators. Iterators obtained from file streams give the string no built-in way to measure the length of the file between the begin and the end iterator, so it cannot size its buffer up front.
If this is correct, it will operate by something like doubling its internal storage size every time it runs out of space. In that case, for n characters in the file there will be roughly log2(n) memory allocations and deallocations, and (since each reallocation copies the contents accumulated so far) on the order of n extra character copies in total, on top of just copying the characters into the string.
As Greg pointed out, though, you might as well test it. As he said, try it for a variety of file sizes with both techniques. Additionally, you can use the following to get some quantitative timings.
#include <ctime>
#include <iostream>
...
clock_t time1 = 0, time2 = 0, delta;
float seconds;

time1 = clock();
// Put the code to be timed here
time2 = clock();

delta = time2 - time1;
seconds = ((float)delta) / ((float)CLOCKS_PER_SEC);
std::cout << "The operation took: " << seconds << " seconds." << std::endl;
...
This should do the trick for the timing.