I use Boost serialization this way:
```cpp
#include <boost/archive/binary_iarchive.hpp>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/iostreams/device/array.hpp>
#include <boost/iostreams/stream.hpp>
#include <cstring>
#include <iostream>

Header H(__Magic, SSP_T_REQUEST, 98, 72, 42, Date(), SSP_C_NONE);
Header Z;
std::cout << H << std::endl;
std::cout << std::endl;

// Serialize H into a fixed-size buffer.
char serial_str[4096];
std::memset(serial_str, 0, 4096);
boost::iostreams::basic_array_sink<char> inserter(serial_str, 4096);
boost::iostreams::stream<boost::iostreams::basic_array_sink<char>> s(inserter);
boost::archive::binary_oarchive oa(s);
oa & H;
s.flush();
std::cout << serial_str << std::endl;

// Deserialize it back into Z.
boost::iostreams::basic_array_source<char> device(serial_str, 4096);
boost::iostreams::stream<boost::iostreams::basic_array_source<char>> s2(device);
boost::archive::binary_iarchive ia(s2);
ia >> Z;
std::cout << Z << std::endl;
```
And it works perfectly fine.
Nevertheless, I need to send those packets over a socket. My problem is: how do I know, on the other side, how many bytes I need to read? The size of the serialized result is not constant, and it is in fact larger than the sizeof of my struct.
How can I be sure that the data is complete on the other side? I use a circular buffer, but how do I handle that with serialization?
Thx all
In general it's impossible to predict. It depends (a lot) on the archive format. With object tracking, complete subgraphs might be elided, and with dynamic type information a lot of data could be added.
If you can afford scratch buffers for serialized data, you can serialize to a buffer first, and then send the size (now that you know it) before sending the payload.
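Here's a minimal sketch of that approach (the frame_message helper and the 4-byte big-endian length header are my own conventions for the example, not anything prescribed by Boost):

```cpp
#include <boost/archive/binary_oarchive.hpp>
#include <boost/iostreams/device/back_inserter.hpp>
#include <boost/iostreams/stream.hpp>

#include <cstdint>
#include <string>

// Serialize `value` into a growing buffer, then prepend the payload size as
// a fixed-width header so the receiver knows exactly how many bytes to read.
template <typename T>
std::string frame_message(const T& value) {
    std::string payload;
    boost::iostreams::stream<boost::iostreams::back_insert_device<std::string>>
        os(payload);
    boost::archive::binary_oarchive oa(os);
    oa << value;
    os.flush();

    std::uint32_t n = static_cast<std::uint32_t>(payload.size());
    char header[4] = { char(n >> 24), char(n >> 16), char(n >> 8), char(n) };
    return std::string(header, sizeof(header)) + payload;
}
```

On the receiving side, read exactly 4 bytes, decode the length, and keep reading until that many payload bytes have accumulated in your circular buffer before feeding them to a binary_iarchive.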
There will be overhead for:
- object tracking (serializing through pointers/references)
- dynamic polymorphism (serializing through (smart) pointer-to-base)
- versioning (unless you disable it for the types involved)
- archive header (unless disabled)
- code conversion (unless disabled)
Here are some answers that give you more information about these tweak points:
- Boost C++ Serialization overhead
- Boost Serialization Binary Archive giving incorrect output
- Boost Serialization of vector<char>
- Tune things (boost::archive::no_codecvt, boost::archive::no_header, disable tracking etc.); a sketch of these tweaks follows below
If all your data is POD, it's easy to predict the size.
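For illustration, here is a sketch of those tweaks applied to the Header type from the question (assuming it really is such a plain struct; s is the output stream from the question's snippet):

```cpp
#include <boost/archive/binary_oarchive.hpp>
#include <boost/serialization/level.hpp>
#include <boost/serialization/tracking.hpp>

// No per-class version number and no object tracking for Header:
BOOST_CLASS_IMPLEMENTATION(Header, boost::serialization::object_serializable)
BOOST_CLASS_TRACKING(Header, boost::serialization::track_never)

// ... and no archive header or code conversion on the archive itself:
boost::archive::binary_oarchive oa(
    s, boost::archive::no_header | boost::archive::no_codecvt);
```

With all of these disabled, a POD Header serializes to the same number of bytes every time.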
Out of the box
If you're doing IPC on the same machine, and you're already using circular buffers, consider putting the circular buffer into shared memory.
I have lots of answers (search for managed_shared_memory or managed_mapped_file) with examples of this.
A concrete example, focusing on a lock-free single-producer/single-consumer scenario, is here: Shared-memory IPC synchronization (lock-free)
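To give a flavor of the direction, a minimal sketch of a shared-memory queue (the segment name "ssp_demo" and the sizes are arbitrary choices for the example; a real circular buffer would add synchronization on top):

```cpp
#include <boost/interprocess/allocators/allocator.hpp>
#include <boost/interprocess/containers/deque.hpp>
#include <boost/interprocess/managed_shared_memory.hpp>

namespace bip = boost::interprocess;

int main() {
    // Producer and consumer both open (or create) the same named segment.
    bip::managed_shared_memory segment(bip::open_or_create, "ssp_demo", 64 * 1024);

    // A deque that lives entirely inside the shared segment.
    using shm_alloc = bip::allocator<char, bip::managed_shared_memory::segment_manager>;
    using shm_deque = bip::deque<char, shm_alloc>;

    shm_deque* queue = segment.find_or_construct<shm_deque>("queue")(
        segment.get_segment_manager());
    queue->push_back('x'); // visible to the other process immediately
}
```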
Even if you choose to, or need to, stream messages (e.g. over the network), you can still employ e.g. Managed External Buffers. That way you avoid the need to do any serialization, even without requiring all data to be POD. (The trick is that internally offset_ptr<> is used instead of raw pointers, making all references relative.)
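A minimal sketch of that idea (the buffer size and the object name "data" are arbitrary for the example):

```cpp
#include <boost/interprocess/allocators/allocator.hpp>
#include <boost/interprocess/containers/vector.hpp>
#include <boost/interprocess/managed_external_buffer.hpp>

#include <cstddef>
#include <iostream>

namespace bip = boost::interprocess;

using shm_alloc  = bip::allocator<char, bip::managed_external_buffer::segment_manager>;
using shm_vector = bip::vector<char, shm_alloc>;

int main() {
    alignas(std::max_align_t) char raw[4096];

    // Build the data structure directly inside the raw buffer.
    bip::managed_external_buffer segment(bip::create_only, raw, sizeof(raw));
    shm_vector* v = segment.construct<shm_vector>("data")(segment.get_segment_manager());
    v->assign({'h', 'i'});

    // `raw` can now be sent over the wire verbatim: all internal pointers are
    // offset_ptr<>, so the receiving side just opens the bytes in place.
    bip::managed_external_buffer reader(bip::open_only, raw, sizeof(raw));
    shm_vector* w = reader.find<shm_vector>("data").first;
    std::cout << w->size() << "\n"; // prints 2
}
```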
Create your own stream buffer class and override the xsputn method.
```cpp
#include <cstddef>
#include <streambuf>

// A stream buffer that discards the bytes written and only counts them.
class counter_streambuf : public std::streambuf {
public:
    using std::streambuf::streambuf;
    size_t size() const { return m_size; }
protected:
    std::streamsize xsputn(const char_type* s, std::streamsize n) override {
        m_size += n;
        return n;
    }
private:
    size_t m_size = 0;
};
```
Usage:
```cpp
Header H(__Magic, SSP_T_REQUEST, 98, 72, 42, Date(), SSP_C_NONE);

counter_streambuf csb;
boost::archive::binary_oarchive oa(csb, boost::archive::no_header);
oa & H;

std::cout << "Size: " << csb.size() << "\n";
```
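This gives you the exact payload size without allocating a scratch buffer; you can then send the size first and serialize a second time straight to the socket.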