How to boost::serialize into a sqlite::blob?

Asked 2019-02-10 17:06

I am working on a scientific project which requires several programming capabilities. After looking around for available tools I decided to work with the Boost library, which provides features the C++ standard library does not, such as date/time management.

My project is a set of command-line tools which process a bunch of data from an old, homemade, plain-text file-based database: import, conversion, analysis, reporting.

Now I have reached the point where I need persistence, so I turned to boost::serialization, which I found really useful. I am able to store and restore 'medium' datasets (not-so-big but not-so-small), roughly (7000, 48, 15, 10) in size.

I also use the SQLite C API to store and manage command defaults, output settings and variable meta-information (units, scale, limits).

Something crossed my mind: serialize into a blob field instead of separate files. There might be some drawback I haven't seen yet (there always is), but I think it could be a good solution that suits my needs.

I am able to text-serialize into a std::string, so I could do it that way: there is no difficulty because it only uses normal characters. But I would like to binary-serialize into a blob.
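
For reference, the text route currently looks roughly like this (a sketch only; MyDataset is just a stand-in for my real types):

#include <sstream>
#include <string>
#include <boost/archive/text_oarchive.hpp>

// Rough sketch of what I do today; MyDataset is a placeholder for my actual types.
template <typename MyDataset>
std::string serialize_to_text(MyDataset const& data)
{
    std::ostringstream oss;
    boost::archive::text_oarchive oa(oss);
    oa << data;          // plain text, so the result can be bound with sqlite3_bind_text()
    return oss.str();
}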

How should I proceed in order to use a standard stream when filling my INSERT query?

1 Answer

Answered by 倾城 Initia · 2019-02-10 18:07

Hah. I've never used the sqlite3 C API before, and I've never written an output streambuf implementation. But seeing how I will probably be using sqlite3 in a C++ codebase in the future, I thought I'd spend some time with it.

So it turns out you can open a blob field for incremental I/O. However, while you can read/write the BLOB, you cannot change its size (except via a separate UPDATE statement).
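
(Side note, untested: if the blob ever does need to grow, that separate UPDATE could use the zeroblob() SQL function to re-size it before reopening it for incremental I/O. Note that this discards the existing contents. Reusing the DEMO table from below:)

#include <sqlite3.h>

// Untested sketch: resize the blob to `new_size` bytes; existing contents are discarded.
void ResizeBlob(sqlite3* db, sqlite3_int64 rowid, int new_size)
{
    sqlite3_stmt* stmt = NULL;
    sqlite3_prepare_v2(db, "UPDATE DEMO SET FILE = zeroblob(?) WHERE ID = ?", -1, &stmt, NULL);
    sqlite3_bind_int(stmt, 1, new_size);
    sqlite3_bind_int64(stmt, 2, rowid);
    sqlite3_step(stmt);
    sqlite3_finalize(stmt);
}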

So, the steps for my demonstration became:

  1. insert a record into a table, binding a "zero-blob" of a certain (fixed) size
  2. open the blob field in the newly inserted record
  3. wrap the blob handle in a custom blob_buf object that derives from std::basic_streambuf<> and can be used with std::ostream to write to that blob
  4. serialize some data into the ostream
  5. flush
  6. destruct/cleanup

It works :)

The code in main:

int main()
{
    sqlite3 *db = NULL;
    int rc = sqlite3_open_v2("test.sqlite3", &db, SQLITE_OPEN_READWRITE, NULL);
    if (rc != SQLITE_OK) {
        std::cerr << "database open failed: " << sqlite3_errmsg(db) << "\n";
        exit(255);
    }

    // 1. insert a record into a table, binding a "zero-blob" of a certain (fixed) size
    sqlite3_int64 inserted = InsertRecord(db);

    {
        // 2. open the blob field in the newly inserted record
        // 3. wrap the blob handle in a custom `blob_buf` object that derives from `std::basic_streambuf<>` and can be used with `std::ostream` to write to that blob
        blob_buf buf(OpenBlobByRowId(db, inserted));
        std::ostream writer(&buf); // this stream now writes to the blob!

        // 4. serialize some data into the `ostream`
        auto payload = CanBeSerialized { "hello world", { 1, 2, 3.4, 1e7, -42.42 } };

        boost::archive::text_oarchive oa(writer);
        oa << payload;

#if 0   // used for testing with larger data
        std::ifstream ifs("test.cpp");
        writer << ifs.rdbuf();
#endif

        // 5. flush
        writer.flush();

        // 6. destruct/cleanup 
    }

    sqlite3_close(db);
    // ==7653== HEAP SUMMARY:
    // ==7653==     in use at exit: 0 bytes in 0 blocks
    // ==7653==   total heap usage: 227 allocs, 227 frees, 123,540 bytes allocated
    // ==7653== 
    // ==7653== All heap blocks were freed -- no leaks are possible
}

You'll recognize the steps outlined.

To test it, assume you create a new sqlite database:

sqlite3 test.sqlite3 <<< "CREATE TABLE DEMO(ID INTEGER PRIMARY KEY AUTOINCREMENT, FILE BLOB);"

Now, once you have run the program, you can query for it:

sqlite3 test.sqlite3 <<< "SELECT * FROM DEMO;"
1|22 serialization::archive 10 0 0 11 hello world 5 0 1 2 3.3999999999999999 10000000 -42.420000000000002
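
Reading it back isn't part of the demo, but a rough, untested sketch could slurp the whole blob into a string and hand it to a text_iarchive (this reuses OpenBlobByRowId and CanBeSerialized from the full program below):

#include <sstream>
#include <boost/archive/text_iarchive.hpp>

// Untested sketch: read the blob back and deserialize into `out`.
void LoadRecord(sqlite3* db, sqlite3_int64 rowid, CanBeSerialized& out)
{
    sqlite3_blob* blob = OpenBlobByRowId(db, rowid);
    std::string raw(sqlite3_blob_bytes(blob), '\0');
    sqlite3_blob_read(blob, &raw[0], static_cast<int>(raw.size()), 0);
    sqlite3_blob_close(blob);

    std::istringstream iss(raw);            // trailing zero-padding is simply left unread
    boost::archive::text_iarchive ia(iss);
    ia >> out;
}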

If you enable the test code (which writes more data than MAX_BLOB_SIZE allows) you'll see the blob getting truncated:

contents truncated at 256 bytes
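
Of course, the fixed zeroblob size is the weak spot of this demo. If the serialized data fits in memory anyway, a simpler (untested) alternative is to serialize into a std::ostringstream first and bind the bytes directly with sqlite3_bind_blob, so the blob is created at exactly the right size and no incremental I/O is needed:

#include <sstream>
#include <boost/archive/text_oarchive.hpp>

// Untested sketch: serialize to memory first, then insert the blob at its exact size.
void InsertSerialized(sqlite3* db, CanBeSerialized const& payload)
{
    std::ostringstream oss;
    boost::archive::text_oarchive oa(oss);
    oa << payload;
    std::string bytes = oss.str();

    sqlite3_stmt* stmt = NULL;
    sqlite3_prepare_v2(db, "INSERT INTO DEMO(ID, FILE) VALUES(NULL, ?)", -1, &stmt, NULL);
    sqlite3_bind_blob(stmt, 1, bytes.data(), static_cast<int>(bytes.size()), SQLITE_TRANSIENT);
    sqlite3_step(stmt);
    sqlite3_finalize(stmt);
}

The trade-off is that the whole archive lives in memory; the streambuf approach keeps it streaming.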

Full Program

#include <sqlite3.h>
#include <cstdlib>   // exit()
#include <iterator>  // std::distance
#include <string>
#include <iostream>
#include <ostream>
#include <fstream>
#include <boost/serialization/string.hpp> // for the std::string member
#include <boost/serialization/vector.hpp>
#include <boost/archive/text_oarchive.hpp>

template<typename CharT, typename TraitsT = std::char_traits<CharT> >
class basic_blob_buf : public std::basic_streambuf<CharT, TraitsT> 
{
    sqlite3_blob* _blob; // owned
    int max_blob_size;

    typedef std::basic_streambuf<CharT, TraitsT> base_type;
    enum { BUFSIZE = 10 }; // Block size - tuning?
    CharT buf[BUFSIZE+1/*for the overflow character*/]; // CharT, so the template also compiles for other character types

    size_t cur_offset;
    std::ostream debug;

    // no copying
    basic_blob_buf(basic_blob_buf const&)             = delete;
    basic_blob_buf& operator= (basic_blob_buf const&) = delete;
public:
    basic_blob_buf(sqlite3_blob* blob, int max_size = -1) 
        : _blob(blob), 
        max_blob_size(max_size), 
        buf {0}, 
        cur_offset(0),
        // debug(std::cerr.rdbuf()) // or just use `nullptr` to suppress debug output
        debug(nullptr)
    {
        debug.setf(std::ios::unitbuf);
        if (max_blob_size == -1) {
            max_blob_size = sqlite3_blob_bytes(_blob);
            debug << "max_blob_size detected: " << max_blob_size << "\n";
        }
        this->setp(buf, buf + BUFSIZE);
    }

    int overflow (int c = base_type::traits_type::eof())
    {
        auto putpointer = this->pptr();
        if (c!=base_type::traits_type::eof())
        {
            // add the character - even though pptr might be epptr
            *putpointer++ = c;
        }

        if (cur_offset >= size_t(max_blob_size))
            return base_type::traits_type::eof(); // signal failure

        size_t n = std::distance(this->pbase(), putpointer);
        debug << "Overflow " << n << " bytes at " << cur_offset << "\n";
        if (cur_offset+n > size_t(max_blob_size))
        {
            std::cerr << "contents truncated at " << max_blob_size << " bytes\n";
            n = size_t(max_blob_size) - cur_offset;
        }

        if (SQLITE_OK != sqlite3_blob_write(_blob, this->pbase(), n, cur_offset))
        {
            debug << "sqlite3_blob_write reported an error\n";
            return base_type::traits_type::eof(); // signal failure
        }

        cur_offset += n;

        if (this->pptr() > (this->pbase() + n))
        {
            debug << "pending data has not been written";
            return base_type::traits_type::eof(); // signal failure
        }

        // reset buffer
        this->setp(buf, buf + BUFSIZE);

        return base_type::traits_type::not_eof(c);
    }

    int sync()
    {
        // flush pending data; pubsync() expects 0 on success and -1 on failure
        return overflow() == base_type::traits_type::eof() ? -1 : 0;
    }

    ~basic_blob_buf() { 
        sqlite3_blob_close(_blob);
    }
};

typedef basic_blob_buf<char> blob_buf;

struct CanBeSerialized
{
    std::string sometext;
    std::vector<double> a_vector;

    template<class Archive>
    void serialize(Archive & ar, const unsigned int version)
    {
        ar & boost::serialization::make_nvp("sometext", sometext);
        ar & boost::serialization::make_nvp("a_vector", a_vector);
    }
};

#define MAX_BLOB_SIZE 256

sqlite3_int64 InsertRecord(sqlite3* db)
{
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db, "INSERT INTO DEMO(ID, FILE) VALUES(NULL, ?)", -1, &stmt, NULL);

    if (rc != SQLITE_OK) {
        std::cerr << "prepare failed: " << sqlite3_errmsg(db) << "\n";
        exit(255);
    } else {
        rc = sqlite3_bind_zeroblob(stmt, 1, MAX_BLOB_SIZE);
        if (rc != SQLITE_OK) {
            std::cerr << "bind_zeroblob failed: " << sqlite3_errmsg(db) << "\n";
            exit(255);
        }
        rc = sqlite3_step(stmt);
        if (rc != SQLITE_DONE)
        {
            std::cerr << "execution failed: " << sqlite3_errmsg(db) << "\n";
            exit(255);
        }
    }
    rc = sqlite3_finalize(stmt);
    if (rc != SQLITE_OK)
    {
        std::cerr << "finalize stmt failed: " << sqlite3_errmsg(db) << "\n";
        exit(255);
    }

    return sqlite3_last_insert_rowid(db);
}

sqlite3_blob* OpenBlobByRowId(sqlite3* db, sqlite3_int64 rowid)
{
    sqlite3_blob* pBlob = NULL;
    int rc = sqlite3_blob_open(db, "main", "DEMO", "FILE", rowid, 1/*rw*/, &pBlob);

    if (rc != SQLITE_OK) {
        std::cerr << "blob_open failed: " << sqlite3_errmsg(db) << "\n";
        exit(255);
    }
    return pBlob;
}

int main()
{
    sqlite3 *db = NULL;
    int rc = sqlite3_open_v2("test.sqlite3", &db, SQLITE_OPEN_READWRITE, NULL);
    if (rc != SQLITE_OK) {
        std::cerr << "database open failed: " << sqlite3_errmsg(db) << "\n";
        exit(255);
    }

    // 1. insert a record into a table, binding a "zero-blob" of a certain (fixed) size
    sqlite3_int64 inserted = InsertRecord(db);

    {
        // 2. open the blob field in the newly inserted record
        // 3. wrap the blob handle in a custom `blob_buf` object that derives from `std::basic_streambuf<>` and can be used with `std::ostream` to write to that blob
        blob_buf buf(OpenBlobByRowId(db, inserted));
        std::ostream writer(&buf); // this stream now writes to the blob!

        // 4. serialize some data into the `ostream`
        auto payload = CanBeSerialized { "hello world", { 1, 2, 3.4, 1e7, -42.42 } };

        boost::archive::text_oarchive oa(writer);
        oa << payload;

#if 0   // used for testing with larger data
        std::ifstream ifs("test.cpp");
        writer << ifs.rdbuf();
#endif

        // 5. flush
        writer.flush();

        // 6. destruct/cleanup 
    }

    sqlite3_close(db);
}
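
Since the question actually asks for binary serialization: the same blob_buf/ostream combination should work with a binary archive as well; as far as I can tell it is just a matter of swapping the archive type (untested, and keep in mind that Boost binary archives are not portable across platforms):

#include <ostream>
#include <boost/archive/binary_oarchive.hpp>

// Untested sketch: same streambuf-backed ostream, binary archive instead of text.
// `CanBeSerialized` is the struct from the full program above.
void WriteBinary(std::ostream& writer, CanBeSerialized const& payload)
{
    boost::archive::binary_oarchive oa(writer);
    oa << payload;
    // caller should still flush `writer` afterwards, as main() does
}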

PS. I've kept error handling... very crude. You'll want to introduce a helper function that checks sqlite3 error codes and translates them into exceptions, maybe. :)
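
Such a helper could be as small as this (sketch; check_rc is just a name I made up):

#include <sqlite3.h>
#include <stdexcept>
#include <string>

// Sketch: turn an unexpected sqlite3 return code into an exception.
inline void check_rc(sqlite3* db, int rc, int expected = SQLITE_OK)
{
    if (rc != expected)
        throw std::runtime_error(std::string("sqlite3 error: ") + sqlite3_errmsg(db));
}

// usage: check_rc(db, sqlite3_bind_zeroblob(stmt, 1, MAX_BLOB_SIZE));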
