Select a set of values from a tuple with a run-time index

Posted 2019-07-24 19:31

Question:

Short introduction to my question: I'm trying to implement a "sort of" relational database using STL containers. This is just for fun/educational purposes, so no need for answers like "use this library", "this is absolutely useless" and so on. I know the title is a little confusing at this point, but we will get there (suggestions for improving the title are really welcome).

I proceeded in little steps:

  1. I can build a table as a vector of maps from column names to their values => std::vector<std::map<std::string, some_variant>>. It's simple and it represents what I need.
  2. Wait, I can just store the column names once and access values by their index => std::vector<std::vector<some_variant>>. As simple as point 1, but faster.
  3. Wait wait, in a database a table is literally a sequence of tuples => std::vector<std::tuple<args...>>. This is cool: it represents exactly what I'm doing, with correct types instead of a variant, and it's even faster than the others.

Note: "faster than" was measured for 1,000,000 records with a simple loop like this:

#include <iostream>
#include <map>
#include <random>
#include <string>
#include <variant>
#include <vector>

std::random_device dev;
std::mt19937 gen(dev());
std::uniform_int_distribution<long> rand1_1000(1, 1000);
std::uniform_real_distribution<double> rand1_10(1.0, 10.0);

void fill_1()
{
    using my_variant = std::variant<long, long long, double, std::string>;
    using values = std::map<std::string, my_variant>;
    using table = std::vector<values>;

    table t;
    for (int i = 0; i < 1000000; ++i)
        t.push_back({ {"col_1", rand1_1000(gen)}, {"col_2", rand1_1000(gen)}, {"col_3", rand1_10(gen)} });
    std::cout << "size:" << t.size() << "\n";//just to prevent optimization
}
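For comparison, the tuple-based layout of point 3 can be filled the same way; a minimal sketch (the fill_3 name, the parameterized count and the returned table are my own adaptation, not code from the question):

```cpp
#include <random>
#include <tuple>
#include <vector>

using row = std::tuple<long, long, double>;

// Solution 3: one contiguous vector of fixed-type rows, with no
// per-row heap allocation and no per-value discriminant.
std::vector<row> fill_3(std::size_t n = 1000000)
{
    std::mt19937 gen(std::random_device{}());
    std::uniform_int_distribution<long> rand1_1000(1, 1000);
    std::uniform_real_distribution<double> rand1_10(1.0, 10.0);

    std::vector<row> t;
    t.reserve(n); // one allocation up front instead of repeated growth
    for (std::size_t i = 0; i < n; ++i)
        t.emplace_back(rand1_1000(gen), rand1_1000(gen), rand1_10(gen));
    return t;
}
```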
  1. 2234101600 ns - avg: 2234 ns per record

  2. 446344100 ns - avg: 446 ns per record

  3. 132075400 ns - avg: 132 ns per record

INSERT: no problem with any of these solutions; inserts are as simple as pushing back elements, as in the example.

SELECT: 1 and 2 are simple, but 3 is tricky.

So, finally, questions:

  1. Memory usage: there is a lot of overhead using solutions 1 and 2 in terms of memory, so 3 seems to be the right choice here as well. For the example with 1 million records of 2 longs and a double I was expecting something near 4 MB*2 for the longs and 8 MB for the doubles, plus some overhead for the vectors, maps and variants where used. Instead we have (measured with the Windows task manager, not extremely accurate, I know):

    1. 340 MB

    2. 120 MB

    3. 31 MB

    Am I missing something, other than reserving the right size in advance or calling shrink_to_fit after the insert loop?
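Part of the gap between the expected and measured footprint comes from vector growth itself: push_back grows capacity geometrically, so after the loop capacity usually exceeds size. A small sketch of the two mitigations mentioned above (the function name is mine, for illustration):

```cpp
#include <tuple>
#include <utility>
#include <vector>

// Returns (capacity after plain push_backs, capacity after shrink_to_fit).
// shrink_to_fit is a non-binding request to release the excess capacity.
std::pair<std::size_t, std::size_t> capacity_before_after(std::size_t n)
{
    std::vector<std::tuple<long, long, double>> t;
    for (std::size_t i = 0; i < n; ++i)
        t.emplace_back(0L, 0L, 0.0); // no reserve: capacity overshoots
    std::size_t before = t.capacity();
    t.shrink_to_fit();
    return {before, t.capacity()};
}
```

Calling reserve(n) before the loop avoids the overshoot (and the intermediate reallocations) in the first place.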

  2. Is there a way to retrieve some tuple fields at run time, as in the case of a select statement?

using my_tuple = std::tuple<long, long, std::string, double>;
std::vector<my_tuple> table;
int to_select; // this could be a vector of columns to select, obviously
std::cin >> to_select;
auto result = select(table, to_select);

Do you see any way to implement this last line? As far as I can see there are two problems: the result type has to be derived from the types of the starting tuple, and then we have to actually perform the selection of the desired fields.

I read a lot of answers about this; they all talk about contiguous indexes using make_index_sequence or a compile-time known index. I also found this article, very interesting, but not really useful for this case.

Answer 1:

This is doable but it is strange:

template<size_t candidate, typename ...T>
constexpr std::variant<T...> helperTupleValueAt(const std::tuple<T...>& t, size_t index)
{
    if constexpr (candidate >= sizeof...(T)) {
        throw std::logic_error("out of bounds");
    } else {
        if (candidate == index) {
            return std::variant<T...>{ std::in_place_index<candidate>, std::get<candidate>(t) };
        } else {
            return helperTupleValueAt<candidate + 1>(t, index);
        }
    }
}

template<typename ...T>
std::variant<T...> tupleValueAt(const std::tuple<T...>& t, size_t index)
{
    return helperTupleValueAt<0>(t, index);
}

https://wandbox.org/permlink/FQJd4chAFVSg5eSy
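To connect this back to the question's select sketch, here is a short usage example (the print_field helper and the sample row are mine; tupleValueAt is repeated so the snippet compiles on its own):

```cpp
#include <iostream>
#include <stdexcept>
#include <string>
#include <tuple>
#include <variant>

// tupleValueAt from the answer above, repeated for a standalone example.
template<std::size_t candidate, typename ...T>
std::variant<T...> helperTupleValueAt(const std::tuple<T...>& t, std::size_t index)
{
    if constexpr (candidate >= sizeof...(T)) {
        throw std::logic_error("out of bounds");
    } else {
        if (candidate == index)
            return std::variant<T...>{ std::in_place_index<candidate>, std::get<candidate>(t) };
        return helperTupleValueAt<candidate + 1>(t, index);
    }
}

template<typename ...T>
std::variant<T...> tupleValueAt(const std::tuple<T...>& t, std::size_t index)
{
    return helperTupleValueAt<0>(t, index);
}

// Pick field `to_select` of a row and print whichever alternative
// the returned variant holds, via std::visit.
void print_field(const std::tuple<long, long, std::string, double>& row,
                 std::size_t to_select)
{
    auto value = tupleValueAt(row, to_select); // std::variant over the row's types
    std::visit([](const auto& v) { std::cout << v << "\n"; }, value);
}
```

Note that the variant carries the index of the selected field, so duplicate types in the tuple (two longs here) are not a problem.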



Answer 2:

About memory usage.

In solution 1 you have 1 std::vector and 1 million std::map: the overhead is huge.

In solution 2 you have 1 + 1 million std::vector: the overhead is huge.
Assuming a vector is roughly made of 3 pointers (data, size, capacity), those 24 bytes per row are already almost as big as the useful contents themselves (3*(max(sizeof(long), sizeof(double)) + sizeof(discriminant))), before even counting each row's separately allocated buffer.

In solution 3 you have 1 std::vector directly containing the useful data: the overhead is negligible.
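The per-row handle sizes can be inspected directly with sizeof; a quick sketch (exact numbers are implementation-dependent, so none are hard-coded; heap allocations for map nodes and inner vector buffers come on top of the first two):

```cpp
#include <iostream>
#include <map>
#include <string>
#include <tuple>
#include <variant>
#include <vector>

using my_variant = std::variant<long, long long, double, std::string>;

// Prints the size of one row object for each of the three layouts.
void print_row_sizes()
{
    std::cout << "map row:    " << sizeof(std::map<std::string, my_variant>) << "\n";
    std::cout << "vector row: " << sizeof(std::vector<my_variant>) << "\n";
    std::cout << "tuple row:  " << sizeof(std::tuple<long, long, double>) << "\n";
}
```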