Suppose that we have a std::map container and we want to make it thread safe in terms of insert, erase, search and edit operations. At the same time, we want threads to be able to work with different records in parallel (read and edit records). To do this, I made a separate class for a record, whose edit operation is protected by a mutex.
#include <map>
#include <mutex>
#include <string>

class Data
{
public:
    Data(const std::string& data) : _mutex(), _data(data) { }
    void setData(const std::string& data)
    {
        std::lock_guard<std::mutex> locker(_mutex);
        _data = data;
    }
    const std::string& getData() const { return _data; }
private:
    std::mutex _mutex;
    std::string _data;
};

class Storage
{
public:
    void insertData(size_t key, const std::string& data)
    {
        std::lock_guard<std::mutex> locker(_mutex);
        _storage[key] = data;
    }
    void eraseData(size_t key)
    {
        std::lock_guard<std::mutex> locker(_mutex);
        _storage.erase(key);
    }
    const std::string& getData(size_t key) const { return _storage[key].getData(); }
    void setData(size_t key, const std::string& data) { _storage[key].setData(data); }
private:
    std::mutex _mutex;
    std::map<size_t, Data> _storage;
};
Now suppose that one thread grabs the "local" mutex of some record in order to edit it (a Data::setData call). At the same time, another thread grabs the "global" mutex to delete that record (a Storage::eraseData call). Are there any problems with this? What other problems are possible in this code?
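For concreteness, here is a minimal, self-contained sketch of that interleaving (hypothetical driver code with simplified types, not the classes above). The editing thread looks up the record under the global mutex, releases it, then takes the record's local mutex; if the erasing thread wins the race, it destroys the record, and with it the mutex the editor still holds, which is undefined behavior.

#include <chrono>
#include <cstddef>
#include <map>
#include <memory>
#include <mutex>
#include <string>
#include <thread>

struct Record
{
    std::mutex mutex;   // the "local" mutex
    std::string data;
};

int main()
{
    std::mutex global;                                      // the "global" mutex
    std::map<std::size_t, std::unique_ptr<Record>> storage;
    storage[1] = std::make_unique<Record>();

    std::thread editor([&] {
        Record* r = nullptr;
        {
            std::lock_guard<std::mutex> locker(global);
            r = storage[1].get();                           // find the record
        }                                                   // global mutex released here
        std::lock_guard<std::mutex> locker(r->mutex);       // grab the local mutex
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        r->data = "edited";                                 // the record may already be destroyed
    });

    std::thread eraser([&] {
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
        std::lock_guard<std::mutex> locker(global);
        storage.erase(1);                                   // destroys the Record and the mutex the editor holds
    });

    editor.join();
    eraser.join();
}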
Solve your concurrency problems first. This is a C++14 solution, because the C++11 version is much more verbose, and we don't have all the locking primitives we want:
#include <mutex>
#include <shared_mutex>
#include <utility>

template<class T>
struct thread_safe {
  // Shared (read) access: the lambda gets a T const&.
  template<class F>
  auto read( F&& f ) const {
    std::shared_lock<decltype(mutex)> lock(mutex);
    return std::forward<F>(f)(t);
  }
  // Exclusive (write) access: the lambda gets a T&.
  template<class F>
  auto write( F&& f ) {
    std::unique_lock<decltype(mutex)> lock(mutex);
    return std::forward<F>(f)(t);
  }
  template<class O>
  thread_safe(O&&o):t(std::forward<O>(o)) {}
  thread_safe() = default;
  operator T() const {
    return read([](T const& t){return t;});
  }
  // it is really this simple: constructing t from o goes through operator T,
  // which takes o's shared lock:
  thread_safe( thread_safe const& o ):t( o ) {}
  // forward to above thread safe copy ctor:
  thread_safe( thread_safe & o ):thread_safe( const_cast<thread_safe const&>(o) ) {}
  thread_safe( thread_safe && o ):thread_safe(o) {}
  thread_safe( thread_safe const&& o ):thread_safe(o) {}
  thread_safe& operator=( thread_safe const& o ) {
    write( [&o](auto& target) {
      target = o;
    });
    return *this;
  }
  template<class O>
  thread_safe& operator=( O&& o ) {
    write([&o](auto& t){ t = std::forward<O>(o); });
    return *this;
  }
private:
  T t;
  mutable std::shared_timed_mutex mutex;
};
This is a thread-safety wrapper around an arbitrary class. We can use it directly:

thread_safe< std::map< size_t, thread_safe<std::string> > > my_map;

Here we have our two-level thread-safe map. Example use, setting entry 33 to "hello":

my_map.write( [&](auto&& m) {
    m[33] = "hello";
} );
This gives many-readers, single-writer locking on each element and on the map as a whole. Returning an iterator from a read or write call is not safe.
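If you need the result of a lookup, copy it out while the lock is held instead of keeping an iterator or reference into the map (a small illustration of my own, reusing the my_map object above):

// Unsafe: returning m.find(33) would hand back an iterator that can be
// invalidated as soon as the shared lock is released.
// Safer: copy the information you need out of the lambda by value.
bool present = my_map.read( [](auto const& m) {
    return m.count(33) > 0;
} );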
Naturally you should test and audit the above code. I didn't.
The core idea is pretty simple. To read, you have to .read the thread-safe object. The lambda you pass in gets a const& to the underlying data. On std:: types, const access is guaranteed to be multi-reader safe.

To write, you must .write. This takes an exclusive lock, blocking out other .reads (and .writes). The lambda here gets a non-const & to the underlying data.
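For instance (my own illustration, continuing with the my_map object above):

// Read: the lambda sees the map as const&, and read() returns whatever
// the lambda returns (here, the number of entries).
std::size_t entries = my_map.read( [](auto const& m) { return m.size(); } );

// Write: the lambda sees the map as a non-const reference and may mutate it.
my_map.write( [](auto& m) { m.erase(42); } );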
I added operator T, operator=, and copy construction to make the type more regular. The cost of this is that you can accidentally generate a lot of lock/unlock behavior. The advantage is that m[33] = "hello" just works, which is awesome.
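For example (again my own illustration), reading entry 33 back out goes through operator T on the inner thread_safe<std::string>, so it takes the map's shared lock for the lookup and the element's shared lock for the copy:

std::string value = my_map.read( [](auto const& m) -> std::string {
    return m.at(33);   // thread_safe<std::string> -> std::string via operator T
} );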
You have two huge problems:

1. What happens if one thread calls insertData at the same time another thread calls getData? The call to operator[] can crash, because the map is being modified while it is being read.

2. What happens if one thread calls eraseData while another thread is still using the reference it got back from getData? That reference can become dangling, causing a crash.
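One way to avoid both problems (an illustrative sketch of my own, not part of this answer, and at the cost of serializing all access through a single mutex) is to lock the map for every operation and return copies rather than references:

#include <map>
#include <mutex>
#include <string>

class SafeStorage
{
public:
    void insertData(size_t key, const std::string& data)
    {
        std::lock_guard<std::mutex> locker(_mutex);
        _storage[key] = data;
    }
    void eraseData(size_t key)
    {
        std::lock_guard<std::mutex> locker(_mutex);
        _storage.erase(key);
    }
    // Return by value: the copy is made while the lock is held, so the
    // caller never holds a reference into the map.
    std::string getData(size_t key) const
    {
        std::lock_guard<std::mutex> locker(_mutex);
        auto it = _storage.find(key);
        return it != _storage.end() ? it->second : std::string{};
    }
private:
    mutable std::mutex _mutex;
    std::map<size_t, std::string> _storage;
};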