Is boost::interprocess ready for prime time? [closed]

Posted 2020-06-03 00:55

Question:

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 7 years ago.

I was working on a thread safe queue backed by memory mapped files which utilized boost interprocess fairly heavily. I submitted it for code review and a developer with more years of experience than I have on this planet said he didn't feel that boost::interprocess was "ready for prime time" and that I should just use pthreads directly.

I think that's mostly FUD. I personally think it's beyond ridiculous to go about reimplementing things such as upgradable_named_mutex or boost::interprocess::deque, but I'm curious to know what other people think. I couldn't find any data to back up his claim, but maybe I'm just uninformed or naive. Stack Overflow, enlighten me!

Answer 1:

I attempted to use boost::interprocess for a project and came away with mixed feelings. My main misgiving is the design of boost::offset_ptr and how it handles NULL values -- in short, boost::interprocess can make diagnosing NULL pointer mistakes really painful. The issue is that a shared memory segment is mapped somewhere in the middle of the address space of your process, which means that "NULL" offset_ptrs, when dereferenced, point to a valid memory location, so your application won't segfault. This means that when your application finally does crash, it may be long after the mistake was made, making things very tricky to debug.
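Here's a toy illustration of that failure mode. This is my own sketch, not Boost's actual offset_ptr (which special-cases a sentinel offset for null); what it demonstrates is that an offset-based pointer resolves relative to its own location, so a bogus "null" offset still lands inside the mapped segment rather than at address 0:

    #include <cstddef>
    #include <cstdio>

    // Toy offset pointer: stores a distance from its own address rather
    // than an absolute address, the core idea behind offset_ptr.
    struct toy_offset_ptr {
        std::ptrdiff_t offset;  // pretend 1 is the "null" sentinel

        void* get() {
            // Resolving relative to 'this' keeps the result inside (or
            // right next to) the segment the pointer itself lives in.
            return (char*)this + offset;
        }
    };

    int main() {
        char segment[4096] = {};                       // stand-in for a mapped region
        toy_offset_ptr* p = (toy_offset_ptr*)segment;  // pointer living in "shared memory"
        p->offset = 1;                                 // the "null" encoding
        std::printf("\"null\" resolves to %p -- a valid, writable address\n", p->get());
    }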

But it gets worse. The mutexes and conditions that boost::interprocess uses internally are stored at the beginning of the segment. So if you accidentally write to some_null_offset_ptr->some_member, you will start overwriting the internal machinery of the boost::interprocess segment and get totally weird and hard-to-understand behavior. Writing code that coordinates multiple processes and deals with the possible race conditions can be tough on its own, so it was doubly maddening.

I ended up writing my own minimal shared memory library and using the POSIX mprotect system call to make the first page of my shared memory segments unreadable and unwritable, which made NULL bugs appear immediately (you waste a page of memory but such a small sacrifice is worth it unless you're on an embedded system). You could try using boost::interprocess but still manually calling mprotect, but that won't work because boost will expect it can write to that internal information it stores at the beginning of the segment.
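A minimal sketch of that guard-page trick, assuming a POSIX system; the anonymous mapping here stands in for a real shared memory segment (e.g. one obtained from shm_open):

    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstddef>
    #include <cstdio>

    int main() {
        const long page = sysconf(_SC_PAGESIZE);
        const std::size_t size = 16 * page;

        // Anonymous mapping as a stand-in for a shared memory segment.
        void* base = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                          MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED) { perror("mmap"); return 1; }

        // Revoke all access to the first page: "null-like" pointers that
        // resolve near the segment base now fault at the point of the
        // mistake instead of silently corrupting data.
        if (mprotect(base, page, PROT_NONE) != 0) { perror("mprotect"); return 1; }

        char* data = static_cast<char*>(base) + page;  // usable data starts here
        data[0] = 42;                                  // fine
        // static_cast<char*>(base)[0] = 42;           // would SIGSEGV immediately

        munmap(base, size);
        return 0;
    }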

Finally, offset_ptrs assume that you are storing pointers within a shared memory segment to other locations in the same shared memory segment. If you know you are going to have multiple shared memory segments (I knew this would be the case, because I had one writable segment and one read-only segment) which store pointers into one another, offset_ptrs get in your way and you have to do a bunch of manual conversions. In my shared memory library I made a templated SegmentPtr<i> class where SegmentPtr<0> would be pointers into one segment, SegmentPtr<1> would be pointers into another segment, etc., so that they could not be mixed up (you can only do this, though, if you know the number of segments at compile time).
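A hedged sketch of what such a tag-indexed pointer might look like; the names and layout here are illustrative guesses, not the answerer's actual library. The compile-time segment id makes pointers into different segments distinct types, so mixing them up fails to compile:

    #include <cstddef>

    // Hypothetical per-segment base addresses, filled in once the
    // segments are mapped (two segments, as in the answer's scenario).
    void* g_segment_base[2] = { nullptr, nullptr };

    template <std::size_t SegmentId>
    class SegmentPtr {
    public:
        explicit SegmentPtr(std::ptrdiff_t offset = 0) : offset_(offset) {}

        // Resolve against the base of this pointer's own segment.
        void* get() const {
            return static_cast<char*>(g_segment_base[SegmentId]) + offset_;
        }

    private:
        std::ptrdiff_t offset_;  // position within segment 'SegmentId'
    };

    // Distinct types per segment: accidental mixing is a compile error.
    void example(SegmentPtr<0> writable, SegmentPtr<1> read_only) {
        // writable = read_only;  // error: no conversion between the types
        (void)writable.get();
        (void)read_only.get();
    }

    int main() {
        static char region_a[1024], region_b[1024];
        g_segment_base[0] = region_a;
        g_segment_base[1] = region_b;
        example(SegmentPtr<0>(16), SegmentPtr<1>(16));
    }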

You need to weigh the cost of implementing everything yourself against the extra debugging time you're going to spend tracking down NULL errors and mixed-up pointers to different segments (the latter isn't necessarily an issue for you). For me it was clearly worth implementing things myself, since I wasn't making heavy use of the data structures boost::interprocess provides. If the library is allowed to be open source in the future (not up to me) I'll update with a link, but for now don't hold your breath ;p

In regards to your coworker though: I didn't experience any instability or bugs in boost::interprocess itself. I just think its design makes it harder to find bugs in your own code.



Answer 2:

We've been using boost::interprocess shared memory and its message_queue for about six months now and have found the code to be reliable, stable, and fairly easy to use.

We keep our data in fairly simple fixed-size structs (though spread across 12 regions totaling over 2 GB), and we used the boost::interprocess example code as-is with almost no problems.
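The pattern is roughly the one from the Boost examples: construct a fixed-size struct by name in a managed segment and look it up from other processes. The segment and object names below are illustrative:

    #include <boost/interprocess/managed_shared_memory.hpp>

    namespace bip = boost::interprocess;

    // Plain fixed-size data: no raw pointers, safe to place in shared memory.
    struct SensorBlock {
        double values[256];
        unsigned sequence;
    };

    int main() {
        // First process creates the segment; later ones attach to it.
        bip::managed_shared_memory segment(bip::open_or_create,
                                           "demo_segment", 1 << 20);

        // Returns the existing object, or value-initializes a new one.
        SensorBlock* block =
            segment.find_or_construct<SensorBlock>("sensor_block")();

        block->sequence++;  // trivially shared between processes
        return 0;
    }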

We did find two items to watch out for when using boost::interprocess on Windows.

  1. Review Boost Shared Memory & Windows. If you use the default shared_memory_object (from #include <boost/interprocess/shared_memory_object.hpp>), you can only increase the size of the memory-mapped region by rebooting Windows first. That is because of how Boost uses a file backing store on Windows.
  2. The message_queue class uses the default shared_memory_object, so if the maximum message size needs to be increased, it's time to reboot Windows again (see the sketch after this list for where those sizes get fixed).
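For reference, here is a minimal message_queue sketch; the queue name and capacities are illustrative. The maximum message count and size are baked in when the queue is created, which is why growing them later runs into the reboot behavior above:

    #include <boost/interprocess/ipc/message_queue.hpp>

    namespace bip = boost::interprocess;

    int main() {
        // Capacity is fixed at creation: 100 messages of up to 512 bytes.
        bip::message_queue mq(bip::open_or_create, "demo_queue",
                              100 /*max_num_msg*/, 512 /*max_msg_size*/);

        const char msg[] = "hello";
        mq.send(msg, sizeof(msg), 0 /*priority*/);

        char buffer[512];
        bip::message_queue::size_type received = 0;
        unsigned int priority = 0;
        mq.receive(buffer, sizeof(buffer), received, priority);
        return 0;
    }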

I'm not trying to say that Joseph Garvin's post about his problems with boost::interprocess was not valid; I think the differences in our experiences come from using different aspects of the library. I do agree with him that there do not appear to be any stability issues in boost::interprocess itself.