I'm a little confused as to the proper use of critical sections in multithreaded applications. In my application there are several objects (some circular buffers and a serial port object) that are shared among threads. Should access to these objects always be placed within critical sections, or only at certain times? I suspect only at certain times, because when I attempted to wrap each use in an EnterCriticalSection / LeaveCriticalSection pair, I ran into what seemed to be a deadlock condition. Any insight you may have would be appreciated. Thanks.
Use a C++ wrapper around the critical section which supports RAII:
The constructor for the lock acquires the mutex and the destructor releases the mutex even if an exception is thrown.
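For example, a minimal sketch built on the Win32 CRITICAL_SECTION API (the class names CriticalSection and ScopedLock are just illustrative, not from any particular library):

```cpp
#include <windows.h>

// Owns a CRITICAL_SECTION for its whole lifetime.
class CriticalSection {
public:
    CriticalSection()  { InitializeCriticalSection(&cs_); }
    ~CriticalSection() { DeleteCriticalSection(&cs_); }
    void Enter() { EnterCriticalSection(&cs_); }
    void Leave() { LeaveCriticalSection(&cs_); }
private:
    CRITICAL_SECTION cs_;
    CriticalSection(const CriticalSection&);             // non-copyable
    CriticalSection& operator=(const CriticalSection&);
};

// Enters the critical section on construction and leaves it on destruction,
// so the lock is released on every exit path, including exceptions.
class ScopedLock {
public:
    explicit ScopedLock(CriticalSection& cs) : cs_(cs) { cs_.Enter(); }
    ~ScopedLock() { cs_.Leave(); }
private:
    CriticalSection& cs_;
    ScopedLock(const ScopedLock&);                       // non-copyable
    ScopedLock& operator=(const ScopedLock&);
};
```

Each shared object (circular buffer, serial port) then owns a CriticalSection member, and every method that touches the shared state simply declares a ScopedLock at the top of its scope.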
Try not to gain more than one lock at a time, and try to avoid calling functions outside of your class while holding locks; this helps avoid gaining locks in different places, so you tend to get fewer possibilities for deadlocks.
If you must gain more than one lock at the same time, sort the locks by their address and gain them in order. That way multiple threads gain the same locks in the same order without co-ordination.
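A rough sketch of the address-ordering idea, again with plain Win32 calls (the helper names are made up for illustration):

```cpp
#include <windows.h>

// Enter two critical sections in a globally consistent order
// (lowest address first), so every thread agrees without coordination.
void EnterBoth(CRITICAL_SECTION* a, CRITICAL_SECTION* b)
{
    CRITICAL_SECTION* first  = (a < b) ? a : b;
    CRITICAL_SECTION* second = (a < b) ? b : a;
    EnterCriticalSection(first);
    EnterCriticalSection(second);
}

// Leave in the reverse of the acquisition order.
void LeaveBoth(CRITICAL_SECTION* a, CRITICAL_SECTION* b)
{
    CRITICAL_SECTION* first  = (a < b) ? a : b;
    CRITICAL_SECTION* second = (a < b) ? b : a;
    LeaveCriticalSection(second);
    LeaveCriticalSection(first);
}
```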
With an IO port, consider whether you need to lock output at the same time as input - often you have a case where something tries to write, then expects to read, or vice versa. If you have two locks, then you can get a deadlock if one thread writes then reads, and the other reads then writes. Often having one thread which does the IO and a queue of requests solves that, but that's a bit more complicated than just wrapping calls up with locks, and without much more detail I can't recommend it.
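Roughly, the "one IO thread plus a request queue" pattern looks like the sketch below. This is only an assumed wiring, not a drop-in design: Request, WriteToSerialPort, and the global names are placeholders, and CONDITION_VARIABLE needs Vista or later (a semaphore can stand in on older systems).

```cpp
#include <windows.h>
#include <queue>
#include <string>

struct Request { std::string data; };            // placeholder payload

static CRITICAL_SECTION    g_queueLock;
static CONDITION_VARIABLE  g_queueNotEmpty;
static std::queue<Request> g_requests;
static bool                g_stop = false;       // set under g_queueLock to shut down

void InitIoQueue()                               // call once at startup
{
    InitializeCriticalSection(&g_queueLock);
    InitializeConditionVariable(&g_queueNotEmpty);
}

void PostRequest(const Request& r)               // called from any thread
{
    EnterCriticalSection(&g_queueLock);
    g_requests.push(r);
    LeaveCriticalSection(&g_queueLock);
    WakeConditionVariable(&g_queueNotEmpty);
}

DWORD WINAPI IoThread(LPVOID)                    // the only thread touching the port
{
    for (;;)
    {
        EnterCriticalSection(&g_queueLock);
        while (g_requests.empty() && !g_stop)
            SleepConditionVariableCS(&g_queueNotEmpty, &g_queueLock, INFINITE);
        if (g_stop && g_requests.empty())
        {
            LeaveCriticalSection(&g_queueLock);
            return 0;
        }
        Request r = g_requests.front();
        g_requests.pop();
        LeaveCriticalSection(&g_queueLock);

        // WriteToSerialPort(r.data);  // hypothetical; no port lock is needed
        //                             // because only this thread uses the port
    }
}
```

The attraction is that reads and writes to the port are serialized by design, so there is no read-lock/write-lock ordering to get wrong; only the little queue needs a lock.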
If you share a resource across threads, and some of those threads read while others write, then it must always be protected.
It's hard to give any more advice without knowing more about your code, but here are some general points to keep in mind.
1) Critical sections protect resources, not processes.
2) Enter critical sections in the same order across all threads. If thread A enters Foo, then enters Bar, thread B must enter Foo and Bar in that same order. If you don't, you could create a deadlock: A holds Foo and waits for Bar while B holds Bar and waits for Foo.
3) Entering and leaving should be done in opposite order. Example: since you entered Foo then entered Bar, you should leave Bar before leaving Foo. Keeping the scopes nested like this lets RAII guards do the releasing for you and makes the locking much easier to reason about.
4) Keep locks for the shortest time period reasonably possible. If you're done with Foo before you start using Bar, release Foo before grabbing Bar. But you still have to keep the ordering rules from above in mind: in every thread that uses both Foo and Bar, you must acquire and release in the same order (see the sketch after this list).
5) If you only read 99.9% of the time and write 0.1% of the time, don't try to be clever. You still have to enter the crit sec even when you're only reading, because you don't want a write to start while you're in the middle of a read.
6) Keep the critical sections granular. Each critical section should protect one resource, not multiple resources. If you make the critical sections too "big", you could serialize your application or create a very mysterious set of deadlocks or races.
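To make points 2-4 concrete, here's a minimal sketch with two critical sections named after the Foo and Bar above (assume both were set up with InitializeCriticalSection at startup):

```cpp
#include <windows.h>

CRITICAL_SECTION csFoo;   // protects resource Foo
CRITICAL_SECTION csBar;   // protects resource Bar

// Done with Foo before Bar is needed: release Foo early, then take Bar.
void UseFooThenBar()
{
    EnterCriticalSection(&csFoo);
    // ... use Foo ...
    LeaveCriticalSection(&csFoo);

    EnterCriticalSection(&csBar);
    // ... use Bar ...
    LeaveCriticalSection(&csBar);
}

// Both needed at once: every thread acquires Foo then Bar,
// and releases Bar then Foo.
void UseBothAtOnce()
{
    EnterCriticalSection(&csFoo);
    EnterCriticalSection(&csBar);
    // ... use Foo and Bar together ...
    LeaveCriticalSection(&csBar);
    LeaveCriticalSection(&csFoo);
}
```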