What are the common causes for high CPU usage?

Posted 2019-01-22 03:57

Background:

In my application written in C++, I have created 3 threads:

  • AnalysisThread (or Producer): it reads an input file, parses it, generates patterns, and enqueues them into a std::queue¹.
  • PatternIdRequestThread (or Consumer): it dequeues patterns from the queue and sends them, one by one, to the database through a client (written in C++), which returns a pattern uid that is then assigned to the corresponding pattern.
  • ResultPersistenceThread: it does a few more things, talks to the database, and works fine as expected as far as CPU usage is concerned.

The first two threads take 60-80% of the CPU between them, about 35% each on average.

Question:

I don't understand why some threads use so much CPU.

I analyse it as follows: if it is the OS that makes decisions such as context switches, interrupts, and scheduling as to which thread should be given access to system resources, such as CPU time, then how come some threads in a process happen to use more CPU than the others? It looks as though some threads forcefully take the CPU from the OS at gunpoint, or the OS has a real soft spot for some threads and is biased towards them from the very beginning, giving them all the resources it has. Why can't it be impartial and give all of them an equal share?

I know that this is naive. But I get even more confused if I think along this line: the OS gives a thread access to the CPU based on the amount of work to be done by the thread, but how does the OS compute or predict that amount of work before executing it completely?

I wonder what the causes of high CPU usage are. How can we identify them? Is it possible to identify them just by looking at the code? What are the tools?

I'm using Visual Studio 2010.

¹ I have my doubts about std::queue as well. I know that standard containers aren't thread-safe. But if exactly one thread enqueues items to the queue, is it safe for exactly one thread to dequeue items from it? I imagine it being like a pipe: you insert data on one side and remove it on the other, so why would it be unsafe if both are done simultaneously? That is not the real question in this topic, but you can add a note in your answer addressing this.

Updates:

I realized that my consumer thread was busy-spinning, which I have fixed for now with a 3-second Sleep. This fix is temporary, and soon I will use an Event instead. But even with Sleep, the CPU usage has only dropped to 30-40%, and occasionally it goes up to 50%, which doesn't seem desirable from a usability point of view, as the system stops responding to other applications the user is currently working with.

Is there any way I can further reduce the high CPU usage? As said earlier, the producer thread (which now uses most of the CPU cycles) reads a file, parses packets (of some format) in it, and generates patterns out of them. If I use sleep there too, the CPU usage would decrease, but would that be a good idea? What are the common ways to solve this?
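For reference, the Event-based version I have in mind would look roughly like this (only a sketch; the names, the CRITICAL_SECTION protecting the queue, and the drain loop are placeholders, not my actual code):

#include <windows.h>
#include <queue>

struct Pattern { /* fields omitted; stands in for the real pattern type */ };

// Shared state (illustrative names).
static std::queue<Pattern> g_patternQueue;
static CRITICAL_SECTION    g_queueLock;      // protects g_patternQueue
static HANDLE              g_workAvailable;  // auto-reset event

// Producer: push one pattern, then wake the consumer.
void EnqueuePattern(const Pattern& p)
{
    EnterCriticalSection(&g_queueLock);
    g_patternQueue.push(p);
    LeaveCriticalSection(&g_queueLock);
    SetEvent(g_workAvailable);
}

// Consumer loop: sleeps in WaitForSingleObject (no CPU used while idle),
// then drains everything that has been queued so far.
void ConsumerLoop()
{
    for (;;) {
        WaitForSingleObject(g_workAvailable, INFINITE);
        for (;;) {
            EnterCriticalSection(&g_queueLock);
            if (g_patternQueue.empty()) {
                LeaveCriticalSection(&g_queueLock);
                break;
            }
            Pattern p = g_patternQueue.front();
            g_patternQueue.pop();
            LeaveCriticalSection(&g_queueLock);
            // ... send p to the database and assign the returned uid ...
        }
    }
}

// Somewhere during startup:
//   InitializeCriticalSection(&g_queueLock);
//   g_workAvailable = CreateEvent(NULL, FALSE, FALSE, NULL);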

8 Answers
霸刀☆藐视天下
#2 · 2019-01-22 04:23
  1. Use asynchronous (file and socket) I/O to reduce useless CPU waiting time.
  2. Use a vertical threading model to reduce context switches, if possible.
  3. Use lock-free data structures (a sketch of one follows this list).
  4. Use a profiling tool, such as VTune, to find the hot spots and optimize them.
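For point 3, a minimal sketch of a lock-free single-producer/single-consumer ring buffer might look like the following (illustrative only: the class name and fixed capacity are my own choices, and it assumes a C++11 compiler with std::atomic, which VS2010 lacks, so Boost.Atomic or Win32 interlocked operations would be needed there):

#include <atomic>
#include <cstddef>

// Exactly one thread may call push() and exactly one thread may call pop().
// One slot is kept free to distinguish "full" from "empty".
template <typename T, std::size_t Capacity>
class SpscQueue {
public:
    SpscQueue() : head_(0), tail_(0) {}

    // Returns false if the queue is full; the producer can then back off.
    bool push(const T& item) {
        const std::size_t head = head_.load(std::memory_order_relaxed);
        const std::size_t next = (head + 1) % Capacity;
        if (next == tail_.load(std::memory_order_acquire))
            return false;                              // full
        buffer_[head] = item;
        head_.store(next, std::memory_order_release);  // publish the item
        return true;
    }

    // Returns false if the queue is empty; the consumer can then wait.
    bool pop(T& item) {
        const std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;                              // empty
        item = buffer_[tail];
        tail_.store((tail + 1) % Capacity, std::memory_order_release);
        return true;
    }

private:
    T buffer_[Capacity];
    std::atomic<std::size_t> head_;  // next slot the producer writes
    std::atomic<std::size_t> tail_;  // next slot the consumer reads
};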
姐就是有狂的资本
#3 · 2019-01-22 04:30

Personally I'd be pretty annoyed if my threads had work to do, and there were idle cores on my machine because the OS wasn't giving them high CPU usage. So I don't really see that there's a problem here [Edit: turns out your busy looping is a problem, but in principle there's nothing wrong with high CPU usage].

The OS/scheduler pretty much doesn't predict the amount of work a thread will do. A thread is (to over-simplify) in one of three states:

  1. blocked waiting for something (sleep, a mutex, I/O, etc)
  2. runnable, but not currently running because other things are
  3. running.

The scheduler will select as many things to run as it has cores (or hyperthreads, whatever), and run each one either until it blocks or until an arbitrary period of time called a "timeslice" expires. Then it will schedule something else if it can.

So, if a thread spends most of its time in computation rather than blocking, and if there's a core free, then it will occupy a lot of CPU time.

There's a lot of detail in how the scheduler chooses what to run, based on things like priority. But the basic idea is that a thread with a lot to do doesn't need to be predicted as compute-heavy, it will just always be available whenever something needs scheduling, and hence will tend to get scheduled.

For your example loop, your code doesn't actually do anything, so you'd need to check how it has been optimized before judging whether 5-7% CPU makes sense. Ideally, on a two-core machine a processing-heavy thread should occupy 50% CPU. On a 4 core machine, 25%. So unless you have at least 16 cores then your result is at first sight anomalous (and if you had 16 cores, then one thread occupying 35% would be even more anomalous!). In a standard desktop OS most cores are idle most of the time, so the higher the proportion of CPU that your actual programs occupy when they run, the better.

On my machine I frequently hit one core's worth of CPU use when I run code that is mostly parsing text.

if exactly one thread enqueues items to the queue, is it safe for exactly one thread to dequeue items from it?

No, that is not safe for std::queue. std::queue is a thin wrapper on top of a sequence container (std::deque by default, or std::list); it doesn't add any thread-safety. The thread that adds items and the thread that removes items modify some data in common, for example the size field of the underlying container. You need either some synchronization, or else a safe lock-free queue structure that relies on atomic access to the common data. std::queue has neither.
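For the "some synchronization" option, a minimal sketch could be as simple as wrapping every access in a mutex (the class name is illustrative, and std::mutex is C++11; with VS2010 a Boost mutex or a Win32 CRITICAL_SECTION would play the same role):

#include <mutex>
#include <queue>

// Every access to the shared queue goes through the same mutex, so the
// producer and consumer never touch the container's internals concurrently.
template <typename T>
class GuardedQueue {
public:
    void push(const T& item) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(item);
    }

    // Returns false instead of blocking when the queue is empty.
    bool try_pop(T& item) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty())
            return false;
        item = queue_.front();
        queue_.pop();
        return true;
    }

private:
    std::mutex mutex_;
    std::queue<T> queue_;
};

On its own this still leaves the consumer polling try_pop(), so it should be combined with a condition variable or event, as described in the other answers.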

We Are One
#4 · 2019-01-22 04:30

Although the others have correctly analysed the problem already (as far as I can tell), let me try to add some more detail to the proposed solutions.

Firstly, to summarize the problems:

  1. If you keep your consumer thread busy-spinning in a for-loop or similar, that's a terrible waste of CPU power.
  2. If you use the sleep() function with a fixed number of milliseconds, it is either a waste of CPU, too (if the time amount is too low), or you delay the process unnecessarily (if it's too high). There is no way to set the time amount just right.

What you need to do instead is to use a type of sleep that wakes up just at the right moment, i.e. whenever a new task has been appended to the queue.

I'll explain how to do this using POSIX. I realize that's not ideal when you are on Windows, but, to benefit from it, you can either use POSIX libraries for Windows or use corresponding functions available in your environment.

Step 1: You need one mutex and one signal:

#include <pthread.h>
pthread_mutex_t *mutex  = new pthread_mutex_t;
pthread_cond_t  *signal = new pthread_cond_t;

/* Initialize the mutex and the signal as below.
   Both functions return an error code. If that
   is not zero, you need to react to it. I will
   skip the details of this. */
pthread_mutex_init(mutex,0);
pthread_cond_init(signal,0);

Step 2: Now inside the consumer thread, wait for the signal to be sent. The idea is that the producer sends the signal whenever it has appended a new task to the queue:

/* Lock the mutex. Again, this might return an error code. */
pthread_mutex_lock(mutex);

/* Wait while the queue is empty. pthread_cond_wait unlocks the mutex and
   'immediately' falls asleep, so this is what replaces the busy spinning,
   or the fixed-time sleep. The surrounding while-loop is important: it
   re-checks the condition after waking up, which protects against
   spurious wakeups and against signals that were sent before we started
   waiting. (queue_is_empty() stands for however you test your own queue;
   call it only while the mutex is held.) */
while (queue_is_empty())
    pthread_cond_wait(signal, mutex);

/* The program reaches this point only when there is work in the queue.
   pthread_cond_wait re-acquires the mutex before returning, so we still
   hold it here. */

/* ... pick one task off the queue while the mutex is still held ... */

/* Unlock the mutex, so another thread (consumer or producer alike) can
   access the queue and the signal if needed. */
pthread_mutex_unlock(mutex);

/* Next, deal with the task that was just taken off the queue. */

Step 2 above should essentially be placed inside an infinite loop. Make sure there is a way for the process to break out of the loop. For example -- although slightly crude -- you can append a 'special' task to the queue that means 'break out of the loop'.

Step 3: Enable the producer thread to send a signal whenever it has appended a task to the queue:

/* We are now in the producer thread. The queue and the signal are guarded
   by THE SAME mutex object as used in the consumer thread, so lock it
   before touching the queue. */
pthread_mutex_lock(mutex);

/* ... append the new task to the queue while the mutex is held ... */

/* Then send the signal. The argument must also refer to THE SAME
   signal object as is used by the consumer. */
pthread_cond_signal(signal);

/* Unlock the mutex so other threads (producers or consumers alike) can
   make use of the queue and the signal. */
pthread_mutex_unlock(mutex);

Step 4: When everything is finished and you shut down your threads, you must destroy the mutex and the signal:

pthread_mutex_destroy(mutex);
pthread_cond_destroy(signal);
delete mutex;
delete signal;

Finally let me re-iterate one thing the others have said already: you must not use an ordinary std::queue or std::deque for concurrent access without protection. One way of solving this is to guard every access to it with a mutex -- most naturally the very same mutex used around the condition wait above, so that the emptiness check, the wait, and the push/pop all stay consistent with one another.

Edit: A few more words about the producer thread, in light of the comments. As far as I understand it, the producer thread is currently free to add as many tasks to the queue as it can. So I suppose it will keep doing that and keep the CPU busy to the extent that it isn't delayed by IO and memory access. Firstly, I don't think of the high CPU usage resulting from this as a problem, but rather as a benefit. However, one serious concern is that the queue will grow indefinitely, potentially causing the process to run out of memory space. Hence a useful precaution to take would be to limit the size of the queue to a reasonable maximum, and have the producer thread pause whenever the queue grows too long.

To implement this, the producer thread would check the length of the queue before adding a new item. If it is full, it would put itself to sleep, waiting for a signal to be sent by a consumer when taking a task off the queue. For this you could use a secondary signal mechanism, analogous to the one described above.
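A rough sketch of that secondary mechanism, following the conventions of the code above (MAX_QUEUE_SIZE and queue_size() are placeholder names for your own bound and size check, and 'space' is a second pthread_cond_t initialized just like 'signal'):

/* Producer side: wait until there is room in the queue. */
pthread_mutex_lock(mutex);

while (queue_size() >= MAX_QUEUE_SIZE)
    pthread_cond_wait(space, mutex);

/* ... append the new task while the mutex is held ... */

pthread_cond_signal(signal);   /* wake the consumer: work is available */
pthread_mutex_unlock(mutex);

/* Consumer side: right after popping a task (still holding the mutex),
   notify a possibly waiting producer that the queue has shrunk. */
pthread_cond_signal(space);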

We Are One
#5 · 2019-01-22 04:32

As people have said, the right way to synchronize the hand-off between the producer and consumer threads would be to use a condition variable. When the producer wants to add an element to the queue, it locks the mutex associated with the condition variable, adds the element, and notifies waiters on the condition variable. The consumer waits on the same condition variable and, when notified, consumes elements from the queue, then goes back to waiting. I'd personally recommend using boost::interprocess for these, but it can be done in a reasonably straightforward way using other APIs too.

Also, one thing to keep in mind is that while conceptually each thread is operating on one end of the queue only, most libraries implement an O(1) count() method, which means they have a member variable to track the number of elements, and this is an opportunity for rare and difficult-to-diagnose concurrency issues.

If you're looking for a way to reduce the cpu usage of the consumer thread (yes, I know this is your real question)... well, it sounds like it's actually doing what it's supposed to now, but the data processing is expensive. If you can analyze what it's doing, there may be opportunities for optimization.

If you want to throttle the producer thread intelligently... it's a little more work, but you could have the producer thread add items to the queue until it reaches a certain threshold (say 10 elements), then wait on a different condition variable. When the consumer consumes enough data that it causes the number of queued elements to go below a threshold (say 5 elements), then it notifies this second condition variable. If all parts of the system can move the data around quickly, then this could still consume a lot of CPU, but it would be spread relatively evenly amongst them. It's at this point that the OS should be responsible for letting other unrelated processes get their fair(ish) share of the CPU.
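By way of illustration, here is a compact sketch of that threshold scheme using standard C++11 primitives (the class name and the high/low bounds are arbitrary; on VS2010 the same structure can be built on Boost.Thread or Win32 condition variables):

#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>

// Bounded queue: the producer blocks once 'high' items are queued and is
// woken again when the consumer has drained the queue down to 'low'.
template <typename T>
class BoundedQueue {
public:
    BoundedQueue(std::size_t high, std::size_t low) : high_(high), low_(low) {}

    void push(const T& item) {
        std::unique_lock<std::mutex> lock(mutex_);
        not_full_.wait(lock, [this] { return queue_.size() < high_; });
        queue_.push(item);
        not_empty_.notify_one();
    }

    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        not_empty_.wait(lock, [this] { return !queue_.empty(); });
        T item = queue_.front();
        queue_.pop();
        if (queue_.size() <= low_)
            not_full_.notify_one();   // let the producer resume
        return item;
    }

private:
    const std::size_t high_, low_;
    std::mutex mutex_;
    std::queue<T> queue_;
    std::condition_variable not_empty_, not_full_;
};

The producer simply calls push() for every pattern it generates and the consumer calls pop(); whenever either of them has to pause, it sleeps inside the condition-variable wait and costs no CPU.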

闹够了就滚
#6 · 2019-01-22 04:37

Threads consume resources such as memory. Blocking and unblocking a thread incurs a one-off cost each time. If a thread blocks and unblocks tens of thousands of times per second, this can waste significant amounts of CPU.

However, once a thread is blocked it doesn't matter how long it stays blocked; there is no ongoing cost. The popular way to find performance problems is to use a profiler.

However, I do this a lot, and my method is this: http://www.wikihow.com/Optimize-Your-Program%27s-Performance

淡お忘
#7 · 2019-01-22 04:42

Edit: Ok, since you are using a busy spin to block on the queue, this is most likely the cause of the high CPU usage. The OS is under the impression that your threads are doing useful work when they are actually not, so they get full CPU time. There was an interesting discussion here: Which one is better for performance to check another threads boolean in java

I advise you to either switch to events or some other blocking mechanism, or use a synchronized queue instead, and see how it goes.

Also, that reasoning about the queue being thread-safe "because only two threads are using it" is very dangerous.

Assuming the queue is implemented as a linked list, imagine what can happen if it has only one or two elements remaining. Since you have no way of controlling the relative speeds of the producer and the consumer, this may well be the case and so you're in big trouble.
