Limiting certain processes to CPU % - Linux

Published 2019-01-21 09:17

I have the following problem: some processes, generated dynamically, have a tendency to eat 100% of CPU. I would like to limit all processes matching some criterion (e.g. process name) to a certain CPU percentage.

The specific problem I'm trying to solve is reining in folding@home worker processes. The best solution I could think of is a Perl script that runs periodically and uses the cpulimit utility to limit the processes (if you're interested in more details, check this blog post). It works, but it's a hack :/
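In essence, the cron job boils down to something like this (a rough sketch only; the FahCore process name and the 25% cap are placeholders, not the actual values from my script):

#!/bin/bash
# Periodically attach cpulimit to every matching worker (illustrative sketch).
for pid in $(pgrep -f FahCore); do              # FahCore is a placeholder name
    # skip workers that already have a cpulimit attached to them
    pgrep -f "cpulimit -p $pid" > /dev/null && continue
    cpulimit -p "$pid" -l 25 &                  # cap this worker at ~25% of one core
done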

Any ideas? I would like to leave the handling of processes to the OS :)


Thanks again for the suggestions, but we're still missing the point :)

The "slowDown" solution is essentially what the "cpulimit" utility does. I still have to take care about what processes to slow down, kill the "slowDown" process once the worker process is finished and start new ones for new worker processes. It's precisely what I did with the Perl script and a cron job.

The main problem is that I don't know beforehand what processes to limit. They are generated dynamically.

Maybe there's a way to limit all of one user's processes to a certain CPU percentage? I already set up a user for executing the folding@home jobs, hoping that I could limit it with the /etc/security/limits.conf file. But the nearest I could get there is a cap on total CPU time per user...
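For reference, the closest limits.conf gets is a cap on accumulated CPU time in minutes, not a percentage; a line like the following (the user name fah is just a placeholder) is what I mean:

# /etc/security/limits.conf -- illustrative only; "fah" is a placeholder user
# the "cpu" item is total CPU time in minutes, not a percentage cap
fah    hard    cpu    600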

It would be cool to have something that lets you say: "The sum of the CPU % usage of all of this user's processes cannot exceed 50%." And then let the processes fight over that 50% of CPU according to their priorities...


Guys, thanks for your suggestions, but it's not about priorities - I want to limit the CPU % even when there's plenty of CPU time available. The processes are already low priority, so they don't cause any performance issues.

I would just like to prevent the CPU from running on 100% for extended periods...

Tags: linux limit cpu
15 answers
一纸荒年 Trace。
Reply 2 · 2019-01-21 10:10

This can be done using setrlimit(2) (specifically by setting the RLIMIT_CPU resource).
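From a shell, the same limit can be set with ulimit -t, which is the front-end to RLIMIT_CPU (a minimal sketch; ./worker and the 300-second budget are placeholders). Note that this caps the total CPU seconds consumed, after which the process receives SIGXCPU, rather than a percentage:

# Run a worker under an RLIMIT_CPU of 300 CPU-seconds (placeholder values).
# The subshell keeps the limit from affecting the interactive shell.
(
    ulimit -t 300      # sets RLIMIT_CPU; the kernel sends SIGXCPU once it is used up
    exec ./worker      # ./worker is a hypothetical command
)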

【Aperson】
Reply 3 · 2019-01-21 10:12

I also needed to limit CPU time for certain processes. cpulimit is a good tool, but I always had to find the PID and start cpulimit manually, so I wanted something more convenient.
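By hand that meant something along these lines for every single process (a rough one-off sketch; gnome-terminal and the 5% limit are just the values used in the example below):

pid=$(pgrep -n gnome-terminal)      # -n picks the newest matching process
cpulimit -p "$pid" -l 5 &           # cap it at 5% and return to the shell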

I came up with this bash script:

#!/bin/bash

# Limit every process whose name matches $1 to $2 percent CPU.
function lp
{
    # Extract the PIDs of all matching processes (COL1 of "ps aux" is the PID);
    # "grep -v grep" keeps the grep process itself out of the result.
    ps aux | grep "$1" | grep -v grep | termsql "select COL1 from tbl" > /tmp/tmpfile

    while read -r pid
    do
        # "wc -l" counts the grep itself plus any running cpulimit instance,
        # so a count of 2 means cpulimit is already attached to this PID.
        TEST=$(ps aux | grep "cpulimit -p $pid" | wc -l)
        [[ $TEST -eq 2 ]] && continue
        cpulimit -p "$pid" -l "$2" &   # run in the background so the loop keeps going
    done < /tmp/tmpfile
}

while true
do
    lp gnome-terminal 5
    lp system-journal 5
    sleep 10
done

This example limits the CPU usage of each gnome-terminal instance and each system-journal instance to 5%.

In this example I used another script of mine, termsql, to extract the PID. You can get it here: https://gitorious.org/termsql/termsql/source/master:

贼婆χ
Reply 4 · 2019-01-21 10:12

I see at least two options (a quick sketch of each follows below):

  • Use "ulimit -t" in the shell that creates your process
  • Use "nice" at process creation or "renice" during runtime