Question:
I'm working on a small function that gives my users a picture of how occupied the CPU is.
I'm using cat /proc/loadavg, which returns the well-known three numbers.
My problem is that the CPU doesn't do anything right now, while I'm developing.
Is there a good way to generate some load on the CPU? I was thinking of something like makecpudosomething 30, for a load of 0.3 or similar. Does an application like this exist?
Also, is there any way to eat up RAM in a controlled fashion?
Thanks
Michael
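(As an aside, the three numbers don't have to come from parsing cat output; a minimal Python sketch using only the standard library, Unix only:)

```python
import os

# Same three numbers that /proc/loadavg reports:
# the 1-, 5- and 15-minute load averages.
one, five, fifteen = os.getloadavg()
print("load averages: %.2f %.2f %.2f" % (one, five, fifteen))
```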
Answer 1:
I didn't understand very well if you want to generate arbitrary CPU load or CPU utilization. Yes, they are different things indeed. I'll try to cover both problems.
First of all: load is the average number of processes that are running or runnable (waiting in the CPU scheduler's queue) over a given period of time, "the ones that want your CPU" so to speak.
So, if you want to generate arbitrary load (say 0.3) you have to run a process for 30% of the time and then remove it from the run queue for 70% of the time, moving it to the sleeping queue or killing it, for example.
You can try this script to do that:
export LOAD=0.3
while true
do
    # burn one CPU for $LOAD seconds...
    yes > /dev/null &
    sleep $LOAD
    killall yes
    # ...then stay idle for the remaining 1 - $LOAD seconds
    sleep `echo "1 - $LOAD" | bc`
done
Note that you have to wait some time (1, 5 and 15 minutes respectively) for the numbers to come up, and they will be influenced by other processes on your system. The busier your system is, the more these numbers will fluctuate. The last number (the 15-minute interval) tends to be the most stable.
CPU usage, instead, is the amount of time for which the CPU was used to process a program's instructions.
So, if you want to generate arbitrary CPU usage (say 30%) you have to run a process that is CPU bound 30% of the time and sleeps 70% of it.
I wrote an example to show you that:
#include <stdlib.h>
#include <unistd.h>
#include <err.h>
#include <math.h>
#include <sys/time.h>
#include <sys/wait.h>

#define CPUUSAGE 0.3     /* set it to a float in (0, 1) */
#define PROCESSES 1      /* number of child worker processes */
#define CYCLETIME 50000  /* total cycle interval in microseconds */
#define WORKTIME (CYCLETIME * CPUUSAGE)
#define SLEEPTIME (CYCLETIME - WORKTIME)

/* returns t1 - t2 in microseconds */
static inline long timediff(const struct timeval *t1, const struct timeval *t2)
{
    return (t1->tv_sec - t2->tv_sec) * 1000000 + (t1->tv_usec - t2->tv_usec);
}

static inline void gettime(struct timeval *t)
{
    if (gettimeofday(t, NULL) < 0)
        err(1, "failed to acquire time");
}

int hogcpu(void)
{
    struct timeval tWorkStart, tWorkCur, tSleepStart, tSleepStop;
    long usSleep, usWork, usWorkDelay = 0, usSleepDelay = 0;

    do
    {
        /* burn the CPU for WORKTIME microseconds, compensating for any
         * overshoot measured in the previous cycle */
        usWork = WORKTIME - usWorkDelay;
        gettime(&tWorkStart);
        do
        {
            sqrt(rand());
            gettime(&tWorkCur);
        }
        while ((usWorkDelay = (timediff(&tWorkCur, &tWorkStart) - usWork)) < 0);

        /* sleep for the rest of the cycle, again compensating for any
         * oversleep measured in the previous cycle */
        if (usSleepDelay <= SLEEPTIME)
            usSleep = SLEEPTIME - usSleepDelay;
        else
            usSleep = 0;
        gettime(&tSleepStart);
        usleep(usSleep);
        gettime(&tSleepStop);
        usSleepDelay = timediff(&tSleepStop, &tSleepStart) - usSleep;
    }
    while (1);
    return 0;
}

int main(int argc, char const *argv[])
{
    pid_t pid;
    int i;

    for (i = 0; i < PROCESSES; i++)
    {
        switch (pid = fork())
        {
        case 0:
            _exit(hogcpu());
        case -1:
            err(1, "fork failed");
        default:
            warnx("worker [%d] forked", pid);
        }
    }
    for (i = 0; i < PROCESSES; i++)
        wait(NULL);
    return 0;
}
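The same duty-cycle idea can be sketched in a few lines of Python (the names and cycle length here are my own choices, not from the C program):

```python
import time

TARGET_USAGE = 0.3   # fraction of one core to keep busy
CYCLE = 0.05         # seconds per busy/sleep cycle

def hog_cpu(cycles):
    """Burn the CPU for TARGET_USAGE of each cycle, sleep for the rest."""
    for _ in range(cycles):
        busy_until = time.monotonic() + CYCLE * TARGET_USAGE
        while time.monotonic() < busy_until:
            pass                         # busy-wait: pure CPU work
        time.sleep(CYCLE * (1 - TARGET_USAGE))

hog_cpu(20)  # roughly one second at ~30% of one core
```

It won't be as precise as the compensating C version above, but it is usually close enough for testing a monitoring function.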
If you want to eat up a fixed amount of RAM you can use the program in cgkanchi's answer.
Answer 2:
while true;
do openssl speed;
done
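If you want to peg more than one core, here is a minimal Python sketch (function names are my own; each worker spins one core at 100% until terminated):

```python
import multiprocessing

def burn():
    # Pure busy loop: keeps one core at 100% until the process is killed.
    while True:
        pass

def load_cores(n):
    """Start n busy-loop worker processes; return them for later cleanup."""
    workers = [multiprocessing.Process(target=burn, daemon=True)
               for _ in range(n)]
    for w in workers:
        w.start()
    return workers

if __name__ == "__main__":
    workers = load_cores(2)
    # ... watch your load average climb here ...
    for w in workers:
        w.terminate()
```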
also the stress program will let you load the cpu/mem/disk to the levels you want to simulate:
- http://weather.ou.edu/~apw/projects/stress/
stress is a deliberately simple workload generator for POSIX systems. It imposes a configurable amount of CPU, memory, I/O, and disk stress on the system. It is written in C, and is free software licensed under the GPLv2.
to maintain a particular level of cpu utilization, say 30%, try cpulimit:
- http://cpulimit.sourceforge.net/
it will adapt to the current system environment and adjust for any other activity on the system.
there's also a patch to the kernel for native cpu rate limits here: http://lwn.net/Articles/185489/
Answer 3:
To eat up a fixed amount of RAM, you could just:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define UNIX 1
/* remove the above line if running under Windows */

#ifdef UNIX
#include <unistd.h>
#else
#include <windows.h>
#endif

int main(int argc, char** argv)
{
    unsigned long mem;
    char* ptr;

    if (argc == 1)
        mem = 1024*1024*512;  /* 512 MB */
    else if (argc == 2)
        mem = (unsigned long) atol(argv[1]);
    else
    {
        printf("Usage: loadmem <memory in bytes>\n");
        exit(1);
    }

    ptr = malloc(mem);
    if (ptr == NULL)
    {
        printf("malloc of %lu bytes failed\n", mem);
        exit(1);
    }

    while (1)
    {
        /* touch every byte so the pages are actually committed */
        memset(ptr, 0, mem);
#ifdef UNIX
        sleep(120);
#else
        Sleep(120*1000);
#endif
    }
}
The memset() call seems to be required, because at least on OS X, the actual memory doesn't seem to get used until it is actually initialised.
EDIT: Fixed in response to comment
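The same allocate-and-touch idea works as a minimal Python sketch (the function name is my own). bytearray(n) zero-fills its storage, which touches every page, so nothing extra is needed to commit the memory:

```python
def eat_ram(mbytes):
    """Allocate and touch mbytes megabytes; keep the returned buffer alive."""
    # bytearray(n) zero-fills the buffer, so the pages show up as
    # resident memory rather than just reserved address space.
    return bytearray(mbytes * 1024 * 1024)

buf = eat_ram(64)   # holds ~64 MB until buf goes out of scope
```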
Answer 4:
You can try http://code.google.com/p/stressapptest/
Answer 5:
Use "memtester" to do your memory regression tests in Linux.
Answer 6:
I took this program and modified the line:
mem = 1024*1024*512; //512 mb
to say this:
mem = 1*1024*1024*1024; //1 GB
and compiled it:
$ gcc iss_mem.c -o iss_mem
Then I wrote a bash wrapper around the compiled version of the C program above. It helps me generate a lot of memory load on my server.
#!/bin/bash
# Author: Mamadou Lamine Diatta
# Senior Principal Consultant / Infrastructure Architect
# Email: diatta at post dot harvard dot edu
# -------------------------------------------------------------------

memsize_kb=`grep -i MemTotal /proc/meminfo | awk '{print $2}'`
MemTotal=$(($memsize_kb*1024))

for i in `seq 1 50`
do
    echo "`date +"%F:%H:%M:%S"` ----------------- Running [ $i ] iteration(s)"
    MemToAlloc=$((1*1024*1024*1024))       # 1 GB of memory per iss_mem call
    THRESHOLD=$(($MemTotal/$MemToAlloc))   # We are not supposed to make the
                                           # system run out of memory
    rand=1000                              # High enough to force a new one
    while (( $rand > $THRESHOLD ))
    do
        rand=$(($RANDOM/1000))
    done
    if [ $rand -eq 0 ]
    then
        rand=1
    fi

    echo `date +"%F:%H:%M:%S"` Running $rand iss_mem in parallel ...
    for j in `seq 1 $rand`
    do
        ${ISSHOME}/bin/iss_mem > /dev/null &   # NOTE: gcc iss_mem.c -o iss_mem
    done
    sleep 180
    jobs -p
    kill `jobs -p`
    sleep 30
done
Answer 7:
Very simple actually: install the stress tool and do:
stress --vm X --vm-bytes YM
- replace X with the number of workers you want to spawn, each of which will malloc() your RAM
- replace Y with the amount of memory (in MB) that each worker has to allocate
Example:
stress --vm 2 --vm-bytes 128M
Answer 8:
Have you considered using prime95?
I'm not sure if you can limit it to a percentage like that, though...
Answer 9:
mark@localhost$ time pi 1048576 | egrep '.*total$'
This is a simple benchmarking command that will give your CPU a workout; post your times :D
Answer 10:
The simplest way I have found to load the RAM (and swap) is with Perl; save this as a script and run it with the size in MB as the first argument:
my $allocation = "A" x (1024 * 1024 * $ARGV[0]);
print "\nAllocated " . length($allocation) . "\n";
Answer 11:
Hope this application will be useful:
https://www.devin.com/lookbusy/
Installation and usage steps can be found in this GitHub project, which uses lookbusy:
https://github.com/beloglazov/cpu-load-generator
Snippet from the GitHub page:
To generate a sequence of 20%, 90%, and 50% CPU utilization for 20 seconds each on 2 cores, using the test.data file:
python cpu-load-generator.py -n 2 20 test.data
Answer 12:
You can use the stress utility, as it is a workload generator tool designed to subject your system to a configurable measure of CPU, memory, I/O and disk stress.
To run 1 vm stressor using 1GB of virtual memory for 60s, enter:
stress --vm 1 --vm-bytes 1G --vm-keep -t 60s
Answer 13:
To eat 1 GB of memory (Python 2; on Python 3, use input() instead of raw_input()):
python -c 'a="a"*1024**3;raw_input()'