I would like to know how you can measure disk speed using the Java API:
random reads, sequential reads, and random and sequential writes.
If anyone thinks this is not a real question, please explain why before closing it.
Thanks
You can take a look at a disk utility I wrote in Java. It may not be super fancy, but it works.
https://sourceforge.net/projects/jdiskmark/
Here is a snippet of the write measurement code:
long startTime = System.nanoTime();
try (RandomAccessFile rAccFile = new RandomAccessFile(testFile, mode)) {
    for (int b = 0; b < numOfBlocks; b++) {
        if (App.randomEnable) {
            // random write: seek to a randomly chosen block
            int rLoc = Util.randInt(0, numOfBlocks - 1);
            rAccFile.seek((long) rLoc * blockSize);
        } else {
            // sequential write: seek to the next block in order
            rAccFile.seek((long) b * blockSize);
        }
        rAccFile.write(blockArr, 0, blockSize);
        totalBytesWrittenInMark += blockSize;
        wUnitsComplete++;
        unitsComplete = rUnitsComplete + wUnitsComplete;
        percentComplete = (float) unitsComplete / (float) unitsTotal * 100f;
    }
}
long endTime = System.nanoTime();
long elapsedTimeNs = endTime - startTime;
double sec = (double) elapsedTimeNs / (double) 1000000000;
double mbWritten = (double) totalBytesWrittenInMark / (double) MEGABYTE;
double bwMbSec = mbWritten / sec;
System.out.println("Write IO is " + bwMbSec + " MB/s"
        + " (MB written " + mbWritten + " in " + sec + " sec)");
The code is now on GitLab: https://gitlab.com/jamesmarkchan/jDiskMark/
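The snippet above only covers writes. For completeness, a minimal read-measurement sketch in the same style could look like the following; the file name, block size, and block count are placeholder assumptions rather than values taken from jDiskMark, and the test file must already be at least numOfBlocks * blockSize bytes long:

import java.io.RandomAccessFile;
import java.util.concurrent.ThreadLocalRandom;

public class ReadSpeedSketch {
    static final long MEGABYTE = 1024 * 1024;

    public static void main(String[] args) throws Exception {
        String testFile = "testdata.jdm";   // assumed pre-created test file
        int blockSize = 512 * 1024;         // 512 KB blocks (assumption)
        int numOfBlocks = 200;              // ~100 MB total (assumption)
        boolean randomEnable = true;        // toggle random vs. sequential reads
        byte[] blockArr = new byte[blockSize];

        long totalBytesRead = 0;
        long startTime = System.nanoTime();
        try (RandomAccessFile rAccFile = new RandomAccessFile(testFile, "r")) {
            for (int b = 0; b < numOfBlocks; b++) {
                long loc = randomEnable
                        ? ThreadLocalRandom.current().nextInt(numOfBlocks)
                        : b;
                rAccFile.seek(loc * (long) blockSize);   // long math avoids overflow
                rAccFile.readFully(blockArr, 0, blockSize);
                totalBytesRead += blockSize;
            }
        }
        long elapsedTimeNs = System.nanoTime() - startTime;

        double sec = elapsedTimeNs / 1_000_000_000.0;
        double mbRead = (double) totalBytesRead / MEGABYTE;
        System.out.println("Read IO is " + (mbRead / sec) + " MB/s"
                + " (MB read " + mbRead + " in " + sec + " sec)");
    }
}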
I would create a big file on the disk to make sure you always reserve the same space, and then proceed with the tests, i.e. reading/writing chunks of the file with RandomAccessFile while varying the size of those chunks. I would also vary the position of the chunks (randomly, or going back and forth between the start and end of the file). All of this lets you measure the average transfer rate for particular situations depending on the chunk size, etc.
Take into consideration, however, no matter whether you use Java or plain C, that you do not have access to the low-level file organization. You could be using a disk that is fragmented (the emptier the disk, the less likely it is to be heavily fragmented). That information is only available at the kernel level, not in user land. However, if you carry out a lot of tests (not a single read of one part of the file, but many reads spread across the whole file) and the disk is fairly empty (so the file's blocks are unlikely to be heavily fragmented), you can statistically get a quite reasonable indicator of the read/write speed.
Another important issue I had forgotten, which will impact your measurements, is caching. The first time you read a chunk of a file you are likely not hitting the cache, but the second time you are. For that I recommend reading the approaches in this other question.
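To make that concrete, here is a rough sketch of the approach described above: pre-allocate a big file, then time random reads of chunks of varying size. The file name, file size, chunk sizes, and iteration counts are arbitrary assumptions for illustration:

import java.io.RandomAccessFile;
import java.util.concurrent.ThreadLocalRandom;

public class ChunkedReadTest {
    public static void main(String[] args) throws Exception {
        String path = "bigfile.bin";           // assumed test file path
        long fileSize = 1024L * 1024 * 1024;   // 1 GB (assumption)

        // Reserve the same amount of space for every run.
        // Note: on some filesystems setLength creates a sparse file, so you may
        // want to write real data into it once before measuring reads.
        try (RandomAccessFile raf = new RandomAccessFile(path, "rw")) {
            raf.setLength(fileSize);
        }

        // Time random reads for several chunk sizes.
        int[] chunkSizes = {4 * 1024, 64 * 1024, 1024 * 1024};
        int readsPerSize = 256;
        for (int chunkSize : chunkSizes) {
            byte[] buf = new byte[chunkSize];
            long start = System.nanoTime();
            try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
                for (int i = 0; i < readsPerSize; i++) {
                    long pos = ThreadLocalRandom.current().nextLong(fileSize - chunkSize);
                    raf.seek(pos);
                    raf.readFully(buf);
                }
            }
            double sec = (System.nanoTime() - start) / 1_000_000_000.0;
            double mb = (double) chunkSize * readsPerSize / (1024 * 1024);
            System.out.printf("chunk %d bytes: %.2f MB/s%n", chunkSize, mb / sec);
        }
    }
}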
Have you thought about using Caliper?
From the site:
Caliper is Google's open-source framework for writing, running and viewing the results of Java microbenchmarks.
Version 0.5 is still a bit rough around the edges, but we have found it very useful, and the API should remain fairly stable.
The simplest complete Caliper benchmark looks like this:
public class MyBenchmark extends SimpleBenchmark {
    public void timeMyOperation(int reps) {
        for (int i = 0; i < reps; i++) {
            MyClass.myOperation();
        }
    }
}
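If you went this route, a disk benchmark could follow the same timeXxx pattern. Below is a rough sketch against that 0.5-era API; the test file name and block size are my own assumptions, and the file is expected to exist already and be larger than one block:

import com.google.caliper.SimpleBenchmark;
import java.io.IOException;
import java.io.RandomAccessFile;

public class DiskReadBenchmark extends SimpleBenchmark {
    private static final String TEST_FILE = "testdata.jdm"; // assumed pre-created file
    private static final int BLOCK_SIZE = 64 * 1024;        // 64 KB per read (assumption)
    private final byte[] buf = new byte[BLOCK_SIZE];

    // Caliper calls timeXxx(int reps) and reports the average time per rep.
    public void timeSequentialRead(int reps) {
        try (RandomAccessFile raf = new RandomAccessFile(TEST_FILE, "r")) {
            for (int i = 0; i < reps; i++) {
                if (raf.getFilePointer() + BLOCK_SIZE > raf.length()) {
                    raf.seek(0); // wrap around at the end of the file
                }
                raf.readFully(buf);
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}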