Performance of Redis vs Disk in a caching application

Posted 2019-08-01 10:26

I wanted to create a redis cache in Python, and as any self-respecting scientist I made a benchmark to test the performance.

Interestingly, redis didn't fare so well. Either Python is doing something magic (storing the file) or my version of redis is stupendously slow.

I don't know if this is because of the way my code is structured, or what, but I was expecting redis to do better than it did.

To make the redis cache, I set my binary data (in this case, an HTML page) to a key derived from the filename, with an expiry of 5 minutes.

In all cases, file handling is done with f.read() (this is ~3x faster than f.readlines(), and I need the binary blob).
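For reference, the two calls differ in shape as well as speed (a minimal sketch with a throwaway file; the ~3x figure is my measurement above and is not reproduced here):

```python
import os
import tempfile

# read() gives one bytes object -- the blob we cache in redis.
# readlines() gives a list of byte strings, one per line.
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".html")
tmp.write(b"<html>\n<body>hello</body>\n</html>\n")
tmp.close()

with open(tmp.name, "rb") as f:
    blob = f.read()        # single bytes blob
with open(tmp.name, "rb") as f:
    lines = f.readlines()  # list of lines

os.unlink(tmp.name)
```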

Is there something I'm missing in my comparison, or is redis really no match for a disk? Is Python caching the file somewhere and reaccessing it every time? Why is that so much faster than access to redis?

I'm using redis 2.8, Python 2.7 and redis-py, all on a 64-bit Ubuntu system.

I don't think Python is doing anything particularly magical, as I made a function that stored the file data in a Python object and yielded it forever.

I have four function calls that I group:

  • Reading the file X times
  • A function that is called to see if the redis object is still in memory, loads it, or caches a new file (single and multiple redis instances).
  • A function that creates a generator that yields the result from the redis database (single and multiple redis instances).
  • And finally, storing the file in memory and yielding it forever.

import redis
import time

def load_file(fp, fpKey, r, expiry):
    with open(fp, "rb") as f:
        data = f.read()
    p = r.pipeline()
    p.set(fpKey, data)
    p.expire(fpKey, expiry)
    p.execute()
    return data

def cache_or_get_gen(fp, expiry=300, r=redis.Redis(db=5)):
    fpKey = "cached:"+fp

    while True:
        yield load_file(fp, fpKey, r, expiry)
        t = time.time()
        while time.time() - t - expiry < 0:
            yield r.get(fpKey)


def cache_or_get(fp, expiry=300, r=redis.Redis(db=5)):

    fpKey = "cached:"+fp

    if r.exists(fpKey):
        return r.get(fpKey)

    else:
        with open(fp, "rb") as f:
            data = f.read()
        p = r.pipeline()
        p.set(fpKey, data)
        p.expire(fpKey, expiry)
        p.execute()
        return data

def mem_cache(fp):
    with open(fp, "rb") as f:
        data = f.readlines()
    while True:
        yield data

def stressTest(fp, trials = 10000):

    # Read the file x number of times
    a = time.time()
    for x in range(trials):
        with open(fp, "rb") as f:
            data = f.read()
    b = time.time()
    readAvg = trials/(b-a)


    # Generator version

    # Read the file, cache it, read it with a new instance each time
    a = time.time()
    gen = cache_or_get_gen(fp)
    for x in range(trials):
        data = next(gen)
    b = time.time()
    cachedAvgGen = trials/(b-a)

    # Read file, cache it, pass in redis instance each time
    a = time.time()
    r = redis.Redis(db=6)
    gen = cache_or_get_gen(fp, r=r)
    for x in range(trials):
        data = next(gen)
    b = time.time()
    inCachedAvgGen = trials/(b-a)


    # Non generator version    

    # Read the file, cache it, read it with a new instance each time
    a = time.time()
    for x in range(trials):
        data = cache_or_get(fp)
    b = time.time()
    cachedAvg = trials/(b-a)

    # Read file, cache it, pass in redis instance each time
    a = time.time()
    r = redis.Redis(db=6)
    for x in range(trials):
        data = cache_or_get(fp, r=r)
    b = time.time()
    inCachedAvg = trials/(b-a)

    # Read file, cache it in python object
    a = time.time()
    for x in range(trials):
        data = mem_cache(fp)
    b = time.time()
    memCachedAvg = trials/(b-a)


    print "\n%s file reads: %.2f reads/second\n" %(trials, readAvg)
    print "Yielding from generators for data:"
    print "multi redis instance: %.2f reads/second (%.2f percent)" %(cachedAvgGen, (100*(cachedAvgGen-readAvg)/(readAvg)))
    print "single redis instance: %.2f reads/second (%.2f percent)" %(inCachedAvgGen, (100*(inCachedAvgGen-readAvg)/(readAvg)))
    print "Function calls to get data:"
    print "multi redis instance: %.2f reads/second (%.2f percent)" %(cachedAvg, (100*(cachedAvg-readAvg)/(readAvg)))
    print "single redis instance: %.2f reads/second (%.2f percent)" %(inCachedAvg, (100*(inCachedAvg-readAvg)/(readAvg)))
    print "python cached object: %.2f reads/second (%.2f percent)" %(memCachedAvg, (100*(memCachedAvg-readAvg)/(readAvg)))

if __name__ == "__main__":
    fileToRead = "templates/index.html"

    stressTest(fileToRead)

And now the results:

10000 file reads: 30971.94 reads/second

Yielding from generators for data:
multi redis instance: 8489.28 reads/second (-72.59 percent)
single redis instance: 8801.73 reads/second (-71.58 percent)
Function calls to get data:
multi redis instance: 5396.81 reads/second (-82.58 percent)
single redis instance: 5419.19 reads/second (-82.50 percent)
python cached object: 1522765.03 reads/second (4816.60 percent)

The results are interesting in that a) generators are faster than calling a function each time, b) redis is slower than reading from disk, and c) reading from Python objects is ridiculously fast.

Why would reading from disk be so much faster than reading the in-memory file from redis?

EDIT: Some more information and testing.

I replaced the check in the function with

data = r.get(fpKey)
if data:
    return data

The results do not differ much from

if r.exists(fpKey):
    data = r.get(fpKey)

Function calls to get data using r.exists as test
multi redis instance: 5320.51 reads/second (-82.34 percent)
single redis instance: 5308.33 reads/second (-82.38 percent)
python cached object: 1494123.68 reads/second (5348.17 percent)


Function calls to get data using if data as test
multi redis instance: 8540.91 reads/second (-71.25 percent)
single redis instance: 7888.24 reads/second (-73.45 percent)
python cached object: 1520226.17 reads/second (5132.01 percent)

Creating a new redis instance with each function call had no noticeable effect on read speed; the variability from test to test was larger than the gain.
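One likely reason (an observation about the benchmark code, not from the original post): Python evaluates default argument expressions once, at function definition time. So r=redis.Redis(db=5) in cache_or_get above builds a single client object that every call then shares, meaning the "new instance per call" variant was probably reusing one instance all along. A minimal demonstration with a hypothetical counting factory, no Redis involved:

```python
# Default arguments are evaluated exactly once, when the def runs,
# not on every call. make_client stands in for redis.Redis here.
def make_client():
    make_client.calls += 1
    return object()
make_client.calls = 0

def cache_or_get_demo(r=make_client()):  # default built exactly once
    return r

a = cache_or_get_demo()
b = cache_or_get_demo()
```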

Sripathi Krishnan suggested implementing random file reads. This is where caching starts to really help, as we can see from these results.

Total number of files: 700

10000 file reads: 274.28 reads/second

Yielding from generators for data:
multi redis instance: 15393.30 reads/second (5512.32 percent)
single redis instance: 13228.62 reads/second (4723.09 percent)
Function calls to get data:
multi redis instance: 11213.54 reads/second (3988.40 percent)
single redis instance: 14420.15 reads/second (5157.52 percent)
python cached object: 607649.98 reads/second (221446.26 percent)

There is a huge amount of variability in file reads, so the percent difference is not a good indicator of speedup.

Total number of files: 700

40000 file reads: 1168.23 reads/second

Yielding from generators for data:
multi redis instance: 14900.80 reads/second (1175.50 percent)
single redis instance: 14318.28 reads/second (1125.64 percent)
Function calls to get data:
multi redis instance: 13563.36 reads/second (1061.02 percent)
single redis instance: 13486.05 reads/second (1054.40 percent)
python cached object: 587785.35 reads/second (50214.25 percent)

I used random.choice(fileList) to randomly select a new file on each pass through the functions.
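The randomized access pattern looks roughly like this (a sketch, not the gist itself: a plain dict stands in for the Redis-backed cache, and 5 throwaway files replace the 700 used in the real test):

```python
import os
import random
import tempfile

# Build a small fileList of throwaway HTML files.
tmpdir = tempfile.mkdtemp()
fileList = []
for i in range(5):
    path = os.path.join(tmpdir, "page%d.html" % i)
    with open(path, "wb") as f:
        f.write(("<html>page %d</html>" % i).encode("ascii"))
    fileList.append(path)

cache = {}
for _ in range(100):
    fp = random.choice(fileList)   # a different file each pass
    if fp not in cache:
        with open(fp, "rb") as f:  # cold: hit the filesystem
            cache[fp] = f.read()
    data = cache[fp]               # warm: served from the cache
```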

The full gist is here if anyone would like to try it out - https://gist.github.com/3885957

Edit edit: I hadn't realized I was calling a single file for the generators (although the performance of the function call and the generator was very similar). Here are the results for different files from the generator as well.

Total number of files: 700
10000 file reads: 284.48 reads/second

Yielding from generators for data:
single redis instance: 11627.56 reads/second (3987.36 percent)

Function calls to get data:
single redis instance: 14615.83 reads/second (5037.81 percent)

python cached object: 580285.56 reads/second (203884.21 percent)

Answer 1:

This is an apples to oranges comparison. See http://redis.io/topics/benchmarks

Redis is an efficient remote data store. Each time a command is executed on Redis, a message is sent to the Redis server, and if the client is synchronous, it blocks waiting for the reply. So beyond the cost of the command itself, you will pay for a network roundtrip or an IPC.

On modern hardware, network roundtrips or IPCs are surprisingly expensive compared to other operations. This is due to several factors:

  • the raw latency of the medium (mainly for network)
  • the latency of the operating system scheduler (not guaranteed on Linux/Unix)
  • the cost of memory cache misses, whose probability increases while the client and server processes are scheduled in and out
  • on high-end boxes, NUMA side effects
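A rough way to feel this cost without Redis at all is to time synchronous one-byte request/reply exchanges over a local socket pair (a sketch, not part of the original benchmark): every iteration pays the IPC latency described above even though no real work is done.

```python
import socket
import threading
import time

def echo(conn, n):
    # Reply to n one-byte requests, then stop.
    for _ in range(n):
        conn.sendall(conn.recv(1))

client, server = socket.socketpair()
n = 2000
worker = threading.Thread(target=echo, args=(server, n))
worker.start()

start = time.time()
for _ in range(n):
    client.sendall(b"x")  # send the "command" ...
    client.recv(1)        # ... and block waiting for the reply
elapsed = time.time() - start
worker.join()
print("%d roundtrips in %.3fs (%.0f/s)" % (n, elapsed, n / elapsed))
```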

Now, let's review the results.

Comparing the implementation using generators and the one using function calls: they do not generate the same number of roundtrips to Redis. With the generator, each iteration simply executes:

    while time.time() - t - expiry < 0:
        yield r.get(fpKey)

so just 1 roundtrip per iteration. With the function call, you have:

if r.exists(fpKey):
    return r.get(fpKey)

so 2 roundtrips per iteration. No wonder the generator is faster.
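The two roundtrips can be collapsed into one: GET returns None for a missing key, so the separate EXISTS check is unnecessary. A sketch of a one-roundtrip cache_or_get; FakeRedis here is a hypothetical in-memory stand-in so the example runs without a server, and with redis-py the calls look the same (note that setex argument order differs across redis-py versions):

```python
import os
import tempfile

class FakeRedis(object):
    """Hypothetical stand-in for redis.Redis, enough for this sketch."""
    def __init__(self):
        self.store = {}
    def get(self, key):
        return self.store.get(key)       # None when the key is missing
    def setex(self, key, ttl, value):
        self.store[key] = value          # TTL ignored in the stand-in

def cache_or_get_one_trip(fp, r, expiry=300):
    fpKey = "cached:" + fp
    data = r.get(fpKey)                  # the only roundtrip on a hit
    if data is not None:
        return data
    with open(fp, "rb") as f:            # miss: read and cache the file
        data = f.read()
    r.setex(fpKey, expiry, data)         # SET + EXPIRE as one command
    return data

# Exercise it against a throwaway file.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"<html>hello</html>")
tmp.close()
r = FakeRedis()
first = cache_or_get_one_trip(tmp.name, r)   # miss: reads disk
second = cache_or_get_one_trip(tmp.name, r)  # hit: served from cache
os.unlink(tmp.name)
```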

Of course, you are supposed to reuse the same Redis connection for optimal performance. There is no point in running a benchmark that systematically connects and disconnects.

Finally, regarding the performance difference between Redis calls and file reads: you are simply comparing a local call to a remote one. File reads are cached by the OS filesystem cache, so they are fast memory transfer operations between the kernel and Python. There is no disk I/O involved here. With Redis, you have to pay the cost of the roundtrips, which is why it is slower.



Source: Performance of Redis vs Disk in caching application