It's not clear to me yet: is it faster to read things from a file or from memcached? Why?
Cache Type | Cache Gets/sec
Array Cache | 365000
APC Cache | 98000
File Cache | 27000
Memcached Cache (TCP/IP) | 12200
MySQL Query Cache (TCP/IP) | 9900
MySQL Query Cache (Unix Socket) | 13500
Selecting from table (TCP/IP) | 5100
Selecting from table (Unix Socket) | 7400
Source:
https://surniaulula.com/os/unix/memcached-vs-disk-cache/
Source of my source :)
https://www.percona.com/blog/2006/08/09/cache-performance-comparison/
It depends on whether the cache is stored locally. Memcached can store data across a network, which isn't necessarily faster than a local disk.
In fact, it is not as simple as "reading from memory is much faster than reading from HDD." As you know, Memcached is based on TCP connections; if you open a new connection every time you want to get or set something on the memcached server (which is what most programmers do), it will probably perform worse than a file cache. You should create a single Memcached client object and reuse it. Secondly, modern OSes cache frequently used files in memory, which can make a file cache faster than memcached calls that actually go over TCP.
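The OS page-cache effect described above is easy to observe: read the same file twice, and the second read is typically served from memory rather than the disk. A minimal Python sketch (the file size and the temp path are arbitrary choices for illustration; the measured timings will vary by machine):

```python
import os
import tempfile
import time

# Write a ~4 MB file, then read it twice. The second read is usually
# served from the OS page cache, which is why a "file cache" can rival
# memcached once the file is hot.
payload = b"x" * (4 * 1024 * 1024)
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(payload)

def timed_read(p):
    t0 = time.perf_counter()
    with open(p, "rb") as f:
        data = f.read()
    return data, time.perf_counter() - t0

cold_data, cold_t = timed_read(path)   # may touch the disk
warm_data, warm_t = timed_read(path)   # usually hits the page cache
os.remove(path)
print(len(warm_data))  # -> 4194304
```

Compare `cold_t` and `warm_t` on your own machine; the point is that "reading from a file" often means reading from RAM anyway.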
"Faster" can not be used without context. For example, accessing data in memcached on remote server can be "slower" due to network latency. In the other hand, reading data from remote server memory via 10Gb network can be "faster" than reading same data from local disk.
The main difference between caching on the filesystem and using memcached is that memcached is a complete caching solution: there are LRU lists, an expiration concept (data freshness), and high-level operations like cas/inc/dec/append/prepend/replace.
Memcached is also easy to deploy and monitor (how would we distinguish "cache" workload on the filesystem from, let's say, the kernel's? Can we calculate the total amount of cached data? Data distribution? Capacity planning? And so on).
There are also some hybrid systems, like cachelot. Basically, it's memcached that can be embedded right into the application, so the cache is accessible without any syscalls or network IO.
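To make the "complete caching solution" point concrete, here is a toy sketch (not memcached's actual implementation) of two things a plain file cache doesn't give you for free: LRU eviction and per-entry expiration. The class name and capacity are invented for illustration:

```python
import time
from collections import OrderedDict

class TinyLRU:
    """Toy cache with LRU eviction and TTL expiry, like memcached provides."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # key -> (value, expires_at)

    def set(self, key, value, ttl=60):
        self.items[key] = (value, time.monotonic() + ttl)
        self.items.move_to_end(key)
        while len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

    def get(self, key):
        entry = self.items.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:  # stale: treat as a miss
            del self.items[key]
            return None
        self.items.move_to_end(key)  # mark as recently used
        return value

cache = TinyLRU(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")         # "a" becomes the most recently used
cache.set("c", 3)      # evicts "b", the least recently used
print(cache.get("b"))  # -> None
print(cache.get("a"))  # -> 1
```

With a directory full of cache files, you would have to build eviction, freshness, and atomic operations like cas/incr yourself.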
Memcached is faster, but memory is limited. An HDD is huge, but its I/O is slow compared to memory. You should put the hottest things into memcached, and everything else can go into cache files.
(Or man up and invest some money into more memory like these guys :)
For some benchmarks see: Cache Performance Comparison (File, Memcached, Query Cache, APC)
In theory:
http://www.cs.cornell.edu/projects/ladis2009/talks/dean-keynote-ladis2009.pdf
You're being awfully vague on the details, and I believe the answer you're looking for depends on the situation. To my knowledge, very few things are better than the alternative all the time.
Obviously it wouldn't be faster to read things off the file system (assuming it's a hard drive). Even an SSD will be noticeably slower than in-memory reads. The reason is that HDDs and file systems are built for capacity, not speed, while DDR memory is particularly fast for exactly this reason.
Good caching means to keep frequently accessed parts in memory and the not so frequently accessed things on disk (persistent storage). That way the normal case would be vastly improved by your caching implementation. That's your goal. Make sure you have a good understanding of your ideal caching policy. That will require extensive benchmarking and testing.
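The "hot data in memory, everything else on disk" policy can be sketched in a few lines. In a real setup the fast tier would be memcached; here a plain dict stands in so the example is self-contained, and the class name and capacity are invented for illustration:

```python
import os
import pickle
import tempfile

class TwoTierCache:
    """Toy two-tier cache: hot entries in memory, everything on disk."""

    def __init__(self, directory, hot_capacity=128):
        self.hot = {}                  # fast tier (stand-in for memcached)
        self.dir = directory           # slow tier: one pickle file per key
        self.hot_capacity = hot_capacity

    def _path(self, key):
        return os.path.join(self.dir, f"{key}.cache")

    def set(self, key, value):
        if len(self.hot) < self.hot_capacity:
            self.hot[key] = value
        with open(self._path(key), "wb") as f:
            f.write(pickle.dumps(value))  # always persisted to disk

    def get(self, key):
        if key in self.hot:            # hot hit: no I/O at all
            return self.hot[key]
        try:
            with open(self._path(key), "rb") as f:
                value = pickle.loads(f.read())
        except FileNotFoundError:
            return None                # true miss
        self.hot[key] = value          # promote on access (a real tier
        return value                   # would also evict something)

tmp = tempfile.mkdtemp()
cache = TwoTierCache(tmp, hot_capacity=1)
cache.set("user:1", {"name": "alice"})
cache.set("user:2", {"name": "bob"})   # hot tier full, goes to disk only
print(cache.get("user:2")["name"])     # -> bob (read from disk, promoted)
```

Benchmarking which keys deserve the hot tier, and how big that tier should be, is exactly the testing work described above.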