I want to use ZODB with as little caching as possible. For this, I'm creating a ZODB database instance and opening it like this:

from ZODB import DB

db = DB('/home/me/example.db', cache_size=1, cache_size_bytes=1)
db_conn = db.open_then_close_db_when_connection_closes()

db_conn is the only connection of db. I'm verifying that both of its target cache size parameters are set by checking db_conn._cache.cache_size and db_conn._cache.cache_size_bytes, which both evaluate to 1.
In the database, I store a large number (potentially billions) of Persistent objects in one OOBTree. When I read them (in batches) from the database, my memory usage grows. Calling db_conn.cacheMinimize() after each batch read prevents memory usage from growing, but I want ZODB not to cache the objects in the first place (rather than me forcing it to evict cached objects from memory).
I am monitoring the database cache status right before and right after each cacheMinimize() call, using cacheDetail() and cacheDetailSize() like this:
cache_status_before = {'detail': db_conn.db().cacheDetail(),
                       'detail size': db_conn.db().cacheDetailSize()}
db_conn.cacheMinimize()
cache_status_after = {'detail': db_conn.db().cacheDetail(),
                      'detail size': db_conn.db().cacheDetailSize()}
print('{} -> {}'.format(cache_status_before, cache_status_after))
A typical output produced by the above lines is (Simulation is the class of my objects, which inherits from Persistent):
{'detail': [('BTrees.OOBTree.OOBucket', 62), ('boolsi.simulate.Simulation', 1758)],
'detail size': [{'connection': '<Connection at 7fe9340966a0>', 'ngsize': 933, 'size': 1820}]}
->
{'detail': [('BTrees.OOBTree.OOBucket', 3), ('boolsi.simulate.Simulation', 1748)],
'detail size': [{'connection': '<Connection at 7fe9340966a0>', 'ngsize': 0, 'size': 1751}]}
From my understanding, this output shows that ZODB ignores both the target cached object count and the target cache memory size, since it caches far more than 1 object (and certainly more than 1 byte). Any ideas why?