Java's MappedByteBuffer is limited to 2 GB (Integer.MAX_VALUE bytes), which makes it tricky to use for mapping big files. The usual recommended approach is to use an array of MappedByteBuffer instances and index into it like this:
    private static final long PAGE_SIZE = Integer.MAX_VALUE;
    private MappedByteBuffer[] buffers;

    private int getPage(long offset) {
        return (int) (offset / PAGE_SIZE);
    }

    private int getIndex(long offset) {
        return (int) (offset % PAGE_SIZE);
    }

    public byte get(long offset) {
        return buffers[getPage(offset)].get(getIndex(offset));
    }
This works for single bytes, but it requires rewriting a lot of code if you want to handle reads/writes that are wider than one byte and may cross a page boundary (getLong() or get(byte[])), as sketched below.
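For instance, a bulk get(byte[]) has to split the copy manually whenever the requested range spans two mapped pages. A rough sketch of the idea (not production code, assumes the getPage()/getIndex() helpers and buffers field above, plus java.nio.ByteBuffer):

    // Sketch: read dst.length bytes starting at offset, possibly spanning several pages.
    public void get(long offset, byte[] dst) {
        int copied = 0;
        while (copied < dst.length) {
            int page = getPage(offset + copied);
            int index = getIndex(offset + copied);
            // duplicate() so the shared buffer's position is left untouched
            ByteBuffer buf = buffers[page].duplicate();
            buf.position(index);
            int chunk = Math.min(dst.length - copied, buf.remaining());
            buf.get(dst, copied, chunk);
            copied += chunk;
        }
    }

Every multi-byte accessor (getLong(), getInt(), putLong(), ...) needs the same kind of boundary-aware treatment, which is exactly the boilerplate I would like to avoid writing myself.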
The question: what is your best practice for this kind of scenario? Do you know of any working solution/code that can be reused without reinventing the wheel?