Memory-mapped files in Java

Published 2020-02-09 12:20

Question:

I've been trying to write some very fast Java code that has to do a lot of I/O. I'm using a memory mapped file that returns a ByteBuffer:

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileChannel.MapMode;

public static ByteBuffer byteBufferForFile(String fname) {
    // Map the entire file read-only. The mapping stays valid after the
    // channel is closed, so try-with-resources keeps the channel from leaking.
    // FileNotFoundException is a subclass of IOException, so one catch covers both.
    try (FileChannel vectorChannel = new FileInputStream(fname).getChannel()) {
        return vectorChannel.map(MapMode.READ_ONLY, 0, vectorChannel.size());
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
}

The problem I'm having is that the ByteBuffer.array() method (which should return a byte[]) doesn't work for read-only mappings. I want to write my code so that it works both with buffers constructed in memory and with buffers read from disk. But I don't want to wrap all of my byte[] buffers with ByteBuffer.wrap(), because I'm worried that this will slow things down. So I've been writing two versions of everything: one that takes a byte[], the other a ByteBuffer.

Should I just wrap everything? Or should I double-write everything?

Answer 1:

Did anyone actually check whether ByteBuffers created by memory mapping support invoking .array() in the first place, regardless of read-only/read-write mode?

From my poking around, as far as I can tell, the answer is NO. A ByteBuffer's ability to return a direct byte[] array via ByteBuffer.array() is governed by the presence of the backing field ByteBuffer.hb (a byte[]), which is always null when a MappedByteBuffer is created.
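
For what it's worth, this is easy to verify: hasArray() reports whether array() can be called without throwing. A minimal sketch, with "data.bin" standing in for any existing file:

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileChannel.MapMode;

public class MappedArrayCheck {
    public static void main(String[] args) throws IOException {
        try (FileChannel ch = new FileInputStream("data.bin").getChannel()) {
            ByteBuffer mapped = ch.map(MapMode.READ_ONLY, 0, ch.size());
            System.out.println(mapped.hasArray());  // false: no backing byte[] (hb is null)
            System.out.println(ByteBuffer.wrap(new byte[8]).hasArray());  // true: heap-backed
        }
    }
}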

Which kinda sucks for me, because I was hoping to do something similar to what the question author wanted to do.



Answer 2:

It's always good not to reinvent the wheel. Apache has provided a beautiful library for performing I/O operations. Take a look at http://commons.apache.org/io/description.html

Here's the scenario it serves. Suppose you have some data that you'd prefer to keep in memory, but you don't know ahead of time how much data there is going to be. If there's too much, you want to write it to disk instead of hogging memory, but you don't want to write to disk until you need to, because disk is slow and is a resource that needs tracking for cleanup.

So you create a temporary buffer and start writing to that. If and when you reach the threshold for what you want to keep in memory, you'll need to create a file, write out what's in the buffer to that file, and write all subsequent data to the file instead of the buffer.

That's what DeferredFileOutputStream does for you. It hides all the messing around at the switch-over point. All you need to do is create the deferred stream in the first place, configure the threshold, and then just write away to your heart's content.
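
For illustration, a minimal sketch of how that might look, assuming Commons IO is on the classpath; the 1 MB threshold and the spill file name are made up for this example:

import java.io.File;
import java.io.IOException;
import org.apache.commons.io.output.DeferredFileOutputStream;

public class DeferredWriteDemo {
    public static void main(String[] args) throws IOException {
        File spillFile = new File("spill.tmp");  // hypothetical spill target
        // Up to 1 MB stays in memory; anything beyond that spills to spillFile.
        DeferredFileOutputStream out = new DeferredFileOutputStream(1024 * 1024, spillFile);
        out.write(new byte[4096]);  // stand-in for the real data
        out.close();
        if (out.isInMemory()) {
            System.out.println("kept in memory: " + out.getData().length + " bytes");
        } else {
            System.out.println("spilled to " + out.getFile());
        }
    }
}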

EDIT: I just did a quick search on Google and found this link: http://lists.apple.com/archives/java-dev/2004/Apr/msg00086.html (Lightning fast file read/write). Very impressive.



Answer 3:

Wrapping a byte[] won't slow things down... there won't be any huge array copies or other little performance evils. From the Javadoc for java.nio.ByteBuffer.wrap():

Wraps a byte array into a buffer.

The new buffer will be backed by the given byte array; that is, modifications to the buffer will cause the array to be modified and vice versa. The new buffer's capacity and limit will be array.length, its position will be zero, and its mark will be undefined. Its backing array will be the given array, and its array offset will be zero.
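
To make the no-copy behavior concrete, a small sketch showing that the wrapped buffer and the array share storage:

import java.nio.ByteBuffer;

public class WrapDemo {
    public static void main(String[] args) {
        byte[] raw = {1, 2, 3, 4};
        ByteBuffer buf = ByteBuffer.wrap(raw);  // a view over raw; nothing is copied
        buf.put(0, (byte) 42);                  // write through the buffer...
        System.out.println(raw[0]);             // ...and the array sees it: 42
        raw[1] = 7;                             // write through the array...
        System.out.println(buf.get(1));         // ...and the buffer sees it: 7
    }
}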



Answer 4:

Using the ByteBuffer.wrap() functionality does not impose a high burden: it allocates a small object and initializes a few fields. Writing your algorithm against ByteBuffer is thus your best bet if you need to work with read-only files.
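
To show the shape of such code, here's a hypothetical checksum written against ByteBuffer; the same method serves a wrapped heap array and a memory-mapped file alike:

import java.nio.ByteBuffer;

public class BufferSum {
    // Works for any ByteBuffer: heap-backed, direct, or memory-mapped.
    static long sum(ByteBuffer buf) {
        long total = 0;
        for (int i = 0; i < buf.limit(); i++) {
            total += buf.get(i) & 0xFF;  // absolute get: leaves the buffer's position untouched
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(ByteBuffer.wrap(new byte[] {1, 2, 3})));  // 6
        // sum(byteBufferForFile("data.bin")) would behave identically on a mapping.
    }
}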