Question:
I'm working on a Java server that handles a lot of very dense traffic. The server accepts packets from clients (often many megabytes) and forwards them to other clients. The server never explicitly stores any of the incoming/outgoing packets, yet it continually runs into OutOfMemoryError.
I added System.gc() calls to the message-passing component of the server, hoping that memory would be freed, and I set the JVM heap size to a gigabyte. I'm still getting just as many errors.
So my question is this: how can I make sure that the multi-megabyte messages aren't being retained indefinitely (despite no longer being needed)? Is there a way for me to call "delete" on these objects to guarantee they are not using my heap space?
try {
    while (true) {
        int r = generator.nextInt(100); // random number in [0, 100)
        Object o = readFromServer.readObject();
        sum++;
        // If the random number exceeds the drop rate, forward the object
        // to the client; otherwise it is dropped.
        if (r > dropRate) {
            writeToClient.writeObject(o);
            writeToClient.flush();
            numOfSend++;
            System.out.printf("No. %d send\n", sum);
        }
    }
}
Answer 1:
Looking at your code: are your ObjectInput/OutputStream instances newly created each time a packet arrives or is sent, and if so, are they closed properly? If not, do you call reset() after each read/write? The object stream classes keep a reference to every object they have seen (in order to avoid resending the same object each time it is referenced), which prevents those objects from being garbage collected. I had exactly that problem about ten years ago - actually the first time I had to use a profiler to diagnose a memory leak...
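A minimal sketch of the reset() fix described above (the Packet class and sizes are illustrative, not from the original code): calling reset() after each write clears the stream's handle table, so each packet becomes eligible for garbage collection as soon as it has been sent.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

public class ResetDemo {
    static class Packet implements Serializable {
        final byte[] payload = new byte[1_000_000]; // ~1 MB message
    }

    // Write n packets, calling reset() after each one so the stream's
    // internal handle table does not pin every packet in memory.
    static int writePackets(int n) {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(sink)) {
            for (int i = 0; i < n; i++) {
                out.writeObject(new Packet());
                out.flush();
                out.reset(); // drop back-references; packet is now collectible
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return sink.size();
    }
}
```

Without the reset() call, every Packet written stays reachable through the stream until the stream itself is closed.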
Answer 2:
Object streams hold references to every object written to or read from them, because the serialization protocol allows back references to objects that appeared earlier in the stream. You might still be able to use this design by calling writeUnshared/readUnshared instead of writeObject/readObject; I think, but am not sure, that this prevents the streams from keeping a reference to the object.
As Cowan says, the reset() method is also in play here. The safest thing to do is probably to call writeUnshared immediately followed by reset() when writing to your ObjectOutputStreams.
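A small sketch of the difference (the Msg class is a hypothetical stand-in): writing the same object twice with writeObject emits only a tiny back-reference the second time, because the stream remembers it, whereas writeUnshared plus reset() serializes it in full each time and retains nothing.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

public class UnsharedDemo {
    static class Msg implements Serializable {
        final byte[] data = new byte[100_000];
    }

    // writeObject twice: the second write is only a small back-reference,
    // because the stream remembers every object it has seen.
    static int sharedSize(Msg m) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(m);
            out.writeObject(m);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.size();
    }

    // writeUnshared plus reset(): the object (and, thanks to reset(), its
    // sub-objects too) is serialized in full each time, and the stream's
    // handle table retains nothing between writes.
    static int unsharedSize(Msg m) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeUnshared(m);
            out.reset();
            out.writeUnshared(m);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.size();
    }
}
```

Note that writeUnshared alone only applies to the top-level object; sub-objects like the byte[] field still go through the shared mechanism, which is why combining it with reset() is the safer option.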
Answer 3:
When the JVM is on the edge of an OutOfMemoryError, it will run the GC first, so calling System.gc() yourself beforehand isn't going to fix the problem. The problem needs to be fixed somewhere else. There are basically two ways:
- Write memory-efficient code and/or fix memory leaks in your code.
- Give the JVM more memory.
Using a Java profiler can give you a lot of information about memory usage and potential memory leaks.
Update: as per your edit with more information about the code causing this problem, have a look at Geoff Reedy's answer in this topic, which suggests using ObjectInputStream#readUnshared() and ObjectOutputStream#writeUnshared() instead. The (linked) Javadoc also explains it pretty well.
Answer 4:
System.gc() is only a recommendation to the Java Virtual Machine. You can call it, but the JVM may or may not actually run garbage collection.
The OutOfMemoryError may be caused by two things: either you keep (unwanted) references to your objects, or you are accepting too many packets.
The first case can be analyzed with a profiler, where you try to find out which references are still live. A good indication of a memory leak is growing memory consumption in your server: if every additional request makes your Java process grow a little, chances are you are keeping references somewhere (jconsole might be a good start).
If you are accepting more data than you can handle, you will have to block additional requests until others are completed.
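The "block additional requests" idea can be sketched with a bounded BlockingQueue (class and capacity are illustrative assumptions, not from the original server): a full queue makes the producer wait instead of letting unprocessed packets pile up on the heap.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BackpressureDemo {
    // At most 8 packets in flight: put() blocks the reading thread when
    // the queue is full, so memory use stays bounded instead of growing
    // until the JVM hits OutOfMemoryError.
    private final BlockingQueue<byte[]> inFlight = new ArrayBlockingQueue<>(8);

    void accept(byte[] packet) {
        try {
            inFlight.put(packet); // blocks while 8 packets are pending
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }

    byte[] nextToForward() {
        try {
            return inFlight.take(); // blocks until a packet is available
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }
}
```

The right capacity depends on packet size and available heap; 8 multi-megabyte packets is just an example.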
Answer 5:
You can't force garbage collection explicitly, but that is not the problem here. Perhaps you are storing references to these messages somewhere. Trace where they are handled and make sure no object holds a reference to them after they are used.
To get a better idea of what the best practices are, read Effective Java, chapter 2 - it's about "Creating and Destroying Objects"
Answer 6:
You cannot explicitly force deletion, but you CAN ensure that references to messages are not held, by keeping only one direct reference in memory and using Reference objects to hold garbage-collectible references to the rest.
What about using a small, bounded-size queue for messages to process, fed by a secondary SoftReference queue? This way you guarantee that processing will proceed, but you also won't get out-of-memory errors if messages are too big (the softly referenced messages will be cleared by the GC in that case).
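One way the two-tier queue above might look (this class and its names are hypothetical, a sketch of the idea rather than a production design): a small hard-referenced head keeps processing going, while the overflow tier holds only SoftReferences that the GC may clear under memory pressure.

```java
import java.lang.ref.SoftReference;
import java.util.ArrayDeque;
import java.util.Deque;

public class SoftOverflowQueue<T> {
    // Small hard-referenced head; overflow holds only SoftReferences,
    // which the GC may clear under memory pressure (those messages are
    // simply dropped).
    private final Deque<T> head = new ArrayDeque<>();
    private final Deque<SoftReference<T>> overflow = new ArrayDeque<>();
    private final int headCapacity;

    public SoftOverflowQueue(int headCapacity) {
        this.headCapacity = headCapacity;
    }

    public void add(T msg) {
        if (head.size() < headCapacity) {
            head.addLast(msg);
        } else {
            overflow.addLast(new SoftReference<>(msg));
        }
    }

    public T poll() {
        T next = head.pollFirst();
        // Refill the head from the overflow tier, skipping any entries
        // the garbage collector has already cleared.
        while (head.size() < headCapacity && !overflow.isEmpty()) {
            T promoted = overflow.pollFirst().get();
            if (promoted != null) {
                head.addLast(promoted);
            }
        }
        return next;
    }
}
```

The trade-off is explicit: under memory pressure the server drops messages rather than dying with OutOfMemoryError.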
Answer 7:
You can tune garbage collection in Java, but you cannot force it.
Answer 8:
If you're getting OutOfMemoryErrors, something is clearly still holding a reference to these objects. You can use a tool such as jhat to find out where these references are sticking around.
Answer 9:
You need to find out if you are holding onto objects longer than necessary. The first step would be to get a profiler on the case and look at the heap and see why objects aren't being collected.
Although you've given the JVM 1 GB, your young generation may be too small: if lots of objects are created very quickly, they get forced into the older generations, where they won't be removed as quickly.
Some useful info on GC tuning:
http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html
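As a hedged illustration of the tuning idea above (the sizes and the server.jar name are placeholders; the right values can only come from profiling), HotSpot lets you pin the young generation size so short-lived packet objects die there instead of being promoted:

```shell
# Illustrative HotSpot flags only -- actual sizes depend on profiling.
# A larger, fixed-size young generation keeps short-lived packet objects
# out of the old generation; -verbose:gc shows what the collector does.
java -Xms1g -Xmx1g \
     -XX:NewSize=384m -XX:MaxNewSize=384m \
     -verbose:gc \
     -jar server.jar
```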
Answer 10:
"The server accepts packets from clients (often many megabytes) and forwards them to other clients."
Your code probably receives the "packets" completely before forwarding them. That means it needs enough memory to store each packet entirely until it has been forwarded, and when those packets are "many megabytes" large, that means you need a lot of memory indeed. It also adds unnecessary latency.
It's possible that you have a memory leak as well, but if the above is true, this "store and forward" design is your biggest problem. You can probably cut memory usage by 95% if you redesign the app to not receive packets completely and instead stream them directly to the clients, i.e. read only a small part of the package at a time and transmit that to the clients immediately. It's not difficult to do this in a way that looks exactly the same to the clients as when you do store-and-forward.
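The streaming approach described above can be sketched as a simple byte relay (the class name is an illustrative assumption): only a fixed-size buffer is ever held in memory, no matter how large the "packet" is.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;

public class StreamRelay {
    // Copy bytes from one stream to another through a fixed 8 KB buffer,
    // so memory use is constant regardless of the size of the "packet".
    static long relay(InputStream from, OutputStream to) {
        byte[] buf = new byte[8192];
        long total = 0;
        try {
            int n;
            while ((n = from.read(buf)) != -1) {
                to.write(buf, 0, n); // forward immediately, don't accumulate
                total += n;
            }
            to.flush();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return total;
    }
}
```

Note that this only works if the server doesn't need to deserialize and inspect each object; if it does, the object-stream fixes in the earlier answers apply instead.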
Answer 11:
Manually triggering System.gc() is not a good answer, as others have posted here. It's not guaranteed to run, and it triggers a full GC, which is likely to hang your server for a long time while it runs (over a second if you're giving your server a GB of RAM; I've seen several-minute pauses on larger systems). You could tune your GC, which will certainly help, but it won't completely fix the problem.
If you're reading objects from one stream and then writing them out to another, then there's a point at which you're holding the entire object in memory. If these objects are, as you state, large, that could be your problem. Try to rewrite your I/O so that you read bytes from one stream and write them to another without ever holding the complete object (although I can't see how this would work with object serialization/deserialization if you need to verify/validate the objects).
Answer 12:
Just to add to all the previous replies: System.gc() is not a command to the JVM to initiate garbage collection; it is a mild suggestion and does not guarantee that anything will happen. The JVM specification leaves it to vendors to decide what to do on gc calls. Vendors may even choose to do nothing at all!
Answer 13:
You mention you explicitly need the whole received packet before you can send it? Well, that doesn't mean you need to store it all in memory, does it? Is it a feasible architectural change to save received packets to external storage (maybe a RAM disk, or a DB if even an SSD is too slow) and then pipe them directly to the recipient without ever loading them fully into memory?
Answer 14:
If your server runs for at least a few minutes before it dies, you might want to try running it under VisualVM. You'll at least get a better idea of how fast the heap is growing and what kinds of objects are in it.