Question:
For some caching I'm thinking of doing for an upcoming project, I've been thinking about Java serialization. Namely, should it be used?
Now I've previously written custom serialization and deserialization (Externalizable) for various reasons in years past. These days interoperability has become even more of an issue, and I can foresee a need to interact with .NET applications, so I've thought of using a platform-independent solution.
Has anyone had any experience with high-performance use of GPB? How does it compare in terms of speed and efficiency with Java's native serialization? Alternatively, are there any other schemes worth considering?
Answer 1:
I haven't compared Protocol Buffers with Java's native serialization in terms of speed, but for interoperability Java's native serialization is a serious no-no. It's also not going to be as efficient in terms of space as Protocol Buffers in most cases. Of course, it's somewhat more flexible in terms of what it can store, and in terms of references etc. Protocol Buffers is very good at what it's intended for, and when it fits your need it's great - but there are obvious restrictions due to interoperability (and other things).
I've recently posted a Protocol Buffers benchmarking framework in Java and .NET. The Java version is in the main Google project (in the benchmarks directory), the .NET version is in my C# port project. If you want to compare PB speed with Java serialization speed you could write similar classes and benchmark them. If you're interested in interop though, I really wouldn't give native Java serialization (or .NET native binary serialization) a second thought.
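For example, a minimal harness for the Java-native side might look like this (a sketch; the Payload class is a made-up stand-in, and for a fair comparison you would benchmark the same logical data as your generated protobuf message):

import java.io.*;

public class SerializationBench {
    // Made-up payload; substitute the same logical data as your protobuf message.
    static class Payload implements Serializable {
        private static final long serialVersionUID = 1L;
        String name = "example";
        int[] values = new int[64];
    }

    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Payload payload = new Payload();
        byte[] bytes = serialize(payload);
        // Warm up so the JIT has compiled the hot paths before we time anything.
        for (int i = 0; i < 10000; i++) { serialize(payload); deserialize(bytes); }

        int iterations = 100000;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) serialize(payload);
        long serNanos = System.nanoTime() - start;

        start = System.nanoTime();
        for (int i = 0; i < iterations; i++) deserialize(bytes);
        long deserNanos = System.nanoTime() - start;

        System.out.printf("size=%d bytes, serialize=%.2f us/op, deserialize=%.2f us/op%n",
                bytes.length, serNanos / 1000.0 / iterations, deserNanos / 1000.0 / iterations);
    }
}

The protobuf side would time the generated message's toByteArray() and parseFrom(byte[]) in the same kind of loop.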
There are other options for interoperable serialization besides Protocol Buffers though - Thrift, JSON and YAML spring to mind, and there are doubtless others.
EDIT: Okay, with interop not being so important, it's worth trying to list the different qualities you want out of a serialization framework. One thing you should think about is versioning - this is another thing that PB is designed to handle well, both backwards and forwards (so new software can read old data and vice versa) - when you stick to the suggested rules, of course :)
Having tried to be cautious about PB's performance versus native serialization, I really wouldn't be surprised to find that PB was faster anyway. If you have the chance, use the server VM (the -server flag on the HotSpot launcher) - my recent benchmarks showed the server VM to be over twice as fast at serializing and deserializing the sample data. I think the PB code suits the server VM's JIT very nicely :)
Just as sample performance figures, serializing and deserializing two messages (one 228 bytes, one 84750 bytes) I got these results on my laptop using the server VM:
Benchmarking benchmarks.GoogleSize$SizeMessage1 with file google_message1.dat
Serialize to byte string: 2581851 iterations in 30.16s; 18.613789MB/s
Serialize to byte array: 2583547 iterations in 29.842s; 18.824497MB/s
Serialize to memory stream: 2210320 iterations in 30.125s; 15.953759MB/s
Deserialize from byte string: 3356517 iterations in 30.088s; 24.256632MB/s
Deserialize from byte array: 3356517 iterations in 29.958s; 24.361889MB/s
Deserialize from memory stream: 2618821 iterations in 29.821s; 19.094952MB/s
Benchmarking benchmarks.GoogleSpeed$SpeedMessage1 with file google_message1.dat
Serialize to byte string: 17068518 iterations in 29.978s; 123.802124MB/s
Serialize to byte array: 17520066 iterations in 30.043s; 126.802376MB/s
Serialize to memory stream: 7736665 iterations in 30.076s; 55.93307MB/s
Deserialize from byte string: 16123669 iterations in 30.073s; 116.57947MB/s
Deserialize from byte array: 16082453 iterations in 30.109s; 116.14243MB/s
Deserialize from memory stream: 7496968 iterations in 30.03s; 54.283176MB/s
Benchmarking benchmarks.GoogleSize$SizeMessage2 with file google_message2.dat
Serialize to byte string: 6266 iterations in 30.034s; 16.826494MB/s
Serialize to byte array: 6246 iterations in 30.027s; 16.776697MB/s
Serialize to memory stream: 6042 iterations in 29.916s; 16.288969MB/s
Deserialize from byte string: 4675 iterations in 29.819s; 12.644595MB/s
Deserialize from byte array: 4694 iterations in 30.093s; 12.580387MB/s
Deserialize from memory stream: 4544 iterations in 29.579s; 12.389998MB/s
Benchmarking benchmarks.GoogleSpeed$SpeedMessage2 with file google_message2.dat
Serialize to byte string: 39562 iterations in 30.055s; 106.16416MB/s
Serialize to byte array: 39715 iterations in 30.178s; 106.14035MB/s
Serialize to memory stream: 34161 iterations in 30.032s; 91.74085MB/s
Deserialize from byte string: 36934 iterations in 29.794s; 99.98019MB/s
Deserialize from byte array: 37191 iterations in 29.915s; 100.26867MB/s
Deserialize from memory stream: 36237 iterations in 29.846s; 97.92251MB/s
The "speed" vs "size" is whether the generated code is optimised for speed or code size. (The serialized data is the same in both cases. The "size" version is provided for the case where you've got a lot of messages defined and don't want to take a lot of memory for the code.)
As you can see, for the smaller message it can be very fast - over 500 small messages serialized or deserialized per millisecond (for example, 17,520,066 iterations in 30.043s works out to roughly 580 per millisecond). Even with the 87K message it's taking less than a millisecond per message.
Answer 2:
One more data point: this project:
http://code.google.com/p/thrift-protobuf-compare/
gives some idea of the expected performance for small objects, covering Java serialization as well as PB.
Results vary a lot depending on your platform, but there are some general trends.
Answer 3:
If you are choosing between PB and native Java serialization on speed and efficiency, just go for PB.
- PB was designed with exactly those goals in mind. See http://code.google.com/apis/protocolbuffers/docs/overview.html
- PB data is very small, while Java serialization tends to replicate the whole object, including its signature. Why should the class name and field names be serialized with every message when the receiver already knows them inside out? (See the sketch after this list.)
- Think about cross-language development: it gets hard if one side uses Java and the other uses C++...
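To see that overhead concretely, here is a small sketch (the Point class is invented purely for illustration) that prints the size of a Java-serialized object whose actual payload is just two ints:

import java.io.*;

public class SizeDemo {
    // Invented class used only to show the metadata overhead.
    static class Point implements Serializable {
        private static final long serialVersionUID = 1L;
        int x = 1;
        int y = 2;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new Point());
        }
        // The payload is two ints (8 bytes), but the stream also carries the
        // stream header and the full class descriptor, so expect dozens of
        // extra bytes. An equivalent protobuf message (two small varint
        // fields) would typically be around 4 bytes.
        System.out.println("Serialized size: " + bos.size() + " bytes");
    }
}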
Some developers suggest Thrift, but I would use Google PB because "I believe in google" :-).. Anyway, it's worth a look:
http://stuartsierra.com/2008/07/10/thrift-vs-protocol-buffers
Answer 4:
You might also have a look at FST, a drop-in replacement for built-in JDK serialization that should be faster and have smaller output.
Rough estimates from the frequent benchmarking I have done in recent years:
100% = binary/struct-based approaches (e.g. SBE, fst-structs)
- inconvenient
- post-processing (building up "real" objects on the receiver side) may eat up the performance advantage, and is never included in benchmarks
~10%-35% protobuf & derivatives
~10%-30% fast serializers such as FST and KRYO (see the sketch below)
- convenient; deserialized objects can most often be used directly without additional manual translation code
- can be tuned for performance (annotations, class registration)
- preserve links in the object graph (no object is serialized twice)
- can handle cyclic structures
- generic solutions; FST is fully compatible with JDK serialization
~2%-15% JDK serialization
~1%-15% fast JSON (e.g. Jackson)
- cannot handle arbitrary object graphs, only a small subset of Java data structures
- no reference restoring
0.001%-1% full-graph JSON/XML (e.g. JSON.io)
These numbers are meant to give a very rough order-of-magnitude impression.
Note that performance depends A LOT on the data structures being serialized/benchmarked, so single simple-class benchmarks are mostly useless (though popular: e.g. ignoring Unicode, no collections, ...).
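For reference, a minimal FST round trip looks roughly like this (a sketch assuming a recent FST release where the API lives in org.nustaq.serialization; check the docs for the version you use):

import org.nustaq.serialization.FSTConfiguration;
import java.util.ArrayList;
import java.util.Arrays;

public class FstDemo {
    // FST recommends creating one configuration and reusing it; it is
    // relatively expensive to set up and safe to share across threads.
    static final FSTConfiguration conf = FSTConfiguration.createDefaultConfiguration();

    public static void main(String[] args) {
        ArrayList<String> original = new ArrayList<>(Arrays.asList("a", "b", "c"));
        byte[] bytes = conf.asByteArray(original);  // serialize
        Object copy = conf.asObject(bytes);         // deserialize
        System.out.println(copy + " (" + bytes.length + " bytes)");
    }
}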
see also
http://java-is-the-new-c.blogspot.de/2014/12/a-persistent-keyvalue-server-in-40.html
http://java-is-the-new-c.blogspot.de/2013/10/still-using-externalizable-to-get.html
Answer 5:
What do you mean by high performance? If millisecond serialization is enough, just use the simplest approach that works. If you want sub-millisecond, you are likely to need a binary format. If you want to get much below 10 microseconds, you are likely to need custom serialization.
I haven't seen many benchmarks for serialization/deserialization, but few libraries manage less than 200 microseconds for a serialize/deserialize round trip.
Platform-independent formats come at a cost (in effort on your part and in latency), so you may have to decide whether you want performance or platform independence. However, there is no reason you cannot have both, as a configuration option which you switch between as required.
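That last point is cheap to set up: hide the wire format behind a small interface and pick the implementation from configuration. A rough sketch (the Serializer interface and the wire.format property are invented for illustration):

import java.io.*;

// Invented abstraction so the wire format becomes a deployment choice.
interface Serializer {
    byte[] toBytes(Object o) throws IOException;
}

class JdkSerializer implements Serializer {
    public byte[] toBytes(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }
}

// Stand-in for a protobuf-backed implementation; real code would call
// the generated message's toByteArray().
class ProtobufSerializer implements Serializer {
    public byte[] toBytes(Object o) throws IOException {
        throw new UnsupportedOperationException("wire up generated protobuf classes here");
    }
}

class Serializers {
    static Serializer fromConfig() {
        // e.g. pass -Dwire.format=protobuf on the command line to switch
        String format = System.getProperty("wire.format", "jdk");
        return "protobuf".equals(format) ? new ProtobufSerializer() : new JdkSerializer();
    }
}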
Answer 6:
Here is the off-the-wall suggestion of the day :-) (you just tweaked something in my head that I now want to try)...
If you can go for the whole caching solution via this, it might work: Project Darkstar. It is designed as a very high-performance game server, specifically so that reads are fast (and therefore good for a cache). It has Java and C APIs, so I believe (though it has been a long time since I looked at it, and I wasn't thinking of this then) that you could save objects with Java and read them back in C, and vice versa.
If nothing else it'll give you something to read up on today :-)
Answer 7:
For wire-friendly serialisation, consider using the Externalizable interface. Used cleverly, you have intimate knowledge of how to optimally marshal and unmarshal specific fields. That said, you'll need to manage the versioning of each object correctly: un-marshalling is easy, but re-marshalling a V2 object when your code supports V1 will either break, lose information, or, worse, corrupt data in a way your apps aren't able to process correctly. If you're looking for an optimal path, be aware that no library will solve your problem without some compromises. Generally, libraries will fit most use cases and come with the added benefit that they'll adapt and improve over time without your input, if you've opted for an active open-source project. But they might also add performance problems, introduce bugs, and even fix bugs that hadn't affected you yet!
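One common way to keep Externalizable versioning manageable, sketched here with an invented Account class, is to write an explicit version number before the fields and branch on it when reading:

import java.io.*;

public class Account implements Externalizable {
    private static final int VERSION = 2;

    private String owner = "";        // since V1
    private long balanceCents;        // since V1
    private String currency = "USD";  // added in V2

    public Account() {}  // Externalizable requires a public no-arg constructor

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(VERSION);  // version tag goes first
        out.writeUTF(owner);
        out.writeLong(balanceCents);
        out.writeUTF(currency);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        int version = in.readInt();
        owner = in.readUTF();
        balanceCents = in.readLong();
        // A V1 stream simply ends here; supply a default for newer fields.
        currency = (version >= 2) ? in.readUTF() : "USD";
    }
}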