Why is the Simple Least Recently Used Cache Mechanism used?

Posted 2019-06-19 02:40

Question:

I am using JProfiler to inspect a Java microservice while I simulate concurrent users with JMeter. In JProfiler I can see blocked threads. Navigating to the find() method, I realized it is declared with the synchronized keyword.

In my opinion this method causes the blocked-threads problem. But why is it used? Can I disable this cache mechanism in the microservice? The microservice is written in Java and uses Spring and Spring Boot.

Thank you

I added a screenshot from the same JProfiler snapshot's Monitor History to show the time spent in the ResolvedTypeCache class. Sometimes the time is small, but sometimes it is huge.

Answer 1:

Your conclusion seems very wrong to me, especially when you imply that this is either bad or that there is a potential deadlock.

The fact that there are synchronized methods inside that class is no indication of a deadlock. It just means that multiple threads wait on a single lock - this is what synchronized does, after all. Also look at those times: they appear to be microseconds, and the longest a thread stays there is about 4000, which is roughly 4 ms - not that much.

Since this is an internal library, there is not much you can do about it. Maybe suggest to the maintainers that they use a ConcurrentHashMap, which could improve performance, or better, write a patch yourself.
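To illustrate what such a patch might look like, here is a minimal sketch of a ConcurrentHashMap-based cache. The class name, the find() signature, and the resolver parameter are hypothetical, not the library's actual API; note also that, unlike an LRU cache, this map grows without bound:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch of the suggested patch: a mostly lock-free lookup
// path via ConcurrentHashMap. Caveat: this has NO eviction, so it is not
// a drop-in replacement for an LRU cache.
class ConcurrentTypeCache<K, V> {
    private final Map<K, V> map = new ConcurrentHashMap<>();

    // computeIfAbsent resolves each key at most once,
    // without serializing all readers behind one lock
    V find(K key, Function<K, V> resolver) {
        return map.computeIfAbsent(key, resolver);
    }
}
```

Lookups of already-cached keys proceed concurrently, which is exactly the contention the question's profiler screenshots point at.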



Answer 2:

Why is LRU used? Presumably because there's something worth caching.

Why is it synchronized? Because the LinkedHashMap that's being used as a cache here is not thread-safe. It does provide the idiomatic LRU mechanism though.

It could be replaced with a ConcurrentMap to mitigate the synchronization, but then you'd have a constantly growing non-LRU cache and that's not at all the same thing.

Now there's not much you can do about it. The best idea might be to contact the devs and let them know about this. All in all, the library may just not be suitable for the amount of traffic you're putting through it, or you may be simulating the kind of traffic that exhibits pathological behaviour, or you may be overestimating the impact of this (no offense, I'm just very Mulderesque about SO posts, i.e. "trust no one").

Finally, uncontended synchronization is cheap, so if there's a possibility to divide traffic among multiple instances of the cache, it may affect performance in some way (not necessarily positively). I don't know the architecture of the library, though, so it may be completely impossible.
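Dividing traffic among multiple cache instances could be sketched as lock striping: several independently synchronized LRU segments, chosen by key hash, so threads only contend when they hit the same stripe. This is a hypothetical illustration, not something the library offers:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical striped cache: contention is spread across 'stripes'
// independently locked LRU segments instead of one global lock.
class StripedLruCache<K, V> {
    private final Map<K, V>[] segments;

    @SuppressWarnings("unchecked")
    StripedLruCache(int stripes, final int maxEntriesPerStripe) {
        segments = new Map[stripes];
        for (int i = 0; i < stripes; i++) {
            segments[i] = new LinkedHashMap<K, V>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    return size() > maxEntriesPerStripe; // per-stripe LRU eviction
                }
            };
        }
    }

    private Map<K, V> segmentFor(K key) {
        // mask off the sign bit so the index is always non-negative
        return segments[(key.hashCode() & 0x7fffffff) % segments.length];
    }

    V find(K key) {
        Map<K, V> seg = segmentFor(key);
        synchronized (seg) { // blocks only threads hashing to this stripe
            return seg.get(key);
        }
    }

    void put(K key, V value) {
        Map<K, V> seg = segmentFor(key);
        synchronized (seg) {
            seg.put(key, value);
        }
    }
}
```

The trade-off: eviction becomes approximate (LRU per stripe, not globally), which is one reason such a change would need the library authors' buy-in.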