I'm developing a Java REST API that uses client data from a PostgreSQL database.
The numbers:

- About 600 clients at the beginning
- Some of them making requests every few seconds
Because clients pay per request, we need to check whether their number of successful requests has reached their limit. Updating the 'hitsCounter' field in PostgreSQL after every request would be bad for performance, so we are thinking about implementing a caching layer with Redis.
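For context, the per-request write we are trying to avoid looks roughly like this; the table and column names are just illustrative assumptions:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class HitCounterDao {

    private final DataSource dataSource;

    public HitCounterDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // The write path we want to avoid: one UPDATE round trip per client request.
    public void incrementHits(long clientId) throws SQLException {
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                 "UPDATE clients SET hits_counter = hits_counter + 1 WHERE client_id = ?")) {
            ps.setLong(1, clientId);
            ps.executeUpdate();
        }
    }
}
```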
The idea: after a client makes their first request, we retrieve their data from PostgreSQL and store it in the Redis cache. We then work with this cached data, for example incrementing the 'hitsCounter' key value, until the client stops making requests. In parallel, every few minutes a background process persists the data from the Redis cache back to the database tables, so in the end the updated data is back in PostgreSQL and we can deal with it in the future.
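Here is a minimal sketch of that request path using Jedis; the key layout, the ClientUsage record, and the loadUsageFromPostgres helper are my assumptions, not an established API:

```java
import redis.clients.jedis.Jedis;

public class RequestLimiter {

    /** Hypothetical value object for a client's persisted usage data. */
    public record ClientUsage(long hits, long limit) {}

    private final Jedis jedis;

    public RequestLimiter(Jedis jedis) {
        this.jedis = jedis;
    }

    /** Returns true if this request is still within the client's paid limit. */
    public boolean allowRequest(long clientId) {
        String hitsKey = "client:" + clientId + ":hits";   // key layout is an assumption
        String limitKey = "client:" + clientId + ":limit";

        // Lazy-load: on a client's first request, seed the cache from PostgreSQL.
        if (!jedis.exists(hitsKey)) {
            ClientUsage usage = loadUsageFromPostgres(clientId);
            // SETNX so a racing first request doesn't overwrite a counter already in flight.
            jedis.setnx(hitsKey, String.valueOf(usage.hits()));
            jedis.set(limitKey, String.valueOf(usage.limit()));
        }

        // INCR is atomic, so concurrent requests from the same client are counted safely.
        long hits = jedis.incr(hitsKey);
        // Refresh the TTL on every hit; an idle client's TTL then runs down,
        // which is what the background job (sketched below) watches for.
        jedis.expire(hitsKey, 300); // 300 s idle window, an assumed value

        long limit = Long.parseLong(jedis.get(limitKey));
        return hits <= limit;
    }

    // Placeholder for the real PostgreSQL lookup (e.g. a SELECT via JDBC or an ORM).
    private ClientUsage loadUsageFromPostgres(long clientId) {
        throw new UnsupportedOperationException("wire this to your DAO");
    }
}
```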
I think this would clearly improve performance, but I'm not sure about the "background process". One option is to check the TTL of the cached elements and, if it has dropped below some threshold (meaning the client has stopped making requests), persist that data.
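A sketch of what that TTL-based flush could look like, under the same assumed key layout; persistToPostgres is hypothetical:

```java
import java.util.Set;
import redis.clients.jedis.Jedis;

public class FlushJob implements Runnable {

    private static final long TTL_THRESHOLD_SECONDS = 60; // assumed cutoff for "idle"

    private final Jedis jedis;

    public FlushJob(Jedis jedis) {
        this.jedis = jedis;
    }

    @Override
    public void run() {
        // KEYS blocks Redis while it scans; fine for ~600 clients, prefer SCAN beyond that.
        Set<String> hitKeys = jedis.keys("client:*:hits");
        for (String key : hitKeys) {
            long ttl = jedis.ttl(key); // seconds left; -1 = no expiry, -2 = key gone
            if (ttl >= 0 && ttl < TTL_THRESHOLD_SECONDS) {
                // The client looks idle: write the counter back, then drop the cache entry.
                long hits = Long.parseLong(jedis.get(key));
                persistToPostgres(key, hits);
                jedis.del(key);
            }
        }
    }

    // Hypothetical: UPDATE clients SET hits_counter = ? WHERE client_id = ?
    private void persistToPostgres(String key, long hits) {
    }
}
```

You could run this from a ScheduledExecutorService, e.g. `scheduler.scheduleAtFixedRate(new FlushJob(jedis), 5, 5, TimeUnit.MINUTES)`. One caveat with the TTL approach: the expiry window has to be comfortably longer than the job interval, otherwise an idle client's key can expire before the job persists it and those hits are lost.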
I would love to hear some opinions about this. Is this a good idea? Do you know of any better alternatives?