How to atomically update a counter shared between Docker containers

Posted 2019-04-04 09:27

I have a simple C++ service (API endpoint) that increments a counter every time the API is called. When the caller posts data to http://10.0.0.1/add, the counter has to be incremented by 1 and the new value returned to the caller.
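
For a single instance this is straightforward; a minimal sketch of the handler logic (the function name and HTTP wiring are hypothetical, assuming one process serves all requests):

#include <atomic>
#include <cstdint>

// One shared counter for all threads of this single service instance.
static std::atomic<std::uint64_t> counter{0};

// Called for each POST to /add; returns the new counter value.
std::uint64_t handle_add() {
    // fetch_add is atomic across threads, but only within this process;
    // it does not help once a second instance (container) is running.
    return counter.fetch_add(1, std::memory_order_relaxed) + 1;
}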

Things get more complicated when the service is dockerized. When two instances of the same service run, the addition has to be done atomically, i.e. the counter value is stored in a database and each Docker instance has to acquire a lock, read the old value, add one, return the new value to the caller, and unlock.

When the instances are processes on the same Linux machine, we used shared memory to efficiently lock, read, write and unlock the shared data, and the performance was acceptable. However, when we use Docker containers and a database, the results are correct but the performance is low.
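
For reference, a minimal sketch of that shared-memory approach, assuming POSIX shared memory and a process-shared mutex (the segment name is illustrative, and the create/initialize race is simplified):

#include <cstdint>
#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <unistd.h>

// Counter plus a process-shared mutex, placed in POSIX shared memory.
struct SharedCounter {
    pthread_mutex_t lock;
    std::uint64_t value;
};

SharedCounter* open_counter() {
    // Try to create the segment; if it already exists, just open it.
    bool creator = true;
    int fd = shm_open("/api_counter", O_CREAT | O_EXCL | O_RDWR, 0600);
    if (fd < 0) { creator = false; fd = shm_open("/api_counter", O_RDWR, 0600); }
    if (creator) ftruncate(fd, sizeof(SharedCounter));
    auto* sc = static_cast<SharedCounter*>(mmap(nullptr, sizeof(SharedCounter),
            PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    if (creator) {
        // The mutex must be marked PROCESS_SHARED to work across processes.
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(&sc->lock, &attr);
        sc->value = 0;
    }
    return sc;  // build with -pthread -lrt
}

std::uint64_t increment(SharedCounter* sc) {
    pthread_mutex_lock(&sc->lock);    // lock
    std::uint64_t v = ++sc->value;    // read old value, add one
    pthread_mutex_unlock(&sc->lock);  // unlock
    return v;                         // return new value to the caller
}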

What is the canonical way for instances of dockerized services to perform operations like the one described above? Is there a "shared memory" feature for containerized processes?

3 Answers
Evening l夕情丶
#2 · 2019-04-04 09:43

It looks like a database is overkill for your case. You just need a distributed, lightweight key-value store with per-key lock support; there are several candidates.
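
Redis is one store of this kind, named here only as an illustration (the answer does not specify it); its INCR command does the locked read-modify-write on the server, so no client-side lock is needed. A minimal sketch using the hiredis client (host, port and key name are assumptions):

#include <cstdio>
#include <hiredis/hiredis.h>

int main() {
    // "counter-store" is a placeholder hostname for the shared store.
    redisContext* c = redisConnect("counter-store", 6379);
    if (!c || c->err) return 1;

    // INCR is atomic on the server and returns the new value.
    auto* reply = static_cast<redisReply*>(redisCommand(c, "INCR api:counter"));
    if (reply && reply->type == REDIS_REPLY_INTEGER)
        std::printf("counter = %lld\n", reply->integer);

    freeReplyObject(reply);
    redisFree(c);
    return 0;
}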

爷、活的狠高调
#3 · 2019-04-04 09:57

The --ipc option of docker run enables shared-memory access between containers:

IPC settings (--ipc)

--ipc="" : Set the IPC mode for the container,

'container:<name|id>': reuses another container's IPC namespace

'host': use the host's IPC namespace inside the container

By default, all containers have the IPC namespace enabled.

IPC (POSIX/SysV IPC) namespace provides separation of named shared memory segments, semaphores and message queues.

Shared memory segments are used to accelerate inter-process communication at memory speed, rather than through pipes or through the network stack. Shared memory is commonly used by databases and custom-built (typically C/OpenMPI, C++/using boost libraries) high performance applications for scientific computing and financial services industries. If these types of applications are broken into multiple containers, you might need to share the IPC mechanisms of the containers.
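
As a concrete illustration (the image and container names are placeholders, not from the original), a second container can join the first one's IPC namespace like this:

docker run -d --name counter-svc --ipc=shareable counter-image
docker run -d --name api-1 --ipc=container:counter-svc api-image

Both containers then see the same named POSIX/SysV shared-memory segments and semaphores. Depending on the Docker version, the first container may need --ipc=shareable explicitly; --ipc=host instead shares the host's IPC namespace.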

This article provides some demonstration of its usage.

Juvenile、少年°
#4 · 2019-04-04 10:04

I was facing a similar problem and decided to dig into it head on.

The only thing that is fast enough is Unix domain sockets. So I created a small C program that listens on a domain socket on a shared volume, /sockets.

See a working concept test on gitlab.com.

counter.c does the job: it listens on /sockets/count.sock and, on receipt of a single character in a datagram (a minimal client sketch follows the list):

  • '+' : increments the count and returns it as a u_int64_t
  • '0' : resets the count and returns 0 as a u_int64_t
  • '=' : returns the count as a u_int64_t, without incrementing
  • '-' : decrements the count by one and returns it as a u_int64_t
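
A client sketch of this protocol, written here as an illustration based on the description above (the original test client names its socket after the hostname; the pid is used below for brevity):

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main() {
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);

    // Bind the client to its own unique path so the server can reply
    // (see the caveat at the end of this answer).
    sockaddr_un me{};
    me.sun_family = AF_UNIX;
    std::snprintf(me.sun_path, sizeof(me.sun_path),
                  "/sockets/client-%d.sock", getpid());
    bind(fd, reinterpret_cast<sockaddr*>(&me), sizeof(me));

    sockaddr_un srv{};
    srv.sun_family = AF_UNIX;
    std::strcpy(srv.sun_path, "/sockets/count.sock");

    char cmd = '+';  // increment and return the new count
    sendto(fd, &cmd, 1, 0, reinterpret_cast<sockaddr*>(&srv), sizeof(srv));

    std::uint64_t count = 0;
    recv(fd, &count, sizeof(count), 0);
    std::printf("count = %llu\n", static_cast<unsigned long long>(count));

    close(fd);
    unlink(me.sun_path);
    return 0;
}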

For concept testing:

  • counter --interval=1000000 => starts the counter
  • test_counter --repeats=100000 stress => sends 100k requests to the socket
  • test_counter reset => sets the counter to 0
  • test_counter --quiet --strip result => returns the counter without a trailing \n
  • test_counter [count] => increments the counter and returns the result

Two Docker containers are built: count & test (repo),

and to test, I used this docker-compose.yml in a gitlab-runner:

my-test:
    image: dockregi.gioxa.com/dgoo2308/dockersocket:test
    links:
    - counter
    entrypoint:
    - /test_counter
    - --repeats=${REPEATS}
    - --timeout=200
    - stress
    volumes:
    - './sockets:/sockets'

counter:
    image: dockregi.gioxa.com/dgoo2308/dockersocket:count
    volumes:
    - './sockets:/sockets'
    entrypoint:
    - /counter
    - --start_delay=100
    - --interval=${TARGET}

To start the test:

mkdir sockets
docker-compose pull --parallel
docker-compose up -d
docker-compose scale my-test=$SCALE

The concept test was successful! See the test job.

Caveat:

For the client implementation, the client socket cannot be bound automatically; it needs to be given a name. In the test we use the hostname, mapped into the same /sockets volume. The names also need to be different for each client, as in the sketch above.
