I've got a question concerning multiple masters in a replicaSet with MongoDB. I have the following layout:
Server A --> with MongoDB & several applications
Server B --> with MongoDB & several applications
Both instances of MongoDB are organised in the same replica Set (Server A as Primary, Server B as Secondary). But here is the problem: both databases should contain the data from the applications on both servers.
Is it possible to deploy a replica Set with two masters so that the data from Server A is available in MongoDB at Server B and vice versa?
Thank you very much in advance
Replica sets in MongoDB can only have a single master at this point (it is called the primary of the replica set). For your scenario, the usual solution is a sharded cluster. In your example, you would have two shards: one for the data of server A, and one for the data of server B. Each shard is itself a replica set, so each has a minimum of three servers. You would then place the primary of the A shard in data center A and the primary of the B shard in data center B. At least one replica of each shard (a secondary) would be located in the other data center.
This means that all the data is available in each data center, but writes to the A shard always need to happen in data center A, and writes to the B shard in data center B. (Writes can still be issued remotely, so you can write to shard A from data center B; it is just a remote write in that case.)
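For illustration, here is a minimal sketch (Python with PyMongo, run against a mongos router) of how the routing part of such a setup could be configured with zone sharding. The shard names, zone names, database/collection names and the "origin" field are placeholder assumptions, and pinning each shard's primary to the right data center would additionally be done via replica set member priorities.

    from pymongo import MongoClient
    from bson.min_key import MinKey
    from bson.max_key import MaxKey

    # Connect to a mongos router of the sharded cluster (hostname is an assumption).
    client = MongoClient("mongodb://mongos-host:27017")
    admin = client.admin

    # Tag each shard with the data center it lives in.
    admin.command({"addShardToZone": "shardA", "zone": "dcA"})
    admin.command({"addShardToZone": "shardB", "zone": "dcB"})

    # Shard the collection on (origin, _id) so documents can be routed by origin.
    admin.command({"enableSharding": "appdb"})
    admin.command({"shardCollection": "appdb.events", "key": {"origin": 1, "_id": 1}})

    # Pin documents produced on server A to the dcA zone, and server B's to dcB.
    admin.command({"updateZoneKeyRange": "appdb.events",
                   "min": {"origin": "A", "_id": MinKey()},
                   "max": {"origin": "A", "_id": MaxKey()},
                   "zone": "dcA"})
    admin.command({"updateZoneKeyRange": "appdb.events",
                   "min": {"origin": "B", "_id": MinKey()},
                   "max": {"origin": "B", "_id": MaxKey()},
                   "zone": "dcB"})

With something like this in place, documents tagged with origin "A" live on the shard whose primary sits in data center A (and likewise for "B"), while each shard's secondary in the other data center keeps all data readable everywhere.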
No, MongoDB is single-master only.
The only way to create two separate masters like this at the moment while keeping them in sync is to do it manually, and that is not advised.
"Both instances of MongoDB are organised in the same replica Set"
"Is it possible to deploy a replica Set with two masters so that the data from Server A is available in MongoDB at Server B and vice versa?"
Weird question; replication IS indeed intended for exactly that purpose: storing the same data redundantly on different servers.
If you have that set up, you have ALREADY achieved your goal. Sharding has little (if anything) to do with high availability.
You don't need a multi-master configuration if you have auto-failover working. For that, make sure you have at least 3 data-bearing replica members, or 2 plus one arbiter, so that they can form a majority and elect a new primary in case the old one goes offline.
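For example, a minimal sketch (Python with PyMongo) of initiating the "two data-bearing members plus an arbiter" variant from scratch; the hostnames and set name are assumptions:

    from pymongo import MongoClient

    # Connect directly to one of the not-yet-initiated members.
    client = MongoClient("mongodb://serverA:27017/?directConnection=true")

    # Two data-bearing members plus an arbiter: three voting members in total,
    # enough to form a majority and elect a new primary automatically.
    client.admin.command("replSetInitiate", {
        "_id": "rs0",
        "members": [
            {"_id": 0, "host": "serverA:27017"},
            {"_id": 1, "host": "serverB:27017"},
            {"_id": 2, "host": "serverC:27017", "arbiterOnly": True},
        ],
    })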
You might also want to adjust the electionTimeoutMillis parameter so that an unreachable primary is detected and replaced more quickly.
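On an already running set, that could be done with a reconfig; a rough sketch (Python with PyMongo), assuming you are connected to the primary:

    from pymongo import MongoClient

    client = MongoClient("mongodb://serverA:27017,serverB:27017/?replicaSet=rs0")

    # Fetch the current replica set configuration.
    cfg = client.admin.command("replSetGetConfig")["config"]

    # Lower the election timeout (default 10000 ms) so a new primary
    # is elected sooner after the old one becomes unreachable.
    cfg.setdefault("settings", {})["electionTimeoutMillis"] = 5000

    # Every reconfig must bump the config version.
    cfg["version"] += 1

    client.admin.command("replSetReconfig", cfg)

Note that setting it too low can trigger unnecessary elections on transient network hiccups.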
When writing important data to the DB, you can use the { w: "majority" } write concern to make sure your changes have been applied on a majority of the data-bearing servers and are therefore durable.
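In PyMongo, for example, that write concern can be attached to a collection handle (the database and collection names here are placeholders):

    from pymongo import MongoClient
    from pymongo.write_concern import WriteConcern

    client = MongoClient("mongodb://serverA:27017,serverB:27017/?replicaSet=rs0")

    # Acknowledge writes only after a majority of data-bearing members
    # have applied them, so an acknowledged write survives a failover.
    orders = client.get_database("appdb").get_collection(
        "orders", write_concern=WriteConcern(w="majority", wtimeout=5000)
    )

    orders.insert_one({"item": "example", "qty": 1})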