Sample input1.json
[{
"organizationId": "org1",
"test1": "UP",
"key1": "value1"
},
{
"organizationId": "org2",
"test1": "UP",
"key1": "value3"
}]
Sample input2.json
[{
"organizationId": "org1",
"test2": "DOWN",
"key1": "value4"
},
{
"organizationId": "org3",
"test2": "DOWN",
"key1": "value5"
}]
Expected output.json
[{
"organizationId": "org1",
"test1": "UP",
"key1": "value4",
"test2": "DOWN"
},
{
"organizationId": "org2",
"test1": "UP",
"key1": "value3"
},
{
"organizationId": "org3",
"test2": "DOWN",
"key1": "value5"
}
]
Each input file contains an array of objects. My objective is to merge two objects if they have the same organizationId value and leave the other objects intact. I partially achieved this by grouping:
jq -s '[ .[0] + .[1] | group_by(.organizationId)[] | select(length > 1) | add ]' input1.json input2.json
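Against the sample files above, this produces only the merged group:
[{
"organizationId": "org1",
"test1": "UP",
"key1": "value4",
"test2": "DOWN"
}]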
This groups objects by organizationId. "organizationId": "org1" is present in both inputs, so it can be grouped and merged. The problem I'm facing now is that I'm losing the other objects, "organizationId": "org2" from input1.json and "organizationId": "org3" from input2.json, which don't exist in each other's file.
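Inspecting the intermediate groups makes the problem visible; the two singleton groups are exactly what select(length > 1) discards:
jq -s '.[0] + .[1] | group_by(.organizationId)' input1.json input2.json
[
  [
    { "organizationId": "org1", "test1": "UP", "key1": "value1" },
    { "organizationId": "org1", "test2": "DOWN", "key1": "value4" }
  ],
  [ { "organizationId": "org2", "test1": "UP", "key1": "value3" } ],
  [ { "organizationId": "org3", "test2": "DOWN", "key1": "value5" } ]
]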
The basic principle of grouping is achieved, but I need to preserve the other objects from both files even when there is no match. Should we use group_by if we want to preserve those objects? If not, how can I achieve the expected output using jq?
For a wider audience: to get all objects (grouped and ungrouped), use the following.
jq -s '[ .[0] + .[1] | group_by(.organizationId)[] | add ]' input1.json input2.json
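This works because add is a fold with + over each group: the two-element org1 group is merged into one object (the second file's values win on duplicate keys, hence "key1": "value4"), while a one-element group simply yields its single object unchanged, e.g.:
jq -n '[ { "organizationId": "org2", "test1": "UP", "key1": "value3" } ] | add'
{
"organizationId": "org2",
"test1": "UP",
"key1": "value3"
}
Against the samples above, the full command reproduces Expected output.json exactly.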
If you want only the grouped objects, use this instead.
jq -s '[ .[0] + .[1] | group_by(.organizationId)[] | select(length > 1) | add ]' input1.json input2.json
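Against the samples, this variant returns only the merged org1 object shown earlier.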
I have to say jq is so powerful! :)