Splitting / chunking JSON files with jq in Bash or Fish shell

Posted 2019-07-08 05:08

I have been using the wonderful jq tool to parse and extract JSON data to facilitate re-importing. I am able to extract a range easily enough, but am unsure how to loop through the file in a script and detect when I have reached the end of it, preferably in bash or fish.

Given a JSON file that is wrapped in a "results" dictionary, how can I detect the end of the file?

From testing, I can see that once the range runs past the end of the array I get an empty array nested in my desired structure, but how do I detect that end-of-file condition? For example:

jq '{ "results": .results[0:500] }' Foo.json > 0000-0500/Foo.json

Thanks!

1 Answer

三岁会撩人 · 2019-07-08 05:36

I'd recommend using jq to split up the array into a stream of the JSON objects you want (one per line), and then using some other tool (e.g. awk) to populate the files. Here's how the first part can be done:

def splitup(n):
  # Emit the input array as a stream of consecutive slices of length n;
  # n < 0 reverses the array first and splits it by -n.
  def _split:
    if length == 0 then empty
    else .[0:n], (.[n:] | _split)
    end;
  if n == 0 then empty elif n > 0 then _split else reverse | splitup(-n) end;

# For the sake of illustration:
def data: { results: [range(0,20)] };

data | .results | { results: splitup(5) }

Invocation:

$ jq -nc -f splitup.jq
{"results":[0,1,2,3,4]}
{"results":[5,6,7,8,9]}
{"results":[10,11,12,13,14]}
{"results":[15,16,17,18,19]}

For the second part, you could (for example) pipe the jq output to:

  awk '{ file="file."++n; print > file; close(file); }'
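
Putting the two parts together against the Foo.json from the question might look like this (a sketch; the chunk size of 500 and the chunk-%04d.json naming are just illustrative):

jq -c 'def splitup(n):
  def _split:
    if length == 0 then empty
    else .[0:n], (.[n:] | _split)
    end;
  if n == 0 then empty elif n > 0 then _split else reverse | splitup(-n) end;
.results | { results: splitup(500) }' Foo.json \
| awk '{ file = sprintf("chunk-%04d.json", ++n); print > file; close(file) }'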

A variant you might be interested in would have the jq filter emit both the filename and the JSON on alternate lines; the awk script would then read the filename as well.
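
For instance, replacing the last line of splitup.jq with something like this (a sketch; the part-N.json names are illustrative):

data | .results
| [splitup(5)]                            # collect the chunks so they can be numbered
| range(0; length) as $i
| "part-\($i).json", { results: .[$i] }   # filename on one line, its JSON on the next

and invoking it with -r so the filename strings print unquoted (objects are still printed as JSON):

$ jq -nrc -f splitup.jq | awk 'NR % 2 == 1 { file = $0; next } { print > file; close(file) }'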
