append output of multiple curl requests to a file

Posted 2019-04-16 18:42

I'm trying to fetch the JSON output of an internal API, adding 100 to an offset parameter between cURL requests. I need to loop because the API restricts each request to a maximum of 100 results, and I was told to "increment and you should be able to get what you need".

Anyway, here's what I wrote:

#!/bin/bash

COUNTER=100
until [ COUNTER -gt 30000 ]; do
    curl -vs "http://example.com/locations/city?limit=100&offset=$COUNTER" >> cities.json
    let COUNTER=COUNTER+100
done

The problem is that I get a bunch of weird messages in the terminal, and the file I'm trying to redirect the output to still contains its original 100 objects. I feel like I'm probably missing something terrifically obvious. Any thoughts? I did use a somewhat old tutorial on the until loop, so maybe it's a syntax issue?

Thank you in advance!

EDIT: I'm not opposed to a completely alternate method, but I had hoped this would be somewhat straightforward. I figured my lack of experience was the main limiter.

Tags: bash shell curl

2 Answers
beautiful° · answered 2019-04-16 19:14

You might find you can do this faster, and pretty easily, with GNU Parallel. The -k flag keeps the outputs in the same order as the offsets that seq generates, and the & is escaped so that the shell parallel spawns for each job treats it as part of the URL rather than as a background operator:

parallel -k curl -vs "http://example.com/locations/city?limit=100\&offset={}" ::: $(seq 100 100 30000) > cities.json
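If installing GNU Parallel isn't an option, curl can generate the whole offset sequence itself with its built-in URL globbing. A minimal sketch, assuming the same hypothetical example.com endpoint as above:

#!/bin/bash
# [100-30000:100] expands to offset=100, offset=200, ... offset=30000,
# and a single curl process fetches them all in sequence, reusing the
# connection where it can.
curl -s "http://example.com/locations/city?limit=100&offset=[100-30000:100]" > cities.json

This avoids the per-request process startup cost of the shell loop, though unlike parallel it fetches the pages sequentially.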
Melony? · answered 2019-04-16 19:22

If you want to overwrite the file's content only once, for your entire loop...

#!/bin/bash
# ^-- NOT /bin/sh, as this uses bash-only syntax

for (( counter=100; counter<=30000; counter+=100 )); do
    curl -vs "http://example.com/locations/city?limit=100&offset=$counter"
done >cities.json

This is actually more efficient than putting >>cities.json on each curl command, as it opens the output file only once, and it has the side effect (which you appear to want) of clearing the file's former contents when the loop starts.

As for the "weird messages": in your original script, [ COUNTER -gt 30000 ] tests the literal word COUNTER, which is not an integer, so test prints an error on every iteration; inside [ ] you need $COUNTER. The -v flag also makes curl print its verbose transfer log to stderr, so drop it (keeping just -s) once you're done debugging.
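One caveat for either approach: concatenating 300 responses gives you a stream of JSON documents, not one valid document. If each response body is a JSON array and you want a single merged array, here is a sketch using jq; this is hedged, since the actual response shape depends on your API:

#!/bin/bash
# Slurp (-s) every response into one array of arrays, then
# concatenate them with add to get a single flat JSON array.
for (( counter=100; counter<=30000; counter+=100 )); do
    curl -s "http://example.com/locations/city?limit=100&offset=$counter"
done | jq -s 'add' > cities.json

If the API instead wraps results in an object (say, a "results" key), you would adjust the jq filter accordingly, e.g. map(.results) | add.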
