I've seen a few examples out there but haven't been able to adapt them to my situation.
I have a script that calls a long-running command, but I want to periodically (say, every second) check the status of that call. For example:
#!/bin/bash
curl "localhost:9200/my_index/_forcemerge?max_num_segments=2" &
while [ command is running ]; do
    curl -XGET "localhost:9200/_cat/shards/my_index?v&h=index,shard,prirep,segments.count"
    sleep 1
done
echo "finished!"
Is it possible to get the status of the child process in this way?
Edit: to clarify what I'm actually doing: these are two curl commands against an Elasticsearch cluster. The long-running command merges data segments together; the "status" command gets the current segment count.
I think that the safest way of doing this is to save the process ID of the child process and then periodically check whether it is still running.

The variable $! holds the process ID of the last process started in the background. kill -0 will not send a signal to the process; it only makes kill return with a zero exit status if the given process ID exists and belongs to the user executing kill.

One could come up with a solution using pgrep too, but that would probably be a bit more "unsafe", in the sense that care must be taken not to match other, similarly named running processes.
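Putting those pieces together, here is a minimal sketch of the polling loop, assuming the Elasticsearch endpoints from the question are reachable on localhost:9200:

```shell
#!/bin/bash
# Start the long-running merge in the background.
# Quoting the URL matters: an unquoted & would background curl at that
# point and the rest of the URL would be parsed as shell assignments.
curl "localhost:9200/my_index/_forcemerge?max_num_segments=2" &
merge_pid=$!    # $! is the PID of the last backgrounded process

# kill -0 sends no signal; it succeeds only while the process with
# that PID still exists and belongs to us.
while kill -0 "$merge_pid" 2>/dev/null; do
    curl -XGET "localhost:9200/_cat/shards/my_index?v&h=index,shard,prirep,segments.count"
    sleep 1
done

echo "finished!"
```

The 2>/dev/null on kill -0 suppresses the "No such process" message that would otherwise be printed once the merge finishes.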