Question:
I am trying to make a program that converts a series of manga scans into one PDF file, and I don't want to have to download each picture just to determine whether I have the right URL. Is there a shell command I can use to simply check whether a web page exists?
Answer 1:
On a *NIX system, you can use curl to issue a simple HEAD request (HEAD only asks for the headers, not the page body):
curl --head http://myurl/
Then you can take only the first line, which contains the HTTP status code (200 OK, 404 Not Found, etc.):
curl -s --head http://myurl/ | head -n 1
And then check if you got a decent response (status code is 200 or 3**):
curl -s --head http://myurl/ | head -n 1 | grep "HTTP/1.[01] [23].."
This will output the first line if the status code is okay, or nothing if it isn't. You can also send the output to /dev/null so nothing is printed, and use $? to determine whether it worked or not:
curl -s --head http://myurl/ | head -n 1 | grep "HTTP/1.[01] [23].." > /dev/null
# on success (page exists), $? will be 0; on failure (page does not exist or
# is unreachable), $? will be 1
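A minimal sketch of how that exit status might be used in a script (the URL and messages here are placeholders, not part of the original answer):
url="http://myurl/"
if curl -s --head "$url" | head -n 1 | grep "HTTP/1.[01] [23].." > /dev/null
then
    echo "$url exists"
else
    echo "$url does not exist"
fi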
EDIT: -s simply tells curl not to show a progress bar.
Answer 2:
Use cURL to obtain the status code and check it against the values you need:
status=$(curl -s --head -w %{http_code} http://www.google.com/ -o /dev/null)
echo $status
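A minimal sketch of that check, assuming (as in the first answer) that any 2xx or 3xx status counts as the page existing:
status=$(curl -s --head -w '%{http_code}' -o /dev/null http://www.google.com/)
if [ "$status" -ge 200 ] && [ "$status" -lt 400 ]; then
    echo "URL exists (HTTP $status)"
else
    echo "URL does not exist or is unreachable (HTTP $status)"
fi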
Answer 3:
First make sure there is no authorization issue. If authorization is required, provide the username and password (see the note at the end of this answer). Create a shell script file (checkURL.sh) and paste in the code below.
Hope this will help you.
checkURL.sh
yourURL="http://abc-repo.mycorp.com/data/yourdir"
if curl --output /dev/null --silent --head --fail "$yourURL"
then
    echo "This URL exists"
else
    echo "This URL does not exist"
fi
It works for me with Nexus and other repositories.
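If the URL does require credentials, a sketch of the same check using HTTP Basic authentication (the username and password here are placeholders, not from the original answer):
if curl --user "myuser:mypassword" --output /dev/null --silent --head --fail "$yourURL"
then
    echo "This URL exists"
else
    echo "This URL does not exist"
fi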
Answer 4:
You can always just use wget; I do, as the code is simpler.
if [[ $(wget http://url/ -O-) ]] 2>/dev/null
then echo "This page exists."
else echo "This page does not exist."
fi
Using the -O- option with wget means that it will try to output the contents of the page, but only if it exists. So if there isn't any output, the page doesn't exist. The 2>/dev/null just sends wget's error and progress output (if there is any) to the trash.
I know it's overdue, but I hope this helps.
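A slightly quieter variant of the same idea, sketched here as an assumption rather than part of the original answer, relies on wget's exit status instead of captured output (-q suppresses all of wget's messages):
if wget -q -O /dev/null http://url/
then echo "This page exists."
else echo "This page does not exist."
fi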
Answer 5:
wget or cURL will do the job; see the wget or cURL project pages for details and download locations. Supply the URL to either command-line tool and check the response.
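A minimal sketch of that idea with wget, assuming its --spider option (which asks wget to check the URL without downloading the body) and using the exit status to decide; the URL is a placeholder:
if wget --spider --quiet "http://myurl/"
then
    echo "Page exists."
else
    echo "Page does not exist."
fi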