I want to catch a specific HTTP error, not any one of the entire family.
What I was trying to do is:
import urllib2
try:
    urllib2.urlopen("some url")
except urllib2.HTTPError:
    <whatever>
But what I end up with is catching any kind of HTTP error. I only want to catch it if the specified web page doesn't exist (probably that's HTTP error 404), but I don't know how to catch only error 404 and let the system run the default handler for other events. Any suggestions?
Just catch urllib2.HTTPError, handle it, and if it's not error 404, simply use raise to re-raise the exception.
See the Python tutorial.
So you could do:
import urllib2
try:
    urllib2.urlopen("some url")
except urllib2.HTTPError as err:
    if err.code == 404:
        <whatever>
    else:
        raise
For Python 3.x
import urllib.request
import urllib.error

try:
    urllib.request.urlretrieve(url, fullpath)
except urllib.error.HTTPError as err:
    print(err.code)
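If you want the same 404-only handling in Python 3, the pattern from the answer above translates directly. Here is a minimal sketch, assuming a placeholder URL and a hypothetical fetch helper:

import urllib.request
import urllib.error

def fetch(url):
    # Attempt the request; only a 404 is handled here,
    # every other HTTPError is re-raised for the caller.
    try:
        return urllib.request.urlopen(url).read()
    except urllib.error.HTTPError as err:
        if err.code == 404:
            print("Page not found:", url)
            return None
        raise  # let other HTTP errors propagate

# Example usage (hypothetical URL):
# fetch("http://example.com/missing-page")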
Tim's answer seems misleading to me, especially because urllib2 does not always return the expected error object. For example, this error will be fatal (believe it or not, it is not uncommon when downloading URLs):
AttributeError: 'URLError' object has no attribute 'code'
A fast, but maybe not the best, solution would be to use a nested try/except block:
import urllib2
try:
    urllib2.urlopen("some url")
except urllib2.HTTPError as err:
    try:
        if err.code == 404:
            pass  # Handle the error
        else:
            raise
    except:
        ...
More information on the topic of nested try/except blocks: Are nested try/except blocks in python a good programming practice?
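An alternative that avoids the AttributeError without nesting is to catch urllib2.HTTPError and urllib2.URLError in separate except clauses, since only HTTPError carries a code attribute (HTTPError is a subclass of URLError, so its clause must come first). A minimal sketch, using a placeholder URL:

import urllib2

try:
    urllib2.urlopen("some url")
except urllib2.HTTPError as err:
    # HTTPError always has a .code attribute
    if err.code == 404:
        pass  # handle the missing page
    else:
        raise
except urllib2.URLError as err:
    # URLError (e.g. DNS failure, connection refused) has .reason, not .code
    print(err.reason)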