[Yes, the title is not a typo!]
In Python I need something that parses a URL. I can't believe something standard doesn't already exist. Since the URL comes from a config file, I'd like to make sure it is not garbage.
There's urlparse.urlparse, but it assumes its input is a valid URL (and some invalid URLs raise an undocumented ValueError), e.g.
>>> import urlparse
>>> urlparse.urlparse('http://aa :: aa ! aa:11.com:aa').netloc
'aa :: aa ! aa:11.com:aa'
shows how urlparse parses what I would consider an invalid URL.
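For reference, the same lenient behavior can be reproduced in Python 3, where urlparse has moved to urllib.parse. The unbalanced IPv6 bracket below is one case I'm aware of that triggers the undocumented ValueError:

```python
from urllib.parse import urlparse

# urlparse happily accepts a URL full of spaces and punctuation:
result = urlparse('http://aa :: aa ! aa:11.com:aa')
print(result.netloc)  # 'aa :: aa ! aa:11.com:aa' -- no complaint

# ...yet an unbalanced IPv6 bracket raises ValueError:
try:
    urlparse('http://[::1')
except ValueError as e:
    print('ValueError:', e)
```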
URL parsing and URL validation are actually different tasks.
urlparse.urlparse
does the parsing; validation is usually done with regular expressions (the built-in re
module in Python).
Here's an example of URL validation from the Django framework:
import re

regex = re.compile(
    r'^(?:http|ftp)s?://'  # http:// or https://
    r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|'  # domain...
    r'localhost|'  # localhost...
    r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'  # ...or ip
    r'(?::\d+)?'  # optional port
    r'(?:/?|[/?]\S+)$', re.IGNORECASE)
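Wrapping that regex in a small helper gives a quick yes/no check. The sketch below reproduces the Django regex from above and uses it to reject the garbage URL from the question (the function name `is_valid_url` is my own, not part of Django):

```python
import re

# Django's URL validation regex, reproduced from above
regex = re.compile(
    r'^(?:http|ftp)s?://'  # http:// or https://
    r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|'  # domain...
    r'localhost|'  # localhost...
    r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'  # ...or ip
    r'(?::\d+)?'  # optional port
    r'(?:/?|[/?]\S+)$', re.IGNORECASE)

def is_valid_url(url):
    """Return True if the URL matches the validation regex."""
    return regex.match(url) is not None

print(is_valid_url('http://example.com'))              # True
print(is_valid_url('http://aa :: aa ! aa:11.com:aa'))  # False
```

Note that this regex is deliberately conservative: it only accepts http(s)/ftp(s) schemes, and newer Django versions have since replaced it with a more elaborate validator.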