I am trying to extract the domain names out of a list of URLs, just like in
https://stackoverflow.com/questions/18331948/extract-domain-name-from-the-url
My problem is that the URLs can be about anything; a few examples:
m.google.com
=> google
m.docs.google.com
=> google
www.someisotericdomain.innersite.mall.co.uk
=> mall
www.ouruniversity.department.mit.ac.us
=> mit
www.somestrangeurl.shops.relevantdomain.net
=> relevantdomain
www.example.info
=> example
And so on.
The diversity of the domains doesn't allow me to use a regex as shown in how to get domain name from URL, because my script will be running on real-time network traffic and the regex would have to be enormous to catch all kinds of domains as mentioned.
Unfortunately, my web research didn't turn up any efficient solution.
Does anyone have an idea of how to do this?
Any help will be appreciated!
Thank you
Answer 1:
Use tldextract, which is a more efficient version of urlparse. tldextract accurately separates the gTLD or ccTLD (generic or country code top-level domain) from the registered domain and subdomains of a URL.
>>> import tldextract
>>> ext = tldextract.extract('http://forums.news.cnn.com/')
ExtractResult(subdomain='forums.news', domain='cnn', suffix='com')
>>> ext.domain
'cnn'
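A small usage sketch (not part of the original answer) applying tldextract to bare hostnames like the ones in the question; tldextract consults the Public Suffix List, so it works with or without a scheme:
import tldextract
for host in ['m.docs.google.com', 'www.someisotericdomain.innersite.mall.co.uk']:
    # .domain is the registered domain, with the subdomains and suffix stripped
    print(tldextract.extract(host).domain)   # -> 'google', then 'mall'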
Answer 2:
It seems you can use urlparse (https://docs.python.org/3/library/urllib.parse.html) for that URL, and then extract the netloc.
From the netloc you can easily extract the domain name by using split.
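A minimal sketch of that approach, assuming the input URLs carry a scheme (urlparse only fills in netloc when one is present); note that a naive split misses multi-part suffixes such as co.uk:
from urllib.parse import urlparse

def domain_from_url(url):
    netloc = urlparse(url).netloc      # e.g. 'forums.news.cnn.com'
    parts = netloc.split('.')          # ['forums', 'news', 'cnn', 'com']
    return parts[-2] if len(parts) >= 2 else netloc

print(domain_from_url('http://forums.news.cnn.com/'))   # -> 'cnn'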
Answer 3:
With regex, you could use something like this:
(?<=\.)([^.]+)(?:\.(?:co\.uk|ac\.us|[^.]+(?:$|\n)))
https://regex101.com/r/WQXFy6/5
Notice, you'll have to watch out for special cases such as co.uk.
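A short sketch of how that pattern might be used with Python's re module; keep in mind the suffix list in the pattern (co.uk, ac.us) is only illustrative, so other multi-part TLDs would be mis-parsed:
import re

pattern = re.compile(r'(?<=\.)([^.]+)(?:\.(?:co\.uk|ac\.us|[^.]+(?:$|\n)))')
for host in ['m.docs.google.com', 'www.someisotericdomain.innersite.mall.co.uk']:
    match = pattern.search(host)
    if match:
        print(match.group(1))          # -> 'google', then 'mall'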