Python equivalent of a given wget command

Posted 2019-01-17 03:31

Question:

I'm trying to create a Python function that does the same thing as this wget command:

wget -c --read-timeout=5 --tries=0 "$URL"

-c - Continue from where you left off if the download is interrupted.

--read-timeout=5 - If no new data comes in for more than 5 seconds, give up and try again. Given -c, this means it will try again from where it left off.

--tries=0 - Retry forever.

Those three arguments used in tandem result in a download that cannot fail.

I want to duplicate those features in my Python script, but I don't know where to begin...

Answer 1:

urllib.request should work. Just set it up in a while (not done) loop: check whether the local file already exists, and if it does, send a GET with a Range header specifying how far you got in downloading it. Be sure to use read() to append to the local file until an error occurs, as in the sketch below.

This is also potentially a duplicate of Python urllib2 resume download doesn't work when network reconnects
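
A minimal sketch of that approach, using only the standard library (the helper name, chunk size, and error handling are my own, and it assumes the server honors Range requests; otherwise the download restarts from scratch):

import os
import urllib.error
import urllib.request

def resume_download(url, local_path, read_timeout=5):
    # Hypothetical helper: retry forever, resuming from the bytes already on disk.
    while True:
        start = os.path.getsize(local_path) if os.path.exists(local_path) else 0
        request = urllib.request.Request(url)
        if start > 0:
            # Ask the server to resume from byte `start`
            request.add_header("Range", "bytes={}-".format(start))
        try:
            response = urllib.request.urlopen(request, timeout=read_timeout)
            with open(local_path, "ab") as f:
                while True:
                    chunk = response.read(8192)
                    if not chunk:
                        return  # finished
                    f.write(chunk)
        except urllib.error.HTTPError as e:
            if e.code == 416:  # requested range not satisfiable: file already complete
                return
        except (urllib.error.URLError, OSError):
            pass  # read timeout or dropped connection; loop and resume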



Answer 2:

There is also a nice Python module named wget that is pretty easy to use. Found here.

This demonstrates the simplicity of the design:

>>> import wget
>>> url = 'http://www.futurecrew.com/skaven/song_files/mp3/razorback.mp3'
>>> filename = wget.download(url)
100% [................................................] 3841532 / 3841532
>>> filename
'razorback.mp3'

Enjoy.

However, if wget doesn't work (I've had trouble with certain PDF files), try this solution.

Edit: You can also pass the out parameter to use a custom output directory instead of the current working directory.

>>> output_directory = <directory_name>
>>> filename = wget.download(url, out=output_directory)
>>> filename
'razorback.mp3'


Answer 3:

import urllib2  # Python 2 only; see the Python 3 version below

attempts = 0

while attempts < 3:
    try:
        response = urllib2.urlopen("http://example.com", timeout=5)
        content = response.read()
        f = open("local/index.html", "w")
        f.write(content)
        f.close()
        break
    except urllib2.URLError as e:
        attempts += 1
        print type(e)
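
For Python 3, where urllib2 was split into urllib.request and urllib.error, a rough equivalent would be:

import urllib.request
import urllib.error

attempts = 0

while attempts < 3:
    try:
        response = urllib.request.urlopen("http://example.com", timeout=5)
        content = response.read()  # bytes in Python 3
        with open("local/index.html", "wb") as f:
            f.write(content)
        break
    except urllib.error.URLError as e:
        attempts += 1
        print(type(e))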


Answer 4:

I had to do something like this on a version of Linux that didn't have the right options compiled into wget. This example downloads the memory-analysis tool 'guppy'. I'm not sure whether it matters, but I kept the target file's name the same as the file name in the URL...

Here's what I came up with:

python -c "import requests; r = requests.get('https://pypi.python.org/packages/source/g/guppy/guppy-0.1.10.tar.gz') ; open('guppy-0.1.10.tar.gz' , 'wb').write(r.content)"

That's the one-liner; here it is a little more readable:

import requests
fname = 'guppy-0.1.10.tar.gz'
url = 'https://pypi.python.org/packages/source/g/guppy/' + fname
r = requests.get(url)
open(fname, 'wb').write(r.content)

This worked for downloading a tarball. I was able to extract the package and use it after downloading.

EDIT:

To address a question, here is an implementation with a progress bar printed to STDOUT. There is probably a more portable way to do this without the clint package, but this was tested on my machine and works fine:

#!/usr/bin/env python

from clint.textui import progress
import requests

fname = 'guppy-0.1.10.tar.gz'
url = 'https://pypi.python.org/packages/source/g/guppy/' + fname

r = requests.get(url, stream=True)
with open(fname, 'wb') as f:
    # Assumes the server sends a Content-Length header
    total_length = int(r.headers.get('content-length'))
    for chunk in progress.bar(r.iter_content(chunk_size=1024), expected_size=(total_length / 1024) + 1):
        if chunk:
            f.write(chunk)
            f.flush()
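
For example, here is a sketch of the same loop using tqdm instead of clint (assuming tqdm is installed; I haven't tested it against the version above):

import requests
from tqdm import tqdm

fname = 'guppy-0.1.10.tar.gz'
url = 'https://pypi.python.org/packages/source/g/guppy/' + fname

r = requests.get(url, stream=True)
total_length = int(r.headers.get('content-length', 0))
with open(fname, 'wb') as f, tqdm(total=total_length, unit='B', unit_scale=True) as bar:
    for chunk in r.iter_content(chunk_size=1024):
        if chunk:
            f.write(chunk)
            bar.update(len(chunk))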


Answer 5:

easy as py:

class Downloader():
    def download_manager(self, url, destination='Files/DownloderApp/', try_number="10", time_out="60"):
        # Could also be run in a background thread:
        # threading.Thread(target=self._wget_dl, args=(url, destination, try_number, time_out)).start()
        return self._wget_dl(url, destination, try_number, time_out) == 0

    def _wget_dl(self, url, destination, try_number, time_out):
        import subprocess
        # -c: resume, -P: target directory, -t: number of tries, -T: timeout
        command = ["wget", "-c", "-P", destination, "-t", try_number, "-T", time_out, url]
        try:
            download_state = subprocess.call(command)
        except Exception as e:
            print(e)
            return 1  # treat a failure to launch wget as a failed download
        # download_state == 0 means a successful download
        return download_state
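
A hypothetical usage example (the URL and destination directory are placeholders):

downloader = Downloader()
if downloader.download_manager("http://example.com/file.zip", destination="downloads/"):
    print("Download finished")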


Answer 6:

Let me improve the example above with threads, in case you want to download many files.

import math
import random
import threading

import requests
from clint.textui import progress

# You must define a proxy list;
# I suggest https://free-proxy-list.net/
proxies = {
    0: {'http': 'http://34.208.47.183:80'},
    1: {'http': 'http://40.69.191.149:3128'},
    2: {'http': 'http://104.154.205.214:1080'},
    3: {'http': 'http://52.11.190.64:3128'}
}


# You must define the list of files you want to download
videos = [
    "https://i.stack.imgur.com/g2BHi.jpg",
    "https://i.stack.imgur.com/NURaP.jpg"
]

downloaderses = list()


def downloaders(video, selected_proxy):
    print("Downloading file named {} by proxy {}...".format(video, selected_proxy))
    r = requests.get(video, stream=True, proxies=selected_proxy)
    # Take the file name from the URL (assumes a URL of the form https://host/name)
    nombre_video = video.split("/")[3]
    with open(nombre_video, 'wb') as f:
        total_length = int(r.headers.get('content-length'))
        for chunk in progress.bar(r.iter_content(chunk_size=1024), expected_size=(total_length / 1024) + 1):
            if chunk:
                f.write(chunk)
                f.flush()


for video in videos:
    selected_proxy = proxies[math.floor(random.random() * len(proxies))]
    t = threading.Thread(target=downloaders, args=(video, selected_proxy))
    downloaderses.append(t)

for _downloaders in downloaderses:
    _downloaders.start()
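
If you also want the script to block until every download has finished, join the threads at the end:

for _downloaders in downloaderses:
    _downloaders.join()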


Answer 7:

A solution I often find simpler and more robust is to simply execute a terminal command from within Python. In your case:

import os
url = 'https://www.someurl.com'
os.system(f'wget -c --read-timeout=5 --tries=0 "{url}"')
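
A slightly safer variant is subprocess, which passes the URL as a separate argument instead of interpolating it into a shell string (a sketch with the same wget flags):

import subprocess
url = 'https://www.someurl.com'
subprocess.run(["wget", "-c", "--read-timeout=5", "--tries=0", url])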


Tags: python wget