I'm coding a ShareX clone for Linux in Go that uploads files and images to file sharing services through HTTP POST requests.
I'm currently using http.Client and Do() to send my requests, but I'd like to be able to track the upload progress for bigger files that take up to a minute to upload.
The only way I can think of at the moment is manually opening a TCP connection to the site on port 80 and writing the HTTP request in chunks, but I don't know whether that would work for HTTPS sites, and I'm not sure it's the best approach.
Is there any other way to achieve this?
You can create your own io.Reader to wrap the actual reader, and then output the progress each time Read is called.
Something along the lines of:
package main

import (
	"fmt"
	"io"
	"os"
)

// ProgressReader wraps an io.Reader and calls Reporter with the number
// of bytes read on every call to Read.
type ProgressReader struct {
	io.Reader
	Reporter func(r int64)
}

func (pr *ProgressReader) Read(p []byte) (n int, err error) {
	n, err = pr.Reader.Read(p)
	pr.Reporter(int64(n))
	return
}

func main() {
	file, _ := os.Open("/tmp/blah.go")
	defer file.Close()

	total := int64(0)
	pr := &ProgressReader{file, func(r int64) {
		total += r
		if r > 0 {
			fmt.Println("progress", total)
		} else {
			fmt.Println("done", total)
		}
	}}
	io.Copy(io.Discard, pr)
}
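To connect this to the http.Client and Do() setup from the question, here is a minimal sketch assuming the ProgressReader type above; the uploadWithProgress name, the URL, and the content type are placeholders, and the imports of fmt, net/http, and os are omitted. Since the body is a custom reader, http.NewRequest cannot infer its size, so ContentLength is set from the file's size:

func uploadWithProgress(client *http.Client, url string, file *os.File) error {
	info, err := file.Stat()
	if err != nil {
		return err
	}
	total := int64(0)
	pr := &ProgressReader{file, func(r int64) {
		total += r
		fmt.Printf("uploaded %d of %d bytes\n", total, info.Size())
	}}

	req, err := http.NewRequest("POST", url, pr)
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/octet-stream") // placeholder content type
	// A custom reader gives http.NewRequest no way to determine the body size,
	// so set ContentLength explicitly; otherwise the body is sent with
	// chunked transfer encoding.
	req.ContentLength = info.Size()

	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
	return nil
}

Keep in mind that the reporter fires when the transport reads from the body, so it tracks bytes handed to the network stack rather than bytes acknowledged by the server.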
Wrap the reader passed as the request body with something that reports progress. For example,
type progressReporter struct {
	r     io.Reader // underlying reader (the request body)
	max   int       // total number of bytes expected
	sent  int       // bytes read so far
	atEOF bool      // set once the underlying reader is exhausted
}

func (pr *progressReporter) Read(p []byte) (int, error) {
	n, err := pr.r.Read(p)
	pr.sent += n
	if err == io.EOF {
		pr.atEOF = true
	}
	pr.report()
	return n, err
}

func (pr *progressReporter) report() {
	fmt.Printf("sent %d of %d bytes\n", pr.sent, pr.max)
	if pr.atEOF {
		fmt.Println("DONE")
	}
}
If previously you called
client.Post(u, contentType, r)
then change the code to
client.Post(u, contentType, &progressReporter{r: r, max: max})
where max
is the number of bytes you expect to send. Modify the progressReporter.report() method and add fields to progressReporter to meet your specific needs.
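For example, a rough sketch (the file path, URL, and content type are placeholders, and it assumes imports of log, net/http, and os) where max comes from the file size reported by Stat:

file, err := os.Open("/tmp/blah.go") // placeholder path
if err != nil {
	log.Fatal(err)
}
defer file.Close()

info, err := file.Stat()
if err != nil {
	log.Fatal(err)
}

body := &progressReporter{r: file, max: int(info.Size())}
// Note: with a custom reader as the body, Post sends the request using
// chunked transfer encoding; if the service requires a Content-Length
// header, build the request with http.NewRequest and set ContentLength.
resp, err := http.DefaultClient.Post("https://example.com/upload", "application/octet-stream", body)
if err != nil {
	log.Fatal(err)
}
defer resp.Body.Close()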