Freeing unused memory?

Published 2019-01-28 21:47

Question:

I'm using the following function for downloading files smaller than 20MB. It reads the entire content into memory because another function has to process the bytes before they can be written to disk.

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

// getURL downloads url and returns the entire response body in memory.
func getURL(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, fmt.Errorf("getURL: %s", err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return nil, fmt.Errorf("getURL: %s", err)
	}

	return body, nil
}

This works fine, but memory use keeps growing until all of the system's memory is consumed.

Is it possible to release memory used by body after it has been processed by another function, so memory use won't be larger than the bytes currently being processed?

Answer 1:

First, I recommend reading the following questions / answers:

FreeOSMemory() in production

Golang - Cannot free memory once occupied by bytes.Buffer

You may trigger a GC cycle to free unused objects with runtime.GC(), and you may urge the Go runtime to release memory back to the OS with debug.FreeOSMemory(), but all of this is just firefighting. A well-written Go app should never have to call these.
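If you still want to experiment with those calls, a minimal sketch could look like this (the helper name releaseMemory is my own; note that debug.FreeOSMemory() already forces a GC before handing memory back to the OS):

package main

import (
	"runtime"
	"runtime/debug"
)

// releaseMemory shows the two "firefighting" calls mentioned above.
func releaseMemory() {
	runtime.GC()         // force a garbage collection cycle to free unused objects
	debug.FreeOSMemory() // force another GC and return as much memory as possible to the OS
}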

What you should do is prevent the runtime from having to allocate large amounts of memory in the first place.

How can you achieve this? Here are some options (you can even combine them):

  • Limit how many requests that require large amounts of memory are served concurrently (see the first sketch after this list); more about it: Process Management for the Go Webserver; also Is this an idiomatic worker thread pool in Go?

  • Use memory / buffer pools instead of allocating big arrays / slices all the time (see the second sketch after this list); more about it: How to implement Memory Pooling in Golang

  • Create / change your processing units so that they operate on io.Readers rather than on byte slices; then you don't need to read all of the content into memory, you can just pass resp.Body on. Note that even if multiple units have to read / inspect the body, it is still possible to read and process it only once without keeping it in memory. Possible means are io.Pipe(), io.TeeReader(), or custom solutions (see the third sketch after this list).
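For the first point, here is a minimal sketch that limits concurrency with a buffered channel used as a semaphore (the names sem and getURLLimited are my own, and the limit of 4 is arbitrary):

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

// sem limits how many downloads may hold a large body in memory at once.
var sem = make(chan struct{}, 4)

func getURLLimited(url string) ([]byte, error) {
	sem <- struct{}{}        // acquire a slot; blocks while 4 downloads are in flight
	defer func() { <-sem }() // release the slot when done

	resp, err := http.Get(url)
	if err != nil {
		return nil, fmt.Errorf("getURLLimited: %s", err)
	}
	defer resp.Body.Close()

	return ioutil.ReadAll(resp.Body)
}

This caps peak memory at roughly the limit times the largest body size, no matter how many goroutines call it.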
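For the second point, a sketch of reusing buffers with sync.Pool (bufPool, readWithPool and the process callback are assumptions of mine, not part of your code):

package main

import (
	"bytes"
	"io"
	"sync"
)

// bufPool hands out reusable buffers so repeated downloads do not each
// allocate a fresh multi-megabyte slice.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// readWithPool reads r into a pooled buffer, passes the bytes to process,
// then returns the buffer to the pool for reuse.
func readWithPool(r io.Reader, process func([]byte) error) error {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()
	defer bufPool.Put(buf)

	if _, err := buf.ReadFrom(r); err != nil {
		return err
	}
	// The slice returned by buf.Bytes() must not be retained after process
	// returns, because the buffer will be reused.
	return process(buf.Bytes())
}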
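And for the last point, a sketch that streams resp.Body straight to disk while hashing it with io.TeeReader, so the full body never sits in memory (downloadTo is a hypothetical replacement for your getURL, and the SHA-256 hash just stands in for whatever your other function does with the bytes):

package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadTo(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", fmt.Errorf("downloadTo: %s", err)
	}
	defer resp.Body.Close()

	f, err := os.Create(path)
	if err != nil {
		return "", fmt.Errorf("downloadTo: %s", err)
	}
	defer f.Close()

	h := sha256.New()
	// io.TeeReader feeds every byte read from resp.Body into the hash as it
	// is copied to the file, so the body is read and processed exactly once.
	if _, err := io.Copy(f, io.TeeReader(resp.Body, h)); err != nil {
		return "", fmt.Errorf("downloadTo: %s", err)
	}
	return fmt.Sprintf("%x", h.Sum(nil)), nil
}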