How to free up memory after base64 conversion

Posted 2019-06-04 08:24

I am trying to stream the contents of a file. The code works for smaller files, but with larger files I get an OutOfMemoryException.

public void StreamEncode(FileStream inputStream, TextWriter tw)
{
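    // reusable read buffer; for the encoded blocks to concatenate into
    // valid base64, BLOCK_SIZE needs to be a multiple of 3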
    byte[] base64Block = new byte[BLOCK_SIZE];
    int bytesRead = 0;

    try
    {
        do
        {
            // read one block from the input stream
            bytesRead = inputStream.Read(base64Block, 0, base64Block.Length);

            // encode the base64 string
            string base64String = Convert.ToBase64String(base64Block, 0, bytesRead);

            // write the string
            tw.Write(base64String);

        } while (bytesRead == base64Block.Length);
    }
    catch (OutOfMemoryException)
    {
        MessageBox.Show("Error -- Memory used: " + GC.GetTotalMemory(false) + " bytes");
    }
}

I can isolate the problem and watch the memory usage grow as the loop runs.
The problem seems to be the call to Convert.ToBase64String().

How can I free the memory for the converted string?


Edited from here down ... Here is an update. I also created a new thread about this -- sorry, I guess that was not the right thing to do.

Thanks for your great suggestions. Based on them, I shrank the buffer size used to read from the file, and memory consumption looks better, but I'm still seeing an OOM problem with files as small as 5 MB, and I potentially want to deal with files ten times larger.

My problem now seems to be with the use of TextWriter.

I create a request as follows [with a few edits to shrink the code]:

HttpWebRequest oRequest = (HttpWebRequest)WebRequest.Create(new Uri(strURL));
oRequest.Method = httpMethod;
oRequest.ContentType = "application/atom+xml";
oRequest.Headers["Authorization"] = getAuthHeader();
oRequest.ContentLength = strHead.Length + strTail.Length + longContentSize;
oRequest.SendChunked = true;

using (TextWriter tw = new StreamWriter(oRequest.GetRequestStream()))
{
    tw.Write(strHead);
    using (FileStream fileStream = new FileStream(strPath, FileMode.Open, 
           FileAccess.Read, System.IO.FileShare.ReadWrite))
    {
        StreamEncode(fileStream, tw);
    }
    tw.Write(strTail);
}
.....

Which calls into the routine:

public void StreamEncode(FileStream inputStream, TextWriter tw)
{
    // For Base64 there are 4 output characters for every 3 input bytes
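    // 9000 is a multiple of 3, so each block encodes without '=' padding
    // in the middle of the stream (9000 bytes in -> 12000 chars out)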
    byte[] base64Block = new byte[9000];
    int bytesRead = 0;
    string base64String = null;

    do
    {
        // read one block from the input stream
        bytesRead = inputStream.Read(base64Block, 0, base64Block.Length);

        // encode the base64 string
        base64String = Convert.ToBase64String(base64Block, 0, bytesRead);

        // write the string
        tw.Write(base64String);


    } while (bytesRead != 0);

}

Should I use something other than TextWriter because of the potentially large content? It seems very convenient for building up the whole payload of the request.

Is this totally the wrong approach? I want to be able to support very large files.

7 Answers
老娘就宠你
Answered 2019-06-04 08:53

Try reducing the block size, or avoid assigning the result of the Convert call to a variable:

bytesRead = inputStream.Read(base64Block, 0, base64Block.Length);
tw.Write(Convert.ToBase64String(base64Block, 0, bytesRead));
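
For what it's worth, here is a minimal sketch of that idea taken one step further: Convert.ToBase64CharArray encodes into a reusable char buffer, so the loop allocates no new string at all. It assumes BLOCK_SIZE is the same constant as in the question and is a multiple of 3 (so the concatenated blocks form valid base64); charBlock and charsWritten are names I made up.

public void StreamEncode(FileStream inputStream, TextWriter tw)
{
    byte[] base64Block = new byte[BLOCK_SIZE];
    // worst-case encoded size: 4 output chars for every 3 input bytes
    char[] charBlock = new char[((BLOCK_SIZE + 2) / 3) * 4];
    int bytesRead;

    while ((bytesRead = inputStream.Read(base64Block, 0, base64Block.Length)) > 0)
    {
        // encode into the reusable buffer -- no new string per iteration
        int charsWritten = Convert.ToBase64CharArray(base64Block, 0, bytesRead, charBlock, 0);
        tw.Write(charBlock, 0, charsWritten);
    }
}

This keeps per-iteration allocations at zero; it won't help, though, if the memory is actually being held by the request stream rather than by the encoder.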