Inserting Large Object into Postgresql returns 532

Posted 2019-03-06 05:17

PostgreSQL 9.1, Npgsql 2.0.12

I have binary data that I want to store in a PostgreSQL database. Most files load fine; however, a large binary file (664 MB) is causing problems. When I try to load the file into PostgreSQL using Large Object support through Npgsql, the PostgreSQL server returns an 'out of memory' error.

I'm currently running this on a workstation with 4 GB of RAM, about 2 GB of which is free while PostgreSQL is idle.

This is the code I am using, adapted from the PG Foundry Npgsql User's Manual.

using (var transaction = connection.BeginTransaction())
{
    try
    {
        var manager = new NpgsqlTypes.LargeObjectManager(connection);
        var noid = manager.Create(NpgsqlTypes.LargeObjectManager.READWRITE);
        var lo = manager.Open(noid, NpgsqlTypes.LargeObjectManager.READWRITE);
        lo.Write(BinaryData);
        lo.Close();
        transaction.Commit();
        return noid;
    }
    catch
    {
        transaction.Rollback();
        throw;
    }
}

I've tried adjusting PostgreSQL's memory settings from their defaults to all manner of values, including:

  • shared_buffers
  • work_mem
  • maintenance_work_mem

So far I've found PostgreSQL to be a great database system, but this is a showstopper at present and I can't seem to get a file of this size into the database. I'd rather not have to manually chop the file into chunks and reassemble it client-side if I can help it.

Please help!?

2 Answers
走好不送
Answered 2019-03-06 05:57

Try decreasing max_connections to reserve memory for the few connections that need to perform one 700 MB operation. Increase work_mem (the memory available per operation) to 1 GB. Trying to cram 700 MB into one field sounds odd.

Increase the shared_buffers size to 4096MB.

smile是对你的礼貌
Answered 2019-03-06 06:05

The answer appears to be to call the Write() method of the LargeObject class iteratively with chunks of the byte array. I know I said I didn't want to deal with chunking the data, but what I really meant was chunking it into separate LargeObjects. With this solution I chunk the array, but it is still stored in the database as one object, meaning I don't have to keep track of file parts, just the one oid.

int i = 0;
do
{
    // Write at most 1000 bytes per call so the whole array is never sent in one piece.
    var length = 1000;
    if (i + length > BinaryData.Length) length = BinaryData.Length - i;
    byte[] chunk = new byte[length];
    Array.Copy(BinaryData, i, chunk, 0, length);
    lo.Write(chunk, 0, length);
    i += length;
} while (i < BinaryData.Length);
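
For completeness, here is a minimal sketch that combines the question's transaction code with the chunked write above. It assumes the same Npgsql 2.0.x LargeObjectManager API used in the question (and that Create() returns an int oid, as the question's code implies); the LargeObjectHelper class, the StoreLargeObject method name, and the 8192-byte chunk size are illustrative choices, not part of the original posts.

using System;
using Npgsql;

public static class LargeObjectHelper
{
    // Sketch only: stores binaryData as a single large object inside one transaction,
    // writing it in small slices so the full array is never sent in a single call.
    public static int StoreLargeObject(NpgsqlConnection connection, byte[] binaryData)
    {
        using (var transaction = connection.BeginTransaction())
        {
            try
            {
                var manager = new NpgsqlTypes.LargeObjectManager(connection);
                var noid = manager.Create(NpgsqlTypes.LargeObjectManager.READWRITE);
                var lo = manager.Open(noid, NpgsqlTypes.LargeObjectManager.READWRITE);

                const int chunkSize = 8192;   // illustrative; any modest size works
                var offset = 0;
                while (offset < binaryData.Length)
                {
                    var length = Math.Min(chunkSize, binaryData.Length - offset);
                    var chunk = new byte[length];
                    Array.Copy(binaryData, offset, chunk, 0, length);
                    lo.Write(chunk, 0, length);   // one slice per call
                    offset += length;
                }

                lo.Close();
                transaction.Commit();
                return noid;                  // a single oid identifies the whole object
            }
            catch
            {
                transaction.Rollback();
                throw;
            }
        }
    }
}

A caller with an open NpgsqlConnection would then use something like var oid = LargeObjectHelper.StoreLargeObject(connection, bytes); and store the returned oid wherever the file is referenced.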