Memory Estimation for a Convolutional Neural Network

Posted 2019-07-24 23:33

Hello Everyone,

I am working on an image classification problem using TensorFlow and a convolutional neural network. My model has the following layers:

  • Input image of size 2456x2058
  • 3 convolution layers {Con1: shape (10,10,1,32); Con2: shape (5,5,32,64); Con3: shape (5,5,64,64)}
  • 3 max-pool (2x2) layers
  • 1 fully connected layer
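For a rough pre-run estimate, the layer stack above can be totalled by hand. The following is a back-of-envelope sketch under stated assumptions (SAME padding, stride-1 convolutions, float32, batch size 1 -- none of which are given in the post); real GPU usage adds gradients, optimizer state, and cuDNN workspace on top of this.

```python
# Back-of-envelope memory estimate for the layers above.
# Assumptions (not stated in the question): SAME padding, stride-1
# convolutions, 2x2 max pools, float32 (4 bytes), batch size 1.
BYTES_PER_FLOAT = 4
convs = [(10, 10, 1, 32), (5, 5, 32, 64), (5, 5, 64, 64)]  # (kh, kw, cin, cout)

h, w, c = 2456, 2058, 1
params = 0
acts = h * w * c                              # input activations
for kh, kw, cin, cout in convs:
    params += kh * kw * cin * cout + cout     # weights + biases
    acts += h * w * cout                      # conv output (SAME keeps h, w)
    h, w = h // 2, w // 2                     # 2x2 max pool halves each side
    acts += h * w * cout                      # pooled output
    c = cout

print("parameters: %.1f MiB" % (params * BYTES_PER_FLOAT / 2**20))
print("activations: %.1f MiB per image (forward only)" % (acts * BYTES_PER_FLOAT / 2**20))
```

For this stack the convolution activations dominate the convolution weights by orders of magnitude, and training roughly doubles the activation figure for stored gradients; the fully connected layer's weight matrix (fed by a 307x257x64 feature map here) can also dominate parameter memory depending on its width, so treat this as a lower bound.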

I have tried using the NVIDIA-SMI tool, but it only shows me the GPU memory consumption while the model is running.
I would like to know if there is a way to estimate the memory before running the model on the GPU, so that I can design models with the available memory in mind.
I have tried using this method for estimation, but my calculated memory and the observed memory utilisation are nowhere near each other.

Thank you all for your time.

3 Answers
狗以群分
#2 · 2019-07-25 00:03

As far as I understand, when you open a session with tensorflow-gpu, it allocates all the memory on the GPUs that are available. So when you look at the nvidia-smi output, you will always see the same amount of used memory, even if the model actually uses only part of it. There are options when opening a session to force TensorFlow to allocate only part of the available memory (see How to prevent tensorflow from allocating the totality of a GPU memory? for instance).
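For instance, a minimal TF 1.x sketch (not part of the answer above): setting `allow_growth` makes TensorFlow allocate GPU memory on demand, so nvidia-smi reflects something closer to the model's real footprint.

```python
import tensorflow as tf  # TensorFlow 1.x API

# Grow GPU allocations on demand instead of grabbing everything up front,
# so nvidia-smi shows roughly what the model actually uses.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
```

Note that with `allow_growth` the allocation only grows and is never released back to the system while the process lives.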

beautiful°
#3 · 2019-07-25 00:04

You can control the GPU memory allocation in TensorFlow. Once you have calculated the memory requirements of your deep learning model, you can cap the allocation with tf.GPUOptions.

For example, to allocate roughly 40% of an 8 GB GPU (about 3.2 GB):

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4  # 40% of each visible GPU
session = tf.Session(config=config, ...)

Once done, pass it to tf.Session via the config parameter.

per_process_gpu_memory_fraction bounds the fraction of total GPU memory the process is allowed to allocate.

Here's the link to the documentation:

https://www.tensorflow.org/tutorials/using_gpu

疯言疯语
#4 · 2019-07-25 00:05

"NVIDIA-SMI ... shows me the GPU memory consumption as the model runs"

TF pre-allocates all available GPU memory when you use it, so nvidia-smi will show nearly 100% memory usage ...

"but my calculated memory and observed memory utilisation are nowhere near each other."

... so this is unsurprising.
