Different outcomes when using tf.Variable() and tf.get_variable()

Posted 2019-07-17 01:46

I'm trying to get familiar with the TensorFlow framework from this site by playing around with linear regression (LR). The source code for LR can be found here, under the name 03_linear_regression_sol.py.

Generally, the model for LR is defined as Y_predicted = X * w + b, where

  • w and b are parameters (tf.Variable)
  • X and Y are the training data (fed through tf.placeholder), and Y_predicted is the model output
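For context, the graph from that exercise boils down to something like the following TF 1.x sketch (the learning rate and names here are illustrative, not copied verbatim from 03_linear_regression_sol.py):

import tensorflow as tf

# Placeholders for one (x, y) training sample at a time
X = tf.placeholder(tf.float32, name='X')
Y = tf.placeholder(tf.float32, name='Y')

# Trainable parameters, both starting at 0.0
w = tf.Variable(0.0, name='weights')
b = tf.Variable(0.0, name='bias')

# Model, per-sample squared loss, and gradient-descent step
Y_predicted = X * w + b
loss = tf.square(Y - Y_predicted, name='loss')
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)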

For w and b, the sample code defines them as follows

w = tf.Variable(0.0, name='weights')
b = tf.Variable(0.0, name='bias')

And I changed these two lines a little, as follows

w = tf.get_variable('weights', [], dtype=tf.float32)
b = tf.get_variable('bias', [], dtype=tf.float32)
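A quick way to see the effect of this change is to print the parameters right after initialization; with the tf.get_variable version their starting values differ on every run (a minimal sketch, assuming the graph built above):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # With tf.Variable(0.0, ...): always [0.0, 0.0]
    # With tf.get_variable(...) and no initializer: random values each run
    print(sess.run([w, b]))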

With this change, I got different values of total_loss/n_samples for the two versions. More specifically, the original version gives a deterministic result every time, 1539.0050282141283. But the modified version gives nondeterministic results across runs; for example, total_loss/n_samples could be 1531.3039793868859, 1526.3752814714044, and so on.

What is the difference between tf.Variable() and tf.get_variable()?

1 Answer
疯言疯语
Answered · 2019-07-17 02:20

tf.Variable requires an explicit initial value at creation time (here, the constant 0.0), which explains the deterministic results you see with it.

tf.get_variable is slightly different: it takes an initializer argument that defaults to None, which the documentation interprets like this:

If initializer is None (the default), the default initializer passed in the variable scope will be used. If that one is None too, a glorot_uniform_initializer will be used.

Since you didn't pass an initializer, the variable received a random initial value drawn from a Glorot uniform distribution, so each run starts from a different point and training converges to a slightly different loss.
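If you want the tf.get_variable version to behave like the original, pass an explicit initializer, or fix the graph-level random seed so the draw is repeatable (a sketch using standard TF 1.x APIs; the seed value 42 is arbitrary):

# Option 1: start from the same constant as tf.Variable(0.0)
w = tf.get_variable('weights', [], dtype=tf.float32,
                    initializer=tf.constant_initializer(0.0))
b = tf.get_variable('bias', [], dtype=tf.float32,
                    initializer=tf.zeros_initializer())

# Option 2: keep the random initializer but make it reproducible
tf.set_random_seed(42)

With option 1 you should recover the original deterministic loss; with option 2 you keep the random start but get the same result on every run.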
