I'm trying to get familiar with the TensorFlow framework from this site by playing around with Linear Regression (LR). The source code for LR can be found here, under the name 03_linear_regression_sol.py.
Generally, the defined model for LR is Y_predicted = X * w + b, where

- w and b are parameters (tf.Variable)
- X and Y are training data (placeholder); Y_predicted is computed from them by the model
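For concreteness, here is a minimal sketch of that graph under TensorFlow 1.x (the placeholder names are my own and not necessarily those used in the sample file):

import tensorflow as tf

# Training data enters the graph through placeholders
X = tf.placeholder(tf.float32, name='X')
Y = tf.placeholder(tf.float32, name='Y')

# Trainable parameters, initialized to the constant 0.0
w = tf.Variable(0.0, name='weights')
b = tf.Variable(0.0, name='bias')

# The linear model
Y_predicted = X * w + b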
For w and b, the sample code defines them as follows:
w = tf.Variable(0.0, name='weights')
b = tf.Variable(0.0, name='bias')
I changed these two lines of code slightly, as follows:
w = tf.get_variable('weights', [], dtype=tf.float32)
b = tf.get_variable('bias', [], dtype=tf.float32)
Running this experiment, I got different values of total_loss/n_samples for the two versions. With the original version, the result is deterministic: every run gives 1539.0050282141283. With the modified version, the results are non-deterministic and vary from run to run; for example, total_loss/n_samples could be 1531.3039793868859, 1526.3752814714044, etc.
What is the difference between tf.Variable() and tf.get_variable()?
tf.Variable accepts an initial value upon creation (a constant); this explains the deterministic results when you use it.

tf.get_variable is slightly different: it has an initializer argument, which defaults to None. Per the documentation, if initializer is None, the default initializer passed in the variable scope is used, and if that one is None too, a glorot_uniform_initializer is used. Since you didn't pass an initializer, the variable got a random (Glorot uniform) initial value, which is why each run starts from a different point and converges to a slightly different loss.
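If you want the modified version to be deterministic as well, pass an explicit initializer. A minimal sketch, assuming TensorFlow 1.x and using tf.zeros_initializer to reproduce the 0.0 starting values of the original tf.Variable version:

import tensorflow as tf

# Passing an explicit initializer removes the randomness: both
# parameters now start at 0.0, just like tf.Variable(0.0, ...)
w = tf.get_variable('weights', [], dtype=tf.float32,
                    initializer=tf.zeros_initializer())
b = tf.get_variable('bias', [], dtype=tf.float32,
                    initializer=tf.zeros_initializer())

With identical starting values and the same deterministic gradient-descent updates, both versions should report the same total_loss/n_samples.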