What is the default global variable scope in TensorFlow? How can I inspect the object? Does anyone have ideas about that?
Technically, there's no global variable scope that covers all variables. If you run, say, `x = tf.Variable(0)` from the top level of your script, a new variable `x` without a variable scope is created in the default graph.

However, the situation is a bit different for the `tf.get_variable()` function. The first thing it does is call `tf.get_variable_scope()`, which returns the current variable scope, which in turn looks the scope up on an internal scope stack. Note that this stack can be empty, in which case a new (root) scope is simply created and pushed onto the stack.
If this is the object you need, you can access it simply by calling `tf.get_variable_scope()` from the top level, or by going to `ops.get_collection(_VARSCOPE_KEY)` directly if you're already inside a scope. This is exactly the scope that a new variable gets from a call to the `tf.get_variable()` function. It's an ordinary instance of the `tf.VariableScope` class that you can easily inspect.
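For example, a quick way to poke at that object (again using `tf.compat.v1` so it runs on TF 2.x; the scope name and `reuse` setting are just for illustration):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

with tf.Graph().as_default():
    with tf.variable_scope("outer", reuse=tf.AUTO_REUSE):
        scope = tf.get_variable_scope()  # the current VariableScope instance
        print(type(scope).__name__)      # 'VariableScope'
        print(scope.name)                # 'outer'
        print(scope.reuse)               # the reuse mode set above
        print(scope.original_name_scope) # the name scope it opened
```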