Where can I find documentation about the Embedding Projector included in TensorBoard? There are some references to it here, but there's no step-by-step example/tutorial on how to use it.
Tags: tensorboard
It sounds like you want to get the Visualization section with t-SNE running in TensorBoard. As you've described, the TensorFlow API provides only the bare essential commands in the how-to document.
I’ve uploaded my working solution with the MNIST dataset to my GitHub repo.
Original Stack Overflow answer: TensorBoard Embedding Example?
@Ehsan
Your explanation is very good. The key here is that every Variable has to be initialized before the saver.save(...) call.
@Everyone
Also, the TensorBoard embedding projector simply visualizes saved instances of the Variable class. It doesn't care whether they hold words, images, or anything else.
The official doc https://www.tensorflow.org/get_started/embedding_viz does not point out that it is a direct visualization of a matrix, which in my opinion introduces a lot of confusion.
Maybe you wonder what it means to visualize a matrix. A matrix can be interpreted as a collection of points in a space.
If I have a matrix with shape (100, 200), I can interpret it as a collection of 100 points, where each point has 200 dimensions. In other words, 100 points in a 200-dimensional space.
In the word2vec case, we have 100 words where each word is represented by a vector of length 200. The TensorBoard embedding projector simply uses PCA or t-SNE to visualize this collection (matrix).
Therefore, you can throw in any random matrix. If you throw in an image with shape (1080, 1920), it will visualize each row of the image as if it were a single point.
That being said, you can visualize the embeddings of any Variable class instances by simply saving them:
```python
saver = tf.train.Saver([a, _list, of, wanted, variables])
# ...some code you may or may not have...
saver.save(sess, os.path.join(LOG_DIR, 'filename.ckpt'))
```
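For instance, a minimal runnable sketch of the image case above, assuming TF 1.x (random noise stands in for a real image; the names `image` and `LOG_DIR` are illustrative):

```python
import os
import numpy as np
import tensorflow as tf

LOG_DIR = 'logs'
os.makedirs(LOG_DIR, exist_ok=True)

# Random noise standing in for a 1080x1920 grayscale image: the projector
# will show 1080 points, one per row, each in a 1920-dimensional space.
image = tf.Variable(np.random.rand(1080, 1920).astype(np.float32), name='image')

saver = tf.train.Saver([image])  # only save the variables we want to visualize
with tf.Session() as sess:
    sess.run(image.initializer)  # initialize before calling saver.save(...)
    saver.save(sess, os.path.join(LOG_DIR, 'image.ckpt'))
```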
I will try to make a detailed tutorial later.
As far as I am aware, this is the only documentation about embedding visualization on the TensorFlow website. Though the code snippet might not be very instructive for first-time users, here is an example usage:
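What follows is a minimal sketch of that usage, assuming TF 1.x; the random (500, 784) matrix stands in for real data such as flattened 28x28 MNIST digits, and the names `images` and `LOG_DIR` are illustrative:

```python
import os
import numpy as np
import tensorflow as tf

LOG_DIR = 'logs'
os.makedirs(LOG_DIR, exist_ok=True)

# 500 data points, each a flattened 28x28 "image" (random noise here).
images = tf.Variable(np.random.rand(500, 784).astype(np.float32), name='images')

saver = tf.train.Saver([images])
with tf.Session() as sess:
    sess.run(images.initializer)  # every saved Variable must be initialized
    saver.save(sess, os.path.join(LOG_DIR, 'images.ckpt'))
```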
Here we first create a TensorFlow variable (`images`) and then save it using `tf.train.Saver`. After executing the code we can launch TensorBoard by issuing the `tensorboard --logdir=logs` command and opening `localhost:6006` in a browser. However, this visualization is not very helpful because we do not see the different classes to which each data point belongs. In order to distinguish each class from another, one should provide some metadata:
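A sketch of that metadata step, continuing the snippet above (the labels here are random stand-ins; the `metadata.tsv` file and projector config follow the layout in the official guide):

```python
from tensorflow.contrib.tensorboard.plugins import projector

# One label per line, in the same row order as the rows of `images`.
# A single-column metadata file must not have a header row.
labels = np.random.randint(0, 10, 500)  # stand-in class labels
with open(os.path.join(LOG_DIR, 'metadata.tsv'), 'w') as f:
    for label in labels:
        f.write('{}\n'.format(label))

# Point the projector plugin at the saved tensor and its metadata file.
config = projector.ProjectorConfig()
embedding = config.embeddings.add()
embedding.tensor_name = images.name
embedding.metadata_path = os.path.join(LOG_DIR, 'metadata.tsv')
projector.visualize_embeddings(tf.summary.FileWriter(LOG_DIR), config)
```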
Which gives us a visualization where each point carries its class label.
Sadly, I cannot find more comprehensive documentation. Below I collect all the related resources:
PS: Thanks for upvoting me. Now I can post all the links.