We can train an autoencoder in pylearn2 using the YAML file below (along with pylearn2/scripts/train.py):
!obj:pylearn2.train.Train {
    dataset: &train !obj:pylearn2.datasets.mnist.MNIST {
        which_set: 'train',
        start: 0,
        stop: 50000
    },
    model: !obj:pylearn2.models.autoencoder.DenoisingAutoencoder {
        nvis : 784,
        nhid : 500,
        irange : 0.05,
        corruptor: !obj:pylearn2.corruption.BinomialCorruptor {
            corruption_level: .2,
        },
        act_enc: "tanh",
        act_dec: null,    # Linear activation on the decoder side.
    },
    algorithm: !obj:pylearn2.training_algorithms.sgd.SGD {
        learning_rate : 1e-3,
        batch_size : 100,
        monitoring_batches : 5,
        monitoring_dataset : *train,
        cost : !obj:pylearn2.costs.autoencoder.MeanSquaredReconstructionError {},
        termination_criterion : !obj:pylearn2.termination_criteria.EpochCounter {
            max_epochs: 10,
        },
    },
    save_path: "./dae_l1.pkl",
    save_freq: 1
}
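To run the training, save the YAML above to a file (say dae_l1.yaml; the name is illustrative) and pass it to the training script: python pylearn2/scripts/train.py dae_l1.yaml.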
What we get is the learned autoencoder model, saved as "dae_l1.pkl".
If I want to use this model for supervised training, I can use "dae_l1.pkl" to initialize a layer of an MLP. I can then train that MLP, and even predict its output using the fprop function.
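For reference, prediction with fprop looks roughly like this (a sketch only; "mlp.pkl" is a hypothetical path where the supervised model was saved):

    import theano
    from pylearn2.utils import serial

    mlp = serial.load('mlp.pkl')  # hypothetical path of the trained MLP
    # Symbolic batch matching the MLP's input space.
    X = mlp.get_input_space().make_theano_batch()
    # Compile the forward pass.
    predict = theano.function([X], mlp.fprop(X))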
But what if I don't want to use this pretrained model for supervised learning, and I just want to save the new representation of my data learned by the autoencoder?
How can I do this?
An even more detailed version of this question is posted here.
I think you can use the encode and decode functions of the autoencoder to get the hidden representation. For example:
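A minimal sketch, assuming the model was saved as "dae_l1.pkl" by the training run above:

    import theano
    from pylearn2.utils import serial

    model = serial.load('dae_l1.pkl')

    # Symbolic input batch matching the model's input space (784 visible units).
    X = model.get_input_space().make_theano_batch()

    # Compile a function mapping raw input to the hidden representation.
    encode_fn = theano.function([X], model.encode(X))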
Then, the representation will be:
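Continuing the snippet above (the data here is a stand-in; in practice you would pass your own design matrix with one 784-dimensional example per row):

    import numpy as np
    import theano

    # Stand-in input: 10 examples, 784 features each.
    data = np.random.rand(10, 784).astype(theano.config.floatX)
    representation = encode_fn(data)  # shape (10, 500): one hidden vector per row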
The reconstruct method of the pickled model should do it - I believe usage is the same as fprop.
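If I recall correctly, for a DenoisingAutoencoder the reconstruct method corrupts the input before encoding and decoding it, so it returns the denoised reconstruction rather than the hidden code. A rough sketch of its usage:

    import theano
    from pylearn2.utils import serial

    model = serial.load('dae_l1.pkl')
    X = model.get_input_space().make_theano_batch()

    # reconstruct maps input -> (corrupted) -> hidden -> reconstructed input.
    reconstruct_fn = theano.function([X], model.reconstruct(X))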