Check failed: mdb_status == 0 (2 vs. 0) No such file or directory

Posted 2019-02-24 19:21

Question:

I received the following error while training. I have tried all the solutions I could find online and nothing works for me. I have checked the paths, and the LMDB files are non-zero in size, but the problem persists. I have no idea how to solve this issue.

I0411 12:42:53.114141 21769 layer_factory.hpp:77] Creating layer data
I0411 12:42:53.114586 21769 net.cpp:91] Creating Layer data
I0411 12:42:53.114604 21769 net.cpp:399] data -> data
I0411 12:42:53.114645 21769 net.cpp:399] data -> label
F0411 12:42:53.114650 21772 db_lmdb.hpp:14] Check failed: mdb_status == 0 (2 vs. 0) No such file or directory
*** Check failure stack trace: ***
I0411 12:42:53.114673 21769 data_transformer.cpp:25] Loading mean file from: /home/Documents/Test/Images300/train_image_mean.binaryproto
@ 0x7fa9436a3daa (unknown)
@ 0x7fa9436a3ce4 (unknown)
@ 0x7fa9436a36e6 (unknown)
@ 0x7fa9436a6687 (unknown)
@ 0x7fa943b0472e caffe::db::LMDB::Open()
@ 0x7fa943afc644 caffe::DataReader::Body::InternalThreadEntry()
@ 0x7fa940e46a4a (unknown)
@ 0x7fa9406fe182 start_thread
@ 0x7fa942a8a47d (unknown)
@ (nil) (unknown)
Aborted (core dumped)

Below are my prototxt file settings:

name: "GoogleNet"
layer {
    name: "data"
    type: "Data"
    top: "data"
    top: "label"
    include {
        phase: TRAIN
    }
    transform_param {
        mirror: true
        crop_size: 224
        mean_file: "/home/Documents/Test/Images300/train_image_mean.binaryproto"
    }
    data_param {
        source: "/home/caffe/examples/zImageDetection/ImageDetection_train_lmdb"
        batch_size: 32
        backend: LMDB
    }
}
layer {
    name: "data"
    type: "Data"
    top: "data"
    top: "label"
    include {
        phase: TEST
    }
    transform_param {
        mirror: false
        crop_size: 224
        mean_file: "/home/Documents/Test/Image300/test_image_mean.binaryproto"
    }
    data_param {
        source: "/home/caffe/examples/zImageDetection/ImageDetection_val_lmdb"
        batch_size: 50
        backend: LMDB
    }
}

Answer 1:

You have not set the paths to your LMDB directories correctly. Go to the directory where you created the LMDBs and get the absolute paths with this command:

$ readlink -f <LMDB_directory_name>

Use these paths in the source fields of your data_param blocks; that should solve your problem.
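For example, a minimal check (the directory and file names below are taken from the data_param source paths in the question and assume the LMDBs were actually created there):

$ cd /home/caffe/examples/zImageDetection
$ readlink -f ImageDetection_train_lmdb
/home/caffe/examples/zImageDetection/ImageDetection_train_lmdb
$ ls ImageDetection_train_lmdb
data.mdb  lock.mdb

A valid LMDB directory contains a data.mdb and a lock.mdb file. Copy the absolute path printed by readlink into the source field of the matching data_param block, and do the same for the validation LMDB.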



Answer 2:

To expand on Harsh's answer:

Make sure you carefully read the set-up steps on the Caffe Imagenet page. Some of the steps you have to carry out are embedded within the text; not all of them are in code boxes.

Specific to this case, you have to edit the file examples/imagenet/create_imagenet.sh, replacing the path/to placeholders with the correct paths for your environment, i.e. wherever the ImageNet files live. Lines 9 and 10 need your attention:

TRAIN_DATA_ROOT=/path/to/imagenet/train/
VAL_DATA_ROOT=/path/to/imagenet/val/

Also, at line 5, make sure that your EXAMPLE variable is set to a location with enough space for the compressed images: train requires 41 GB, but the pre-processing high-water mark is at least 55 GB; test occupies only 1.7 GB.
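As a rough sketch, the top of create_imagenet.sh might look like this after editing (the variable names are the ones used by the script shipped with Caffe; the paths themselves are placeholders you would replace with your own):

EXAMPLE=/data/imagenet_lmdb            # output location, needs ~55 GB free during conversion
DATA=data/ilsvrc12
TOOLS=build/tools

TRAIN_DATA_ROOT=/data/imagenet/train/  # directory holding the training images
VAL_DATA_ROOT=/data/imagenet/val/      # directory holding the validation images

Once the script has run, the generated LMDB directories live under $EXAMPLE, and their absolute paths are what belongs in the source fields of the prototxt.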