I'm not even sure if this is easily possible, but I would like to list the files that were recently deleted from a directory, recursively if possible.
I'm looking for a solution that does not require the creation of a temporary file containing a snapshot of the original directory structure against which to compare, because write access might not always be available. Edit: If it's possible to achieve the same result by storing the snapshot in a shell variable instead of a file, that would solve my problem.
Something like:
find /some/directory -type f -mmin -10 -deletedFilesOnly
Edit: OS: I'm using Ubuntu 14.04 LTS, but the command(s) would most likely be run on a variety of Linux boxes or Docker containers, most or all of which should be using ext4, and to which I would most likely not have access to make modifications.
You can use the `debugfs` utility. First, run

debugfs /dev/hda13

in your terminal (replacing `/dev/hda13` with your own disk/partition). (NOTE: you can find the name of your disk by running `df /` in the terminal.)

Once in debug mode, you can use the `lsdel` command to list inodes corresponding to deleted files. To get the paths of these deleted files, you can use

debugfs -R "ncheck 320236" /dev/hda13

replacing the number with your particular inode (note that `-R` also needs the device as its last argument). From here you can also inspect the contents of deleted files with `cat`. (NOTE: you can also recover files from here if necessary.)

Great post about this here.
Thanks for your comments & answers, guys.

`debugfs` seems like an interesting solution to the initial requirements, but it is a bit overkill for the simple & light solution I was looking for; if I'm understanding correctly, the kernel must be built with debugfs support and the target directory must be in a debugfs mount. Unfortunately, that won't really work for my use case; I must be able to provide a solution for existing, "basic" kernels and directories.

As this seems virtually impossible to accomplish, I've been able to negotiate and relax the requirements down to listing the number of files that were recently deleted from a directory, recursively if possible.
This is the solution I ended up implementing: a `find` command piped into `wc` to count the original number of files in the target directory (recursively). The result can then easily be stored in a shell or script variable, without requiring write access to the file system.

DEL_SCAN_ORIG_AMOUNT=$(find /some/directory -type f | wc -l)

Then, on each subsequent check:

DEL_SCAN_NEW_AMOUNT=$(find /some/directory -type f | wc -l)
DEL_SCAN_DEL_AMOUNT=$((DEL_SCAN_ORIG_AMOUNT - DEL_SCAN_NEW_AMOUNT))
DEL_SCAN_ORIG_AMOUNT=$DEL_SCAN_NEW_AMOUNT
if [ "$DEL_SCAN_DEL_AMOUNT" -gt 0 ]; then echo "$DEL_SCAN_DEL_AMOUNT deleted files"; fi
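Wrapped up, one pass of the check looks like the sketch below. It runs against a throwaway `mktemp` directory purely so the example is self-contained; in real use the directory would be `/some/directory` and the last four lines would run once per interval:

```shell
#!/bin/bash
# Self-contained demo of the count-based deletion check.
dir=$(mktemp -d)                  # stand-in for /some/directory
touch "$dir/a" "$dir/b" "$dir/c"

DEL_SCAN_ORIG_AMOUNT=$(find "$dir" -type f | wc -l)   # initial count: 3

rm "$dir/b" "$dir/c"              # simulate two deletions between checks

DEL_SCAN_NEW_AMOUNT=$(find "$dir" -type f | wc -l)    # new count: 1
DEL_SCAN_DEL_AMOUNT=$((DEL_SCAN_ORIG_AMOUNT - DEL_SCAN_NEW_AMOUNT))
DEL_SCAN_ORIG_AMOUNT=$DEL_SCAN_NEW_AMOUNT             # roll count forward
if [ "$DEL_SCAN_DEL_AMOUNT" -gt 0 ]; then
    echo "$DEL_SCAN_DEL_AMOUNT deleted files"
fi
rm -rf "$dir"
```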
Unfortunately, this solution won't report anything if the same number of files were created and deleted during an interval, but that's not a huge issue for my use case.
To circumvent this, I'd have to store the actual list of files instead of the count, but I haven't been able to make that work using shell variables. If anyone could figure that out, it would help me immensely, as it would meet the initial requirements!
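For reference, here is a rough sketch of the direction I have in mind, in case it helps anyone spot a problem with it. It assumes bash (for the `<( )` process substitution) and file names without embedded newlines, and the `mktemp` demo directory is just a stand-in for the real target:

```shell
#!/bin/bash
# Keep the file *list* (not just the count) in a shell variable,
# then diff the old and new snapshots with comm(1).
dir=$(mktemp -d)                  # stand-in for /some/directory
touch "$dir/keep.txt" "$dir/gone.txt"

DEL_SCAN_ORIG_LIST=$(find "$dir" -type f | sort)   # initial snapshot

rm "$dir/gone.txt"                # simulate a deletion between checks

DEL_SCAN_NEW_LIST=$(find "$dir" -type f | sort)    # later snapshot
# Lines present only in the old list are files that no longer exist:
deleted=$(comm -23 <(printf '%s\n' "$DEL_SCAN_ORIG_LIST") \
                   <(printf '%s\n' "$DEL_SCAN_NEW_LIST"))
printf '%s\n' "$deleted"
DEL_SCAN_ORIG_LIST=$DEL_SCAN_NEW_LIST              # roll snapshot forward
```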
I'd also like to know if anyone has comments on either of the two approaches.