I need to make the repo smaller. I think I can do that by removing problematic binary files from the git history:
git filter-branch --index-filter 'git rm --cached --ignore-unmatch BigFile' -- --all
And then releasing the objects:
rm -rf .git/refs/original/
git reflog expire --expire=now --all
git gc --aggressive --prune=now
(Feel free to comment if those commands are wrong.)
The problem: how do I identify those big files, so that I can assess whether to remove them from the git history? Most likely they are not in the working tree anymore - they have been deleted and probably also untracked with:
git rm --cached BigFile

In my answer here I wrote a script that will tell you the largest objects, files, or directories. Without arguments, it'll tell you the size of all objects, sorted by size. You can pass it --sum to add up all the objects for each file and print the per-file totals, or --directories to do the same for all files in each directory. I hope it's useful!
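The linked script isn't reproduced above; purely as a sketch of the behaviour described (one rev-list pass and one cat-file --batch-check pass, blobs only, paths containing spaces not handled), something like this shell script would do it:

#!/bin/sh
# Sketch only: list blob sizes from history, optionally summed per file or per directory.
list_blobs() {
    # one line per blob reachable from any ref: "<size> <path>"
    git rev-list --objects --all |
    git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' |
    awk '$1 == "blob" {print $3, $4}'
}

case "$1" in
--sum)
    # total size of all revisions of each file
    list_blobs | awk '{sum[$2] += $1} END {for (f in sum) print sum[f], f}' | sort -n
    ;;
--directories)
    # total size of all files under each directory
    list_blobs | awk '{d = $2; sub(/\/[^\/]*$/, "", d); if (d == $2) d = "."; sum[d] += $1}
        END {for (d in sum) print sum[d], d}' | sort -n
    ;;
*)
    # every blob in history, sorted by size
    list_blobs | sort -n
    ;;
esac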

Couldn't help optimizing MatrixManAtYrService's answer. This way git rev-list is called only once (and not once per object being displayed), and the script is clearer.
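A sketch of what such a single-pass pipeline might look like (not the original snippet; it assumes a git recent enough for cat-file --batch-check to accept a format string, and truncates paths at the first space):

# one rev-list pass and one cat-file pass; output is "<hash> <size> <path>", biggest last
git rev-list --objects --all |
git cat-file --batch-check='%(objectname) %(objecttype) %(objectsize) %(rest)' |
awk '$2 == "blob" {print $1, $3, $4}' |
sort -n -k2 |
tail -n 20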

You can find the hash IDs of the largest objects like this:
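For instance (a sketch, not necessarily the exact original command), list every object with its type and size, biggest last:

git rev-list --objects --all |
cut -d' ' -f1 |
git cat-file --batch-check |
sort -k3 -n |
tail -n 10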
Then, for a particular SHA, you can do this to get the file name:
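One way (a sketch; <SHA> is a placeholder for the hash found above) is to grep the object listing, which pairs each hash with its path:

git rev-list --objects --all | grep <SHA>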
Not sure if there's a more efficient way to do it. If you know for sure that everything is in pack files (not loose objects), git verify-pack -v produces output that includes the size, and I seem to remember seeing a script somewhere that would parse that output and match each object back up with the original files.

twalberg's answer does the trick. I wrapped it up in a loop so that you can list files in order by size:
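Roughly along these lines (a sketch rather than the original loop; it assumes the objects are already packed, e.g. after a git gc):

# biggest pack objects first, each mapped back to its path; slow but simple,
# since git rev-list runs once per object shown
git verify-pack -v .git/objects/pack/pack-*.idx |
grep blob |
sort -k3 -n -r |
head -n 20 |
while read sha type size rest; do
    path=$(git rev-list --objects --all | grep "^$sha" | cut -d' ' -f2-)
    printf '%s\t%s\n' "$size" "$path"
done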
head -n 20 restricts the output to the top 20. Change as necessary. Once you've identified the problem files, check out this answer for how to remove them.