Is it possible to get info about how much space is wasted by the changes in every commit, so I can find commits that added big files or a lot of files? This is all to try to reduce the git repo size (by rebasing and maybe filtering commits).
Personally, I found this answer to be most helpful when trying to find large files in the history of a git repo: Find files in git repo over x megabytes, that don't exist in HEAD
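The approach in that answer boils down to: list every blob that has ever existed together with its size, keep the big ones, and drop any path that still exists in HEAD. A minimal sketch of that idea (the 10 MiB threshold is arbitrary, it assumes paths without spaces, and the linked answer's actual script differs in detail):

# List every object in history with type, SHA, size, and path;
# keep blobs over 10 MiB; report those whose path is gone from HEAD.
git rev-list --objects --all |
  git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' |
  awk '$1 == "blob" && $3 > 10*1024*1024 { print $3, $4 }' |
  sort -rn |
  while read -r size path; do
    # cat-file -e exits 0 only if the path still exists at HEAD
    git cat-file -e "HEAD:$path" 2>/dev/null || echo "$size $path"
  done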
Forgot to reply; my answer is:

git cat-file -s <object>

where <object> can refer to a commit, blob, tree, or tag.

git fat find N

where N is in bytes, returns all the files in the whole history which are larger than N bytes. You can find out more about git-fat here: https://github.com/cyaninc/git-fat
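For example (the file path here is just illustrative):

# Size in bytes of the commit object at HEAD:
git cat-file -s HEAD
# Size in bytes of one file's blob as stored at HEAD:
git cat-file -s HEAD:path/to/big-file.bin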
You could do something like this (a sketch using git ls-tree, whose -l option prints each blob's size):
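# Recursively list every file in HEAD with its blob size in bytes,
# then sort numerically on the size column so the largest come last:
git ls-tree -r -t -l --full-name HEAD | sort -n -k 4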
This will show the largest files at the bottom (the fourth column is the file's blob size in bytes).
If you need to look at different branches, you'll want to change HEAD to those branch names. Or put this in a loop over the branches, tags, or revs you are interested in.
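For instance, a sketch of such a loop over all local branches (showing each branch's ten largest files; the count is arbitrary):

# For each local branch, print its ten largest blobs by size.
for ref in $(git for-each-ref --format='%(refname:short)' refs/heads); do
  echo "== $ref =="
  git ls-tree -r -t -l --full-name "$ref" | sort -n -k 4 | tail -n 10
done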