Question:
I have a Git repository on BitBucket which is more than 4GB.
I can't clone the repository using the normal Git command, as it fails (it looks like it's working for a long time but then rolls back).
I also can't download the repository as a zip from the BitBucket interface, as it reports:
Feature unavailable: This repository is too large for us to generate a download.
Is there any way to download a GIT repository incrementally?
Answer 1:
If you don't need to pull the whole history, you can specify the number of revisions to clone:
git clone <repo_url> --depth=1
Of course, this might not help if you have a particularly large file in your repository.
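If the problem really is a few huge files, a partial clone can skip large blobs entirely. This is a hedged sketch: it assumes Git 2.17+ on the client and a server that supports partial clone (which BitBucket may or may not), and the 10m limit is just an example value:
git clone --filter=blob:limit=10m <repo_url>
Git then downloads only blobs under 10MB up front and fetches bigger ones on demand when a checkout actually needs them.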
Answer 2:
One potential technique is just to clone a single branch. You can then pull in more later (a sketch of that follows below). Do
git clone [url_of_remote] --branch [branch_name] --single-branch
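To "pull in more later", something like the following should work (a sketch; another_branch is a placeholder for whatever branch you want next):
git remote set-branches --add origin another_branch
git fetch origin another_branch
git checkout another_branch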
Large repositories seem to be a major weakness with Git. You can read about that at http://www.sitepoint.com/managing-huge-repositories-with-git/. That article mentions a Git extension called git-annex that can help with large files. Check it out at https://git-annex.branchable.com/. It helps by allowing Git to manage files without checking their contents into Git. Disclaimer: I've never tried it myself.
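For reference, basic git-annex usage looks roughly like this, going by its documentation (untested here, as noted above):
cd your-repo
git annex init
git annex add big-video.mp4   # content is stored outside Git; a symlink is checked in
git commit -m "Add big file via git-annex"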
Some of the solutions at How do I clone a large Git repository on an unreliable connection? may also help.
EDIT: Since you just want the files, you may be able to try git archive. You'd use syntax something like
git archive --remote=ssh://git@bitbucket.org/username/reponame.git --format=tar --output="file.tar" master
I tried to test on a repo in my AWS CodeCommit account, but it doesn't seem to allow it. Someone on BitBucket may be able to test. Note that on Windows you'd want to use zip rather than tar (see the variant below), and this all has to be done over an SSH connection, not HTTPS.
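The zip variant would look like this (same placeholders as above, and untested against BitBucket for the same reason):
git archive --remote=ssh://git@bitbucket.org/username/reponame.git --format=zip --output="file.zip" master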
Read more about git archive at http://git-scm.com/docs/git-archive
Answer 3:
For me, the approach described in this answer worked perfectly: https://stackoverflow.com/a/22317479/6332374, but with one little improvement because of the big repo.
First, disable compression:
git config --global core.compression 0
Then clone just a part of your repo:
git clone --depth 1 <repo_URI>
and now "the rest"
git fetch --unshallow
But here is the trick: when you have a big repo, sometimes you must perform that step multiple times. So, again:
git fetch --unshallow
and so on.
Try it multiple times. You will probably see that each time you run 'unshallow' you get more and more objects before the error.
And at the end, just to be sure:
git pull --all
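If repeating the unshallow step by hand gets tedious, a simple shell loop can automate the retries (a minimal sketch; it just reruns the fetch until it exits successfully):
until git fetch --unshallow; do
    echo "fetch interrupted, retrying..."
done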
Answer 4:
I got it to work by using the method from fatal: early EOF fatal: index-pack failed.
But only after I set up SSL - this method still didn't work over HTTP.
The support at BitBucket was really helpful and pointed me in this direction.
Answer 5:
BitBucket should have a way to build an archive even for a large repo with Git 2.13.x/2.14 (Q3 2017).
See commit 867e40f (30 Apr 2017), commit ebdfa29 (27 Apr 2017), commit 4cdf3f9, commit af95749, commit 3c78fd8, commit c061a14, and commit 758c1f9, by Rene Scharfe.
(Merged by Junio C Hamano -- gitster -- in commit f085834, 16 May 2017)
archive-zip: support files bigger than 4GB
Write a zip64 extended information extra field for big files as part of their local headers and as part of their central directory headers. Also write a zip64 version of the data descriptor in that case.
If we're streaming, then we don't know the compressed size at the time we write the header. Deflate can end up making a file bigger instead of smaller if we're unlucky. Write a local zip64 header already for files with a size of 2GB or more in this case, to be on the safe side.
Both sizes need to be included in the local zip64 header, but the extra field for the directory must only contain 64-bit equivalents for 32-bit values of 0xffffffff.
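In practice, that means an archive command like the one in Answer 2 should succeed on such a repo once the server runs Git 2.14+. A local test would look like (a sketch; any tree-ish works in place of master):
git archive --format=zip --output=big.zip master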
Answer 6:
1) You can initially download a single branch with only the latest commit revision (depth=1); this will significantly reduce the size of the repo to download and still let you work on the code base:
git clone --depth <Number> <repository> --branch <branch name> --single-branch
example:
git clone --depth 1 https://github.com/dundermifflin/dwightsecrets.git --branch scranton --single-branch
2) Later you can get all the commits (after this your repo will be in the same state as after a full git clone):
git fetch --unshallow
or, if it's still too much, get only the last 25 commits:
git fetch --depth=25
Another way: git clone is not resumable, but you can first git clone the repo on a third-party server and then download the complete repo over HTTP/FTP, which is resumable.
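A sketch of that approach (the server name is a placeholder; any machine you control with a good connection and an HTTP server will do):
# on the intermediate server:
git clone --mirror https://bitbucket.org/username/reponame.git
tar czf repo.tar.gz reponame.git
# then on your machine; -c lets wget resume an interrupted download:
wget -c http://yourserver.example/repo.tar.gz
tar xzf repo.tar.gz
git clone reponame.git my-working-copy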
Answer 7:
You can clone just the first commit, then the second commit, etc. It will be easier to pull if the difference between two commits is not very large. You can see more details in this answer.
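In practice, shallow cloning plus step-wise deepening does this (a sketch; --deepen needs Git 2.11+, and <repo_url> / <repo_dir> are placeholders):
git clone --depth=1 <repo_url>
cd <repo_dir>
# repeat as needed; each run fetches roughly 100 more commits of history
git fetch --deepen=100
# once most of the history is in, finish it off:
git fetch --unshallow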