How to git fetch efficiently from a shallow clone

Posted 2019-01-06 13:48

Question:

We use git to distribute an operating system and keep it up to date. We can't distribute the full repository since it's too large (>2GB), so we have been using shallow clones (~300MB). However, fetching from a shallow clone now inefficiently fetches the entire >2GB repository. This is an untenable waste of bandwidth for deployments.

The git documentation says you cannot fetch from a shallow repository, though that's not strictly true. Are there any workarounds that would let a git clone --depth 1 fetch just what has changed? Or some other strategy to keep the distribution size as small as possible while having all the bits git needs to do an update?

I tried cloning with --depth 20 to see if it would update more efficiently, but that didn't work. I also looked into http://git-scm.com/docs/git-bundle, but that seems to create huge bundles.

Answer 1:

--depth is a git fetch option. The documentation doesn't really highlight that git clone performs a fetch.

When you fetch, the two repositories negotiate who has what: starting from the remote's heads, they search backward for the most recent shared commits in the fetched refs' histories, then the remote sends all the missing objects needed to complete just the new commits between those shared commits and the newly fetched tips.

A --depth=1 fetch gets just the branch tips and no prior history. Subsequent fetches of those histories will retrieve everything new by the above procedure, but if the previously fetched commits aren't in the newly fetched history, fetch will retrieve all of it, unless you limit the fetch with --depth.

Your client did a depth=1 fetch from one repo and then switched URLs to a different repo. At least one long ancestry path in this new repo's refs apparently shares no commits with anything currently in your repo. That might be worth investigating, but either way, unless there's some particular reason not to, your clients can simply do every fetch with --depth=1, as in the sketch below.
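A minimal update sketch along those lines (the remote name origin and branch name master are assumptions; substitute your own):

# limit the fetch so only the newest commits are transferred
git fetch --depth=1 origin
# move the working tree to the freshly fetched tip
git reset --hard origin/master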



Answer 2:

I just ran git clone git@github.com:torvalds/linux.git and it took so long that I aborted it with CTRL+C.

Then I ran git clone git@github.com:torvalds/linux.git --depth 1 and it cloned quite fast, and git log shows only one commit.

So clone --depth 1 should work. If you need to update an existing repository, use git fetch origin branchname:branchname --depth 1. That works too; it fetches only one commit.

Summing up:

Initial clone:

git clone git_url --depth 1

Code update:

git fetch origin branch:branch --depth 1
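One caveat with that command: git refuses to fetch into the currently checked-out branch, so when the branch you want to update is the one you are on, a sketch of a workaround (the branch name master is a placeholder) is:

# fetch only the newest commit of the branch into FETCH_HEAD
git fetch --depth 1 origin master
# point the working tree at what was just fetched
git reset --hard FETCH_HEAD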


Answer 3:

Note that Git 1.9/2.0 (Q1 2014) made fetching from a shallow clone more efficient.
See commit 82fba2b, from Nguyễn Thái Ngọc Duy (pclouds):

Now that git supports data transfer from or to a shallow clone, these limitations are not true anymore.

All the details are in "shallow.c: the 8 steps to select new commits for .git/shallow".

You can see the consequence in commits like 0d7d285, f2c681c, and c29a7b8, which support clone, send-pack/receive-pack with/from shallow clones.
smart-http now supports shallow fetch/clone too.
You can even clone from a shallow repo.

Update 2015: git 2.5+ (Q2 2015) even allows fetching a single commit! See "Pull a specific commit from a remote git repository" and the sketch just below.
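For instance, a sketch of such a single-commit fetch (the SHA is a placeholder, and the server must allow it via uploadpack.allowReachableSHA1InWant or uploadpack.allowAnySHA1InWant):

# fetch exactly one commit by its SHA, with no history behind it
git fetch --depth 1 origin <commit-sha>
# inspect or check out what was fetched
git checkout FETCH_HEAD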

Update 2016 (Oct.): git 2.11+ (Q4 2016) allows for fetching:

  • since a date: --shallow-since=<date>
  • with a greater depth: --deepen=N
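A brief sketch of those two options on a client running git 2.11 or later (the date and the count are placeholders):

# extend the shallow history to include everything committed after a date
git fetch --shallow-since=2016-01-01 origin
# or deepen the existing shallow boundary by 10 more commits
git fetch --deepen=10 origin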


Answer 4:

If you can select a specific branch, it can be even faster. Here's an example using the Spark master branch and its latest tag at the time:

Initial clone

git clone git@github.com:apache/spark.git --branch master --single-branch --depth 1

Update to specific tag

git fetch --depth 1 origin tags/v1.6.0

It becomes very fast to switch tags/branches this way (see the sketch below for checking out what was fetched).
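After a tag fetch like the one above, FETCH_HEAD records what was fetched, so a minimal way to actually switch to it (this leaves HEAD detached) is:

git checkout FETCH_HEAD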



Answer 5:

I don't know if it suits your setup, but what I do is keep a full clone of the repo in a separate directory. Then I do a shallow clone from the remote repository with a reference to the local one.

git clone --depth 1 --reference /path/to/local/clone git@some.com:group/repo.git

That way only the differences between the reference repository and the remote are actually fetched. To make it even quicker you can use the --shared option, but be sure to read about the restrictions in the git documentation (it can be dangerous).

I also found that in some circumstances, when the remote has changed a lot, the clone starts fetching too much data. It's good to break it off then, update the reference repo (which, strangely, takes much less bandwidth than it did the first time), and then start the clone again, roughly as in the sketch below.
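A sketch of that refresh-then-reclone cycle, reusing the placeholder path and URL from above:

# bring the local reference repository up to date first
git -C /path/to/local/clone fetch --all --prune
# then redo the shallow clone; objects already in the reference are borrowed, not transferred
git clone --depth 1 --reference /path/to/local/clone git@some.com:group/repo.git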