Question:
I have googled and found many solutions, but none of them work for me.
I am trying to clone from one machine by connecting to a remote server on the LAN.
Running this command from another machine causes an error, but running the SAME clone command using git://192.168.8.5 ... on the server itself works fine.
Any ideas?
user@USER ~
$ git clone -v git://192.168.8.5/butterfly025.git
Cloning into 'butterfly025'...
remote: Counting objects: 4846, done.
remote: Compressing objects: 100% (3256/3256), done.
fatal: read error: Invalid argument, 255.05 MiB | 1.35 MiB/s
fatal: early EOF
fatal: index-pack failed
I have added this config to my .gitconfig (shown below), but it did not help either.
I am using Git version 1.8.5.2.msysgit.0.
[core]
compression = -1
Answer 1:
First, turn off compression:
git config --global core.compression 0
Next, let's do a partial clone to truncate the amount of info coming down:
git clone --depth 1 <repo_URI>
When that works, go into the new directory and retrieve the rest of the clone:
git fetch --unshallow
or, alternately,
git fetch --depth=2147483647
Now, do a regular pull:
git pull --all
I think there is a glitch with msysgit in the 1.8.x versions that exacerbates these symptoms, so another option is to try with an earlier version of git (<= 1.8.3, I think).
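Putting those steps together, the whole sequence might look like this (using the repository URL from the question as an example):
# turn off compression for the transfer
git config --global core.compression 0
# shallow clone to keep the initial download small
git clone --depth 1 git://192.168.8.5/butterfly025.git
cd butterfly025
# retrieve the rest of the history
git fetch --unshallow        # or: git fetch --depth=2147483647
# bring everything up to date
git pull --all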
Answer 2:
This error may occur when Git needs more memory than it can get. You can add these lines to your global Git configuration file, which is .gitconfig in $USER_HOME, to fix the problem.
[core]
packedGitLimit = 512m
packedGitWindowSize = 512m
[pack]
deltaCacheSize = 2047m
packSizeLimit = 2047m
windowMemory = 2047m
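If you prefer not to edit the file by hand, the same settings can be applied from the command line; this is just a sketch mirroring the snippet above, so tune the values to your machine's RAM:
git config --global core.packedGitLimit 512m
git config --global core.packedGitWindowSize 512m
git config --global pack.deltaCacheSize 2047m
git config --global pack.packSizeLimit 2047m
git config --global pack.windowMemory 2047m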
Answer 3:
Finally solved it with git config --global core.compression 9.
From a BitBucket issue thread:
I tried almost five times, and it still happened.
Then I tried to use better compression and it worked!
git config --global core.compression 9
From the Git Documentation:
core.compression
An integer -1..9, indicating a default compression level. -1 is the zlib default. 0 means no compression, and 1..9 are various speed/size tradeoffs, 9 being slowest. If set, this provides a default to other compression variables, such as core.looseCompression and pack.compression.
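To check or undo the setting later, the standard git config options (not part of the quoted answer) are:
git config --global --get core.compression     # show the current value
git config --global --unset core.compression   # remove it and fall back to the default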
Answer 4:
I got this error when git ran out of memory.
Freeing up some memory (in this case: letting a compile job finish) and trying again worked for me.
Answer 5:
I tried all of those commands and none of them worked for me, but what did work was changing the Git URL to HTTP instead of SSH.
If you are cloning, do:
git clone <your_http_or_https_repo_url>
If you are pulling into an existing repo, do it with:
git remote set-url origin <your_http_or_https_repo_url>
Hope this helps someone!
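As a quick sanity check (the URL below is a placeholder), you can confirm the remote now points at HTTP(S):
git remote set-url origin https://example.com/your/repo.git
git remote -v    # both the fetch and push URLs should now show https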
Answer 6:
In my case it was a connection problem. I was connected to an internal Wi-Fi network in which I had limited access to resources. That allowed Git to start the fetch, but at a certain point it crashed.
This means it can be a network-connection problem. Check that everything is running properly: antivirus, firewall, etc.
The answer of elin3t is therefore important, because SSH improves the performance of the download, so that network problems can be avoided.
Answer 7:
In my case this was quite helpful:
git clone --depth 1 --branch $BRANCH $URL
This limits the clone to the specified branch only, which speeds up the process.
Hope this helps.
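For example, with hypothetical values for the branch and URL, the command expands to:
BRANCH=develop
URL=https://example.com/your/repo.git
git clone --depth 1 --branch "$BRANCH" "$URL"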
Answer 8:
As @ingyhere said:
Shallow Clone
First, turn off compression:
git config --global core.compression 0
Next, let's do a partial clone to truncate the amount of info coming down:
git clone --depth 1 <repo_URI>
When that works, go into the new directory and retrieve the rest of the clone:
git fetch --unshallow
or, alternately,
git fetch --depth=2147483647
Now, do a pull:
git pull --all
Then, to solve the problem of your local branch only tracking master,
open your Git config file (.git/config) in the editor of your choice.
Where it says:
[remote "origin"]
url=<git repo url>
fetch = +refs/heads/master:refs/remotes/origin/master
change the line
fetch = +refs/heads/master:refs/remotes/origin/master
to
fetch = +refs/heads/*:refs/remotes/origin/*
Do a git fetch, and Git will now fetch all of your remote branches.
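If you would rather not edit .git/config by hand, the same refspec change can be made from the command line (a sketch; run it inside the cloned repository):
git config remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"
git fetch --all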
Answer 9:
In my case nothing worked when the protocol was https, so I switched to ssh and made sure I cloned the repo from the last commit only, not the entire history, and only a specific branch. This helped me:
git clone --depth 1 "ssh:.git" --branch "specific_branch"
Answer 10:
Make sure your drive has enough space left.
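On Linux or macOS, a quick way to check the free space in the directory you are cloning into (a generic check, not from the original answer) is:
df -h .    # look at the "Avail" column for the filesystem you are cloning onto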
Answer 11:
Note that Git 2.13.x/2.14 (Q3 2017) does raise the default core.packedGitLimit, which influences git fetch:
The default packed-git limit value has been raised on larger platforms (from 8 GiB to 32 GiB) to save "git fetch" from a (recoverable) failure while "gc" is running in parallel.
See commit be4ca29 (20 Apr 2017) by David Turner (csusbdt).
Helped-by: Jeff King (peff).
(Merged by Junio C Hamano -- gitster -- in commit d97141b, 16 May 2017)
Increase core.packedGitLimit
When core.packedGitLimit is exceeded, Git will close packs. If there is a repack operation going on in parallel with a fetch, the fetch might open a pack and then be forced to close it because packedGitLimit has been hit. The repack could then delete the pack out from under the fetch, causing the fetch to fail.
Increase core.packedGitLimit's default value to prevent this.
On current 64-bit x86_64 machines, 48 bits of address space are available.
It appears that 64-bit ARM machines have no standard amount of address space (that is, it varies by manufacturer), and IA64 and POWER machines have the full 64 bits.
So 48 bits is the only limit that we can reasonably care about. We reserve a few bits of the 48-bit address space for the kernel's use (this is not strictly necessary, but it's better to be safe), and use up to the remaining 45.
No git repository will be anywhere near this large any time soon, so this should prevent the failure.
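If you are stuck on an older Git, you can raise the limit yourself; the value below mirrors the new 32 GiB default for 64-bit platforms, so adjust it to your system:
git config --global core.packedGitLimit 32g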
Answer 12:
I turned off all the downloads I was running at the same time, which probably freed up some space and cleared upload/download bandwidth.
Answer 13:
The git-daemon issue seems to have been resolved in v2.17.0 (verified with a non-working v2.16.2.1).
I.e. the workaround of selecting text in the console to "lock the output buffer" should no longer be required.
From https://github.com/git/git/blob/v2.17.0/Documentation/RelNotes/2.17.0.txt:
- Assorted fixes to \"git daemon\".
(merge ed15e58efe jk/daemon-fixes later to maint).
Answer 14:
I have the same problem. Following the first step above I was able to clone, but I could not do anything else: I couldn't fetch, pull, or check out old branches.
Each command ran much slower than usual, then died after compressing the objects.
I:\dev [master +0 ~6 -0]> git fetch --unshallow
remote: Counting objects: 645483, done.
remote: Compressing objects: 100% (136865/136865), done.
error: RPC failed; result=18, HTTP code = 20082 MiB | 6.26 MiB/s
fatal: early EOF
fatal: The remote end hung up unexpectedly
fatal: index-pack failed
This also happens when your refs are using too much memory. Limiting the fetch fixed this for me. Just add a depth limit to what you are fetching, like so:
git fetch --depth=100
This will fetch the files, but with only the last 100 commits of their history.
After this, you can do any command just fine and at normal speed.
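If you need more history later, the shallow clone can be deepened incrementally with standard git fetch options (not part of the original answer):
git fetch --deepen=500    # add 500 more commits of history
git fetch --unshallow     # or fetch the complete history in one go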
Answer 15:
This worked for me: setting up Google's nameserver, because no standard nameserver was specified, and then restarting networking (using tee so the append to /etc/network/interfaces runs with root permissions):
echo "dns-nameservers 8.8.8.8" | sudo tee -a /etc/network/interfaces && sudo ifdown venet0:0 && sudo ifup venet0:0
Answer 16:
None of these worked for me, but using Heroku's built-in tool did the trick.
heroku git:clone -a myapp
Documentation here: https://devcenter.heroku.com/articles/git-clone-heroku-app
Answer 17:
From a git clone, I was getting:
error: inflate: data stream error (unknown compression method)
fatal: serious inflate inconsistency
fatal: index-pack failed
After rebooting my machine, I was able to clone the repo fine.
Answer 18:
If you're on Windows, you may want to check the question git clone fails with "index-pack" failed?.
Basically, after running your git.exe daemon ...
command, select some text from that console window. Retry pulling/cloning; it might just work now!
See this answer for more info.