I created a large number of files with a Python script (test.py) that I wrote primarily to benchmark Git.
The results are very surprising, especially the difference between Windows and Linux.
The script creates 12 directories with 512 files in each; each file is about 2 to 4 kB. Including the Git objects, the repository amounts to about 12k files.
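The script itself is not reproduced here, but a minimal sketch of what such a test.py could look like is below, assuming it writes random content and, on a later run, rewrites a subset of the existing files (directory and file names are purely illustrative):

#!/usr/bin/env python3
# Sketch of a generator like the one described above (names are illustrative):
# 12 directories x 512 files, 2-4 kB of random content each; on a second
# run it rewrites ~10% of the existing files so Git sees modifications.
import os
import random

N_DIRS, N_FILES = 12, 512
for d in range(N_DIRS):
    dirname = f"dir{d:02d}"
    os.makedirs(dirname, exist_ok=True)
    for f in range(N_FILES):
        path = os.path.join(dirname, f"file{f:03d}.txt")
        if not os.path.exists(path) or random.random() < 0.1:
            with open(path, "wb") as fh:
                fh.write(os.urandom(random.randint(2048, 4096)))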
I benchmarked:
- Time to add all the files to Git (git add .)
- Time to check out a previous commit (git checkout HEAD~1)
- Time to restore the working tree (git checkout .)
- Time to copy the repository on the same SSD (cp -r)
- Time to copy the repository to an external SATA HDD (NTFS)
I did this with the very same repository on both Windows 10 and Linux:
Operation               Linux    Windows    Ratio
----------------------  -------  ---------  -----
1. git add .            0.47s    21.7s      x46
2. git checkout HEAD~1  0.35s    16.2s      x46
3. git checkout .       0.40s    20.5s      x50
4. cp -r ssd->ssd       0.35s    1m14s      x211
5. cp -r ssd->hdd       4.90s    6m25s      x78
The operations were performed in the following order:
$ mkdir test
$ cp test.py test && cd test
$ ./test.py # Creation of the files
$ git init
$ time git add . # (1)
$ git commit -qam 1
$ ./test.py # Alter some files
$ git commit -qam 2
$ cd ..
$ time cp -r test /media/hdd/ # (5)
$ time cp -r test test2 # (4)
$ cd test
$ time git checkout HEAD~1 # (2)
$ ./test.py
$ git checkout master
$ git reset --soft HEAD~1
$ time git checkout . # (3)
The benchmark was done on the same PC (using dual boot).
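One note on the timing method: the time builtin used above exists in Bash (e.g. Git Bash on Windows) but not in cmd.exe, so a small cross-platform wrapper like this hypothetical timeit.py gives wall-clock numbers measured the same way on both systems:

#!/usr/bin/env python3
# Hypothetical helper: run the given command and report its wall-clock time,
# so measurements on Linux and Windows are taken identically.
import subprocess
import sys
import time

start = time.perf_counter()
subprocess.run(sys.argv[1:], check=True)
print(f"elapsed: {time.perf_counter() - start:.2f}s", file=sys.stderr)

It would be used as python timeit.py git add . in place of time git add .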
Why are the differences so large? I can hardly believe it.