Optimizing Git Cloning of a Large Repository with On-Demand Access to Version History

I'm working with a large private Git repository that has a single branch, and I need to optimize the cloning process. Ideally, the clone itself should be fast, and I should then be able to access the Git version history and earlier versions of individual files on demand, without much delay.
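To make "on demand" concrete, this is roughly the access pattern I have in mind once the repository is cloned (the file path src/main.c and the commit placeholder are just examples):

    # browse the history of the single branch
    git log --oneline

    # history of one particular file
    git log --oneline --follow -- src/main.c

    # read an old version of a file without checking it out
    git show <commit>:src/main.c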

Any suggestions or ideas would be greatly appreciated. Thanks in advance!

Here's what I've tried so far (the exact commands are sketched after the list):

  1. I initially tried a git clone with --depth=1 and then a separate git fetch later, on demand, to get the log and the previous versions of the files. For a large repository, that second step turned out to be the bottleneck.

  2. Next, I tried a clone with --depth=1 followed by a shallow fetch that only deepened the history by a fixed number of commits. It wasn't much faster than the previous approach, and I still lost the older version history of the files.

  3. Finally, I tried a partial (blobless) clone using git clone --filter=blob:none <path_to_folder_or_file>. However, after the clone plus the checkout needed to materialize the missing blob files, this ended up even slower than a regular clone.
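For reference, these are roughly the commands behind the three attempts. The repository URL is a placeholder, I've written the on-demand fetch of attempt 1 as git fetch --unshallow, and the depth in attempt 2 (--deepen=50) is arbitrary, so my exact invocations may have differed slightly:

    # 1. Shallow clone, then fetch the full history later on demand
    git clone --depth=1 https://example.com/big-repo.git
    cd big-repo
    git fetch --unshallow              # this second step is the bottleneck

    # 2. Shallow clone, then deepen by a fixed number of commits
    git clone --depth=1 https://example.com/big-repo.git
    cd big-repo
    git fetch --deepen=50              # not much faster, older history still missing

    # 3. Partial (blobless) clone: full commit history, blobs fetched lazily
    git clone --filter=blob:none https://example.com/big-repo.git
    cd big-repo
    git checkout <older-commit>        # triggers blob downloads; overall slower than a full clone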
