In the case of an `HTTPError`, the error was printed to stdout (and therefore gobbled up by bazel instead of being surfaced to the user).
While I was at it, I also added a timeout to the `urlopen` calls and improved
the `no endpoints found` error message.
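In sketch form, the fix looks like this (the request and the timeout value are placeholders, not the actual probe code):

```python
import sys
from urllib.error import HTTPError
from urllib.request import urlopen

def fetch(request, timeout=20):
    try:
        with urlopen(request, timeout=timeout) as response:
            return response.read()
    except HTTPError as e:
        # report on stderr: stdout is reserved for the probe's own output,
        # which the calling bazel rule consumes
        print(f"LFS server returned {e.code}: {e.reason}", file=sys.stderr)
        sys.exit(1)
```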
Even if the repository cache was prefilled, the LFS rules were still querying
the LFS URLs.
The strategy is now to first try to fetch the files from the repository
cache (which can be done by passing an empty URL list and `allow_fail = True`
to `repository_ctx.download`), and to run the LFS protocol only if that
fails. Cache lookups require the expected sha256 hashes up front, which is
made possible by enhancing `git_lfs_probe.py` with a `--hash-only` flag.
This also acts as an optimization: no unneeded access (including the
somewhat slow SSH call) is performed when the repository cache is warm.
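A rough Starlark sketch of that flow (the helper names `_probe_hash` and `_probe_urls` wrapping `git_lfs_probe.py`, and the output name, are made up for illustration; `repository_ctx.download` with an empty URL list and `allow_fail` is the real mechanism):

```python
def _lfs_file_impl(repository_ctx):
    # cheap: with --hash-only the probe just reads the LFS pointer files,
    # so no network or SSH access is needed
    sha256 = _probe_hash(repository_ctx)  # hypothetical helper

    # an empty URL list means the download can only be satisfied from the
    # repository cache; allow_fail turns a cache miss into a soft failure
    res = repository_ctx.download(
        url = [],
        output = "file",  # illustrative output name
        sha256 = sha256,
        allow_fail = True,
    )
    if not res.success:
        # cache miss: run the full LFS protocol to get short-lived URLs,
        # then download through bazel's download manager
        repository_ctx.download(
            url = _probe_urls(repository_ctx),  # hypothetical helper
            output = "file",
            sha256 = sha256,
        )
```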
This reintroduces, in improved form, the lazy LFS file rules that were
removed in https://github.com/github/codeql/pull/16117.
The new rules make the actual file download go through bazel's download
manager (as in the sketch above), which provides:
* caching into the repository cache
* sane limiting of concurrent downloads
* retries
The bulk of the work is done by `git_lfs_probe.py`, which uses the LFS
protocol (authenticating via SSH) to output short-lived download URLs that
can be consumed by `repository_ctx.download`.
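For context, the heart of that exchange looks roughly like this (a simplified sketch, not the actual script; host, repository path, and object list are placeholders):

```python
import json
import subprocess
from urllib.request import Request, urlopen

def get_download_urls(host, repo_path, objects):
    # obtain a short-lived endpoint and auth header over SSH
    # (the standard `git-lfs-authenticate` handshake)
    auth = json.loads(subprocess.check_output(
        ["ssh", f"git@{host}", "git-lfs-authenticate", repo_path, "download"]
    ))
    # LFS batch API: exchange oid/size pairs for signed download URLs
    request = Request(
        auth["href"] + "/objects/batch",
        data=json.dumps({
            "operation": "download",
            "transfers": ["basic"],
            "objects": objects,  # [{"oid": "<sha256>", "size": <bytes>}, ...]
        }).encode(),
        headers={
            "Accept": "application/vnd.git-lfs+json",
            "Content-Type": "application/vnd.git-lfs+json",
            **auth.get("header", {}),
        },
    )
    with urlopen(request, timeout=20) as response:
        batch = json.load(response)
    return [o["actions"]["download"]["href"] for o in batch["objects"]]
```

Because those URLs expire quickly, the rule feeds them straight into `repository_ctx.download` rather than persisting them anywhere.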