GitHub is already a bazel package management system though? If the package is a bazel workspace, all you need to do is add an http_archive rule pointing to that GitHub repo
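For reference, a minimal sketch of what that looks like in a WORKSPACE file (the repo name, URL, and sha256 below are placeholders, not a real package):

```python
# WORKSPACE — hypothetical example; URL and sha256 are placeholders.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "some_lib",
    urls = ["https://github.com/example/some_lib/archive/refs/tags/v1.2.3.tar.gz"],
    strip_prefix = "some_lib-1.2.3",
    sha256 = "0000000000000000000000000000000000000000000000000000000000000000",
)
```

After that, targets from the dependency are referenced as `@some_lib//...`.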
That would work if, like golang, bazel was the "default" package manager for everyone. Right now it's not easy to get, for example, vulkan or muslc or qt as a bazel package.
It's also not easy to publish a version of your package (A) that depends on another package (B). This creates a diamond-dependency situation when a third package (C) depends on both (A->B, C->A, C->B). So, some code needs to resolve these conflicts and reproducibly identify the exact hashes of everything to pull in, to make it a non-manual process.
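A sketch of what that resolution step has to do, over a hypothetical dependency graph (real resolvers, e.g. minimal version selection, are more involved; the registry data and hashes here are made up):

```python
# Minimal "highest version wins" resolution sketch. The shape is the point:
# pick exactly one version per package across the graph, pinned to a hash.

# Hypothetical registry: maps (package, version) to its archive hash and
# the requirements of that pinned version.
REGISTRY = {
    ("B", "1.0"): {"sha256": "aaa...", "deps": []},
    ("B", "2.0"): {"sha256": "bbb...", "deps": []},
    ("A", "1.0"): {"sha256": "ccc...", "deps": [("B", "1.0")]},
}

def resolve(root_deps):
    """Return {package: (version, sha256)} with one version per package."""
    chosen = {}
    stack = list(root_deps)
    while stack:
        pkg, ver = stack.pop()
        # Diamond case: keep the highest requested version of each package.
        if pkg in chosen and chosen[pkg] >= ver:
            continue
        chosen[pkg] = ver
        stack.extend(REGISTRY[(pkg, ver)]["deps"])
    return {p: (v, REGISTRY[(p, v)]["sha256"]) for p, v in chosen.items()}

# C depends on A 1.0 (which wants B 1.0) and on B 2.0 directly:
# both end up pinned once, with B resolved to 2.0.
print(resolve([("A", "1.0"), ("B", "2.0")]))
```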
Also, something great about the design docs linked in my other post: there's a presubmit.yaml standard, so pulling in a library will include tests that bazel will run for whatever arch you're compiling for. For instance, say you pull in sqlite and need to build it for RISC-V. Before, you just had to hope that sqlite worked correctly on your arch; now you'll be able to test those situations in CI with RBE runners for all architectures.
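Roughly the shape of such a file, as I understand it from the design docs (field names and platform labels here are illustrative, not authoritative):

```yaml
# Hypothetical presubmit.yml for a registry entry — a build/test matrix
# the registry's CI runs against each platform.
matrix:
  platform:
    - ubuntu2004
    - macos
    - windows
tasks:
  verify_targets:
    platform: ${{ platform }}
    build_targets:
      - "@sqlite3//..."
    test_targets:
      - "@sqlite3//:all_tests"
```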
> That would work if, like golang, bazel was the "default" package manager for everyone. Right now it's not easy to get, for example, vulkan or muslc or qt as a bazel package.
I agree, but I don't think a "Bazel package management system" would solve this issue, because the real problem is getting people to buy into bazel in the first place.
> It's also not easy to publish a version of your package (A) that depends on another package (B). This creates a diamond-dependency situation when a third package (C) depends on both (A->B, C->A, C->B). So, some code needs to resolve these conflicts and reproducibly identify the exact hashes of everything to pull in, to make it a non-manual process.
This is a good point. However, I think effort would realistically be better spent right now on making it easy to bazelize existing code. I have (unfortunately) never been able to pull in an external library without manually bazelizing it, and the diamond problem only really bites once bazel picks up enough momentum in OSS that you're likely to find an external library that is already bazelized.
How do the existing repos that do have this dependency structure solve the problem? For example there are loads of packages that depend individually on Abseil. If my package uses Abseil and it uses tcmalloc, it also uses Abseil by way of tcmalloc, but in practice this does not seem to cause trouble.
Each dependency appears as a "repository", so as a bazel target it will look like "@<thing>//some/target:file". Everything refers to a dep by its workspace/repository name and exports a `repository.bzl` or `workspace.bzl` file that your WORKSPACE file `load()`s and calls a function from.
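The convention looks roughly like this (library and dependency names are illustrative):

```python
# somelib's workspace.bzl — the dependency exports a function that
# declares its own transitive deps, pinned by hash.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

def somelib_deps():
    http_archive(
        name = "zlib",
        urls = ["https://zlib.net/zlib-1.2.13.tar.gz"],
        sha256 = "...",  # pinned hash goes here
    )

# Then, in the consumer's WORKSPACE (after declaring @somelib itself):
#
#   load("@somelib//:workspace.bzl", "somelib_deps")
#   somelib_deps()
```

Every consumer has to know about and call each dep's function, which is part of why this is manual and doesn't compose on its own.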
It does seem a bit high touch, but don't I also have the alternative of just cloning third party code into my repo and bazelizing it myself? I've certainly seen that done, and it's what Google does internally as well.
It's possible, and it's what I've done quite a bit when using bazel, but it makes code sharing very difficult. I think the internal desire at Google likely comes from TensorFlow and Cloud wanting to ship code easily to the OSS world. One of the reasons PyTorch is taking off is that people can build it easily!
Not every package (especially core system packages, like zlib/openssl/glibc/...) is on GitHub or wants to pull Bazel buildfiles into its source tree. As such, there's no guaranteed canonical-upstream-repo:buildfile-repo mapping, so you need some way to organize, keep track of what's where, and make sure things work well together.