- you get an automatically provisioned, scoped GitHub API token (`GITHUB_TOKEN`), which is useful for automating releases and publishing artifacts and containers
- if your software is cross platform you can run jobs across a variety of OSes and CPU architectures concurrently, e.g. building and testing natively on all platforms
- you have access to a lot of contextual information about what triggered the job and the current state of the repo, which is handy for automating per-PR chores or release automation
- you can integrate some things into the GitHub web UI, such as having your linter annotate the PR line-by-line with flagged problems, or rendering test failures in the web page so you don't have to scan through a long log for them
- you have a small cache you can use to avoid redownloading/rebuilding files that have not changed between builds
Ideally, you do as much as possible in a regular tool that runs locally (make, scripts, whatever) and use the GitHub CI config only for the little bit of glue you need: the triggers, caching, and GitHub integrations
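As a sketch of that split, a workflow file can stay this thin (the cache key and `pipeline.sh` path here are illustrative, not from any real project):

```yaml
# .github/workflows/ci.yml -- thin glue: triggers, checkout, cache, then hand off
name: ci
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: .build-cache
          key: build-${{ hashFiles('**/lockfile') }}   # hypothetical lockfile
      - run: ./pipeline.sh   # all real build/test logic lives in the script
```

Migrating this to another forge means rewriting a dozen lines of glue, not the build logic.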
Pipelines are usually just a list of sequential steps. I have worked with a lot of different CI/CD tools, and pipelines are among the easiest things to move from one to another.
One example: for my personal Python projects, I use two GitHub actions named `pypa/gh-action-pypi-publish` [0] and `sigstore/gh-action-sigstore-python` [1] to sign my wheels, publish my wheels to PyPI, and have PyPI attest (and publicly display via check mark [2]) that the uploaded package is tied to my GitHub identity.
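For context, a minimal version of such a publish job looks roughly like this (the artifact name and versions are illustrative; `id-token: write` is the permission that enables the OIDC-based trusted publishing and attestations):

```yaml
# publish job sketch -- assumes wheels were built into dist/ in an earlier job
publish:
  runs-on: ubuntu-latest
  permissions:
    id-token: write   # OIDC token for PyPI trusted publishing / attestations
  steps:
    - uses: actions/download-artifact@v4
      with:
        name: dist
        path: dist/
    - uses: pypa/gh-action-pypi-publish@release/v1
```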
How would I even begin migrating this to another forge?
And that’s just a small part of the pipeline.
This is only a small part, but FWIW: you don’t need gh-action-sigstore-python to do the signing; gh-action-pypi-publish will do it automatically for you now.
(Source: I maintain the former and contributed the attestations change to the latter.)
sigstore is not a GitHub Actions-specific tool; you can use the Python client with any CI/CD runner. You can attest with PyPI attestations and publish with twine.
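A rough sketch of that on a generic runner; the tool names are the real `sigstore` and `twine` CLIs, but check their current docs for exact flags, since this area evolves quickly:

```shell
# on any CI runner with an OIDC identity (or interactively outside CI)
python -m pip install sigstore twine

# sign the built wheels with Sigstore
python -m sigstore sign dist/*.whl

# upload; recent twine versions can include attestation files via --attestations
twine upload --attestations dist/*
```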
When migrating, the steps don't have to use the same syntax and tools; for each step you can identify the desired outcome and reproduce it on another CI/CD system without actions from the GitHub Marketplace.
More importantly, you consciously made your pipeline non-portable by using GitHub Actions from the Marketplace. That is neither a requirement nor inevitable.
The question was "GitHub CI YAML vs. pipeline.sh", not "GitHub CI YAML vs. other forge’s YAML."
What I’m trying to say is that if you keep your build logic in `pipeline.sh` (and use GitHub CI only for calling into it), then you’re going to have an easier time migrating to another forge’s CI than in the alternative scenario, i.e. if your build logic is coded in GitHub CI YAML.
Obviously. But then you still have caching, passing data/artifacts between stages, workflow logic (like skipping steps if unnecessary), running on multiple platforms, and exposing test results/coverage to the system you are running in.
In a properly written pipeline, actually building the software is the least of what the CI is doing.
If your build is simple enough that you don’t need any of that - great. But pretending that the big CI systems never do anything except lock you in is a trifle simplistic.