Hacker News

What are the advantages of GitHub CI YAML over just a bash script, e.g. run: pipeline.sh?


- you have an automatic, managed GitHub API token which is useful for automating releases and publishing artifacts and containers

- if your software is cross platform you can run jobs across a variety of OSes and CPU architectures concurrently, e.g. building and testing natively on all platforms

- you have access to a lot of contextual information about what triggered the job and the current state of the repo, which is handy for automating per-PR chores or release automation

- You can integrate some things into the GitHub Web UI, such as having your linter annotate the PR line-by-line with flagged problems, or rendering test failures in the web page so you don't have to scan through a long log for them

- You have a small cache you can use to avoid redownloading/rebuilding files that have not changed between builds

Ideally you do as much as possible in a regular tool that runs locally (make/scripts/whatever) and you use the GitHub CI config for the little bit of glue that you need for the triggers, caching and GitHub integrations
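That glue can be quite small. A hedged sketch of such a minimal workflow (the cache path and key are illustrative assumptions, not something pipeline.sh actually requires):

```yaml
# Minimal glue: all build logic lives in pipeline.sh, the YAML only
# handles triggers, caching, and the runner environment.
name: ci
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: .build-cache   # hypothetical cache dir that pipeline.sh writes to
          key: build-${{ hashFiles('pipeline.sh') }}
      - run: ./pipeline.sh
```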


- You get an overview in the GitHub UI for each step and can expand/collapse each step to inspect its output.

- You can easily use GitHub Actions that others have contributed in your pipeline.

- You can modularize workflows and specify dependencies between them and control parallel executions.

I'm sure there are more. But the main advantage is you don't need to implement all these things yourself.


For #1, you can output section markers from any software: https://docs.github.com/en/actions/writing-workflows/choosin... (I've only used this feature with GitLab)
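On GitHub's side, those markers are the ::group::/::endgroup:: workflow commands, which any program can print to stdout. A sketch (the step content is made up):

```yaml
steps:
  - name: Install dependencies
    run: |
      echo "::group::Dependency install logs"   # collapsed by default in the UI
      make deps                                  # hypothetical noisy command
      echo "::endgroup::"
```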


Thanks, I didn't know about this!


That second one sounds more like a security risk to me than a feature.


One advantage for GitHub is that you’re less likely to migrate to another Git forge.


Pipelines are usually just a list of sequential steps. I have been working with a lot of different CI/CD tools, and they are among the easiest things to move from one to another.


One example: for my personal Python projects, I use two GitHub Actions named `pypa/gh-action-pypi-publish` [0] and `sigstore/gh-action-sigstore-python` [1] to sign my wheels, publish my wheels to PyPI, and have PyPI attest (and publicly display via check mark [2]) that the uploaded package is tied to my GitHub identity.

How would I even begin migrating this to another forge? And that’s just a small part of the pipeline.

[0]: https://github.com/marketplace/actions/pypi-publish

[1]: https://github.com/marketplace/actions/gh-action-sigstore-py...

[2]: https://pypi.org/project/itchcraft/
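For context, a sketch of what such a publish job can look like with the first action and Trusted Publishing (assumes the repo is registered as a trusted publisher on PyPI; the job layout is illustrative):

```yaml
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for OIDC-based Trusted Publishing
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install build && python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1
```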


This is only a small part, but FWIW: you don’t need gh-action-sigstore-python to do the signing; gh-action-pypi-publish will do it automatically for you now.

(Source: I maintain the former and contributed the attestations change to the latter.)


Sigstore is not a GitHub Actions-specific tool; you can use the Python client with any CI/CD runner. You can attest with PyPI attestations and publish with twine.

When migrating, the steps don't have to use the same syntax and tools; for each step you can identify the desired outcome and recreate it on a different CI/CD without actions from the GH marketplace.

More importantly, you consciously decided to make your pipeline non-portable by using GH Actions from the marketplace. That is neither a requirement nor inevitable.
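For example, a roughly equivalent job in another forge's CI (GitLab-CI-style syntax shown purely for illustration; assumes the sigstore-python and twine CLIs are installed and credentials are configured):

```yaml
publish:
  script:
    - python -m pip install build twine sigstore
    - python -m build
    - python -m sigstore sign dist/*   # keyless signing via the sigstore CLI
    - twine upload dist/*              # newer twine versions can also send attestations
```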


Which of the alternatives doesn't have its own unique solution?


The question was "GitHub CI YAML vs. pipeline.sh", not "GitHub CI YAML vs. other forge’s YAML."

What I’m trying to say is that if you keep your build logic in `pipeline.sh` (and use GitHub CI only for calling into it), then you’re going to have an easier time migrating to another forge’s CI than in the alternative scenario, i.e. if your build logic is coded in GitHub CI YAML.


Obviously. But then you still have caching, passing data/artifacts between stages, workflow logic (like skipping steps if unnecessary), running on multiple platforms, and exposing test results/coverage to the system you are running in.

Written properly, actually building the software is the least of what the CI is doing.

If your build is simple enough that you don’t need any of that - great. But pretending that the big CI systems never do anything except lock you in is a trifle simplistic.
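Passing artifacts between jobs is one of those pieces. A sketch in GitHub's syntax (the script names are hypothetical):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./pipeline.sh build       # hypothetical build entry point
      - uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/
  test:
    needs: build                       # runs only after build succeeds
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist/
      - run: ./run-tests.sh dist/      # hypothetical test entry point
```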



