To get a commit history that makes sense. The history isn't supposed to document the order in which you did the work, but why and how each change was made. When I'm knee-deep in some rewrite and realize I should have changed something else first, I can just go make that change, then come back and rebase.
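Roughly, that flow looks like this (branch names are just placeholders, not from any real repo):

    # park the half-finished work on the feature branch
    git commit -am "WIP: rewrite"
    # make the prerequisite change first
    git switch main
    # ...edit files, then commit the prerequisite...
    git commit -am "Extract the helper the rewrite needs"
    # come back and replay the rewrite on top of it
    git switch feature/rewrite
    git rebase main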
And in the feature branches/merge requests, I don’t merge, only rebase. Rebasing should be the default workflow. Merging adds so many problems for no good reason.
There are use cases for merging, but not as the normal workflow.
That is just not true. Merging is so much less work and the branch history clearly indicates when merging has happened.
With rebasing, the branch could have been rebased a million times, and you would have no idea when or where something got broken by hasty conflict resolution.
When conflicts happen, rebasing is equivalent to merging, just at the commit level instead of the branch level. So in the worst case, developers are met with conflict after conflict, which ends up being a confusing mental burden on less experienced devs, and certainly a "trust the process" kind of workflow for experienced ones as well.
The master branch never gets merged, so it is linear. Finding a bug is very simple with bisect. All commits are atomic, so the failing commit clearly shows the bug.
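On a linear history, a bisect session is trivial to run; a rough sketch (the tag and the test command are placeholders for whatever builds and tests your project):

    git bisect start
    git bisect bad HEAD            # the current commit is broken
    git bisect good v1.4.0         # placeholder: last known-good tag
    git bisect run make test       # let git walk the history automatically
    git bisect reset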
If you want to keep track of which commits belong to a certain PR, you can still have an empty merge commit at the end of the rebase. GitLab will add that for you automatically.
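Done by hand, that empty merge commit on top of a rebased branch is just a no-fast-forward merge, which is, as far as I know, what GitLab's semi-linear merge setting produces (branch name is a placeholder):

    # the feature branch has already been rebased onto main
    git switch main
    git merge --no-ff feature/rewrite    # record a merge commit even though a
                                         # fast-forward would have been possible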
The "hasty conflict resolution" makes a broken merge waaaay harder to fix than a broken rebase.
And rebasing makes you take care of each conflict one commit at a time, which makes it orders of magnitude easier to get them right, compared to trying to resolve them all in a single merge commit.
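The per-commit loop looks roughly like this; each stop only contains the conflicts introduced by that one commit (file path is a placeholder):

    git rebase main
    # rebase stops at the first commit that conflicts
    # ...resolve the conflict in the affected files...
    git add path/to/conflicted-file
    git rebase --continue     # repeat until every commit has been replayed
    # or give up and get the branch back exactly as it was:
    git rebase --abort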
I commit anything and everything. Build fails? I still commit. If there is a stopping point and I feel like I might want to come back to this point, I commit.
I am violently against any pre-commit hook that runs on all branches. What I do on my machine on my personal branch is none of your business.
I create new branches early and often. I take upstream changes as they land on the trunk.
Anyway, this long-winded tale was to explain why I rebase. My commits aren't worth anything more than stopping points.
At the end, I create a nice branch name and there is usually only one commit before code review.
Any subsequent commits on the branch are inherently rebased onto the squashed commit.
Rebasing is kind of a shorthand for cherry-picking, fixing up, rewording, squashing, dropping, etc., because these things don't make much sense in isolation.
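A sketch of how those operations show up in one interactive rebase (the todo list below is illustrative, not from a real repo):

    git rebase -i main
    # git opens a todo list along these lines:
    #   pick   1a2b3c4 Add parser
    #   fixup  5d6e7f8 fix typo            # melt into the previous commit
    #   reword 9a0b1c2 Add CLI flag        # stop to edit the commit message
    #   squash 3d4e5f6 more CLI work       # combine, keeping both messages
    #   drop   7a8b9c0 debug printfs       # throw the commit away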
I don't want to see any irrelevant history several years later, so I enforce linear history on the main branch in all projects that I work on. So far, nobody has complained, and I've never seen a legitimate reason to deviate from this principle if you follow a trunk-based release model.
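One way to enforce it, besides the hosting platform's require-linear-history / fast-forward-only settings, is a CI check that simply rejects merge commits; a minimal sketch, assuming origin/main is your trunk:

    # fail the pipeline if any merge commit sneaked into the pushed range
    if [ -n "$(git rev-list --merges origin/main..HEAD)" ]; then
        echo "merge commits are not allowed on main" >&2
        exit 1
    fi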
Your real commit history is irrelevant. I don't care too much about how you came to a particular state.
The overall project history, though, the clarity of the changes made, and the fact that bisecting works reliably are what matter to me.
Or, put another way: the important unit is whatever your unit of code review is. If you're not reviewing and checking individual commits, they're just noise in the history; the commit messages are not clear, and I cannot reliably bisect on them (since nobody is checking that things build).
Why would you lose commit history? You are just picking up a set of commits and reapplying them. Of course you can use rebase for more things, but rebase does not equal losing commit history.
Unlike some other common operations that can be easily cargo-culted, rebasing is somewhat hard to do correctly when you don't understand git, so people who don't understand git get antagonistic towards it.
If it is something like a repo for configuration management, I can understand that, because it's often a lot of very small changes, so every second commit would be a merge, and it's just easier to read that way.
Hardened images are cool, definitely, but I'm not sure what that actually means. Just systems with the latest patches, or stricter config rules as well? For example: would any of these images have mitigated or even prevented Shai-Hulud [12]?
Docker Hardened Images are built from scratch with only the minimal packages needed to run the image. The hardened images didn't contain any of the packages compromised by Shai-Hulud.
Hardened base images don't restrict what you add on top of them. That's where scanners like Docker Scout, Trivy, Grype, and more come in to review the complete image that you have built.
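For example, any of those can be pointed at the final image you built (the image reference below is a placeholder):

    docker scout cves registry.example.com/myapp:1.2.3
    trivy image registry.example.com/myapp:1.2.3
    grype registry.example.com/myapp:1.2.3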
The difference is that they’re charging extra for it, so people want to see benefits they could take to their management to justify the extra cost. The NPM stuff has a lot of people’s attention right now so it’s natural to ask whether something would have blocked what your CISO is probably asking about since you have an unlimited number of possible security purchase options. One of the Docker employees mentioned one relevant feature: https://socket.dev/blog/socket-firewall-now-available-in-doc...
Update the analogy to “like EC2 but we handle the base OS patching and container runtime” and you have Fargate.
A company I worked for had its own TLS-intercepting endpoint, whose certificate you can easily roll out on Windows. Unfortunately, every TLS connection that does not use the Windows certificate store breaks because of that, so Maven, npm, et al. won't work.
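The usual workaround is to hand the corporate CA certificate to each tool separately, since they use their own trust stores; a rough sketch, with placeholder paths and the default truststore password:

    # npm: trust the corporate root CA explicitly
    npm config set cafile /path/to/corp-root-ca.pem

    # Maven (JVM): import the CA into a truststore and point the JVM at it
    keytool -importcert -alias corp-root-ca \
            -keystore corp-truststore.jks -storepass changeit \
            -file /path/to/corp-root-ca.pem
    export MAVEN_OPTS="-Djavax.net.ssl.trustStore=$PWD/corp-truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"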
I've seen several introductions of new ERPs in companies. Usually they wanted the same processes they already had, just with the new software. The customization turned out to be a nightmare, as the consultants usually accepted their wishes and the programmers had to bend the ERP system accordingly. It was never on budget or on time.
The left-pad fiasco seems to have been the only time npm changed a policy in reaction to a security problem; since then, supply chain attacks just seem to be part of the npm ecosystem.