Hacker News | wolfi1's comments

why do people rebase so often? shouldn't it be excluded from the usual workflows, since you lose commit history as well?

To get a commit history that makes sense. It’s not supposed to document the order in which you did the work, but why and how a change was made. When I’m knee-deep in some rewrite and realize I should have changed something else first, I can just go do that change, then come back and rebase.
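That mid-rewrite dance can be sketched with plain git commands. This is a hypothetical sketch in a throwaway repo; the branch and file names are made up:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email dev@example.com && git config user.name dev
echo base > app.txt && git add . && git commit -qm "initial"

# Knee-deep in a rewrite on a feature branch
git switch -qc rewrite
echo wip >> app.txt && git commit -qam "WIP: rewrite"

# Realize a prerequisite change should have landed first: do it on main
git switch -q main
echo helper > lib.txt && git add . && git commit -qm "extract helper"

# Come back and replay the rewrite on top of it
git switch -q rewrite
git rebase -q main
git log --oneline main..rewrite   # only the rewrite commit sits on top
```

After the rebase, the prerequisite reads as if it had always come first, which is exactly the "why and how" ordering the comment describes.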

And in the feature branches/merge requests, I don’t merge, only rebase. Rebasing should be the default workflow. Merging adds so many problems for no good reason.

There are use cases for merging, but not as the normal workflow.


That is just not true. Merging is so much less work and the branch history clearly indicates when merging has happened.

With rebasing, there could be a million times the branch was rebased and you would have no idea when and where something got broken by hasty conflict resolution.

When conflicts happen, rebasing is equivalent to merging, just at the commit level instead of the branch level. So in the worst case, developers are met with conflict after conflict, which ends up being a confusing mental burden on less experienced devs, and certainly a “trust the process” kind of workflow for experienced ones as well.


The master branch never gets merged, so it is linear. Finding a bug is very simple with bisect. All commits are atomic, so the failing commit clearly shows the bug.

If you want to keep track of which commits belong to a certain PR, you can still have an empty merge commit at the end of the rebase. GitLab will add that for you automatically.
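That trailing marker commit is just a `--no-ff` merge of an already-rebased branch (roughly what GitLab's semi-linear merge setting produces). A sketch in a throwaway repo:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "initial"

# A feature branch that has already been rebased on top of main
git switch -qc feature
git commit -q --allow-empty -m "feature work"
git switch -q main

# A fast-forward would leave no trace; --no-ff records the branch boundary
git merge -q --no-ff -m "Merge branch 'feature'" feature
git log --oneline --merges   # exactly one merge marker
```

The content history is still linear; the merge commit carries no diff of its own and only marks where the branch ended.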

The “hasty conflict resolution” makes a broken merge waaaay harder to fix than a broken rebase.

And rebasing makes you take care of each conflict one commit at a time, which makes it orders of magnitude easier to get them right, compared to trying to resolve them all in a single merge commit.
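A sketch of that per-commit resolution: the rebase halts at the single conflicting commit, you fix exactly that hunk, and continue. File contents here are made up:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email dev@example.com && git config user.name dev
echo one > f.txt && git add . && git commit -qm "initial"

git switch -qc feature
echo two > f.txt && git commit -qam "feature: change f"
git switch -q main
echo three > f.txt && git commit -qam "main: change f"

# The rebase stops on the one commit that conflicts...
git switch -q feature
git rebase main 2> /dev/null || true
# ...so you resolve exactly that commit and move on
echo merged > f.txt && git add f.txt
GIT_EDITOR=true git rebase --continue 2> /dev/null
git log --oneline   # the feature commit now sits on top of main's
```

With a long-lived branch there would be one such stop per conflicting commit, each small, instead of one giant all-at-once resolution in a merge commit.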


Linear history is nice, but it is lacking the conflict resolutions. They are never committed, and neither are the “fix rebase” instances.

Having a “fix broken merge” commit makes it explicit that there was an issue that was fixed.

Rebase sometimes seems like an attempt at saving face.


That’s the whole point. You do it properly, so there IS no conflict.

I write really poopy commit messages. Think "WIP" type nonsense. I branch off of the trunk, even my branch name is poopy like

feature/{first initial} {last initial} DONOTMERGE {yyyy-MM-dd-hh-mm-ss}

Yes, the branch name literally says do not merge.

I commit anything and everything. Build fails? I still commit. If there is a stopping point and I feel like I might want to come back to this point, I commit.

I am violently against any pre-commit hook that runs on all branches. What I do on my machine on my personal branch is none of your business.

I create new branches early and often. I take upstream changes as they land on the trunk.

Anyway, this long-winded tale was to explain why I rebase. My commits aren't worth anything more than stopping points.

At the end, I create a nice branch name and there is usually only one commit before code review.


Isn't your tale more about squashing than rebasing?

Any subsequent commits and the branch are inherently rebased on the squashed commit.

Rebasing is kind of a short hand for cherry-picking, fixing up, rewording, squashing, dropping, etc. because these things don't make sense in isolation.


I guess my point is that I disagree that rebasing should be shorthand for all these things that aren't rebasing.

Personally I squash using git rebase -i
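For the curious, that squash can be scripted. Below, `GIT_SEQUENCE_EDITOR` is a non-interactive stand-in for hand-editing the `git rebase -i` todo list (the `sed -i` form assumes GNU sed); it turns every commit after the first into a fixup:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "initial"

git switch -qc feature
for msg in "WIP" "WIP again" "works now"; do
  echo "$msg" >> notes.txt && git add . && git commit -qm "$msg"
done

# Stand-in for interactively marking lines 2..n as "fixup" in the todo list
GIT_SEQUENCE_EDITOR="sed -i '2,\$s/^pick/fixup/'" git rebase -q -i main
git log --oneline main..feature   # a single squashed commit
```

Interactively you would do the same by changing `pick` to `fixup` (or `squash`, to keep the messages) on every line after the first.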

I don't want to see any irrelevant history several years later, so I enforce linear history on the main branch in all projects that I work on. So far, nobody complained, and I've never seen a legitimate reason to deviate from this principle if you follow a trunk based release model.
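Enforcing linear history is usually a server-side branch-protection setting ("require linear history" on GitHub, a fast-forward-only merge method on GitLab). A local sketch of the same guard, using git's own `merge.ff only` config:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email dev@example.com && git config user.name dev
# Refuse any merge that cannot fast-forward, i.e. anything that
# would put a merge commit on this branch
git config merge.ff only
git commit -q --allow-empty -m "initial"

git switch -qc feature && git commit -q --allow-empty -m "feature"
git switch -q main && git commit -q --allow-empty -m "diverged"

# The branches have diverged, so the merge is rejected
if git merge -q feature 2> /dev/null; then echo merged; else echo "refused: rebase first"; fi
```

The effect is the same as the server setting: contributors must rebase their branch onto the tip before it can land, so main stays linear.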

Your real commit history is irrelevant. I don't care too much about how you came to a particular state.

The overall project history though, the clarity of changes made, and that bisecting reliably works are important to me.

Or another way; the important unit is whatever your unit of code review is. If you're not reviewing and checking individual commits, they're just noise in the history; the commit messages are not clear and I cannot reliably bisect on them (since nobody is checking that things build).


why would you lose commit history? You are just picking up a set of commits and reapplying them. Of course you can use rebase for more things, but rebase does not equal losing commit history.

I think that only the most absolutely puritan git workflows wouldn’t allow a local rebase.

Because Gerrit.

But even if I wasn't using Gerrit, sometimes it's the easiest way to fix branches that are broken, or to restructure your work in a clearer way.


The sum of the re-written changes still amount to the same after a rebase. When would you need access to the pre-rebase history, and to what end?

Well, sometimes you do if you made a mistake, but that's already handled by the reflog.
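A sketch of that safety net: even after a hard reset throws a commit away, the reflog still remembers where the branch pointed, so nothing is actually lost for a while:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email dev@example.com && git config user.name dev
echo v1 > f.txt && git add . && git commit -qm "good commit"
echo v2 > f.txt && git commit -qam "commit I will lose"

# The "mistake": throw the last commit away
git reset -q --hard HEAD~1

# The reflog records every place HEAD has been, including the lost tip
git reflog -2
git reset -q --hard 'HEAD@{1}'
git log -1 --format=%s   # prints: commit I will lose
```

The same trick recovers a pre-rebase state: the reflog entry just before the rebase started points at the old branch tip.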

really; I keep reading about all the problems people have “every time I rebase” and I wonder what tomfoolery they’re really up to

Unlike some other common operations that can be easily cargo-culted, rebasing is somewhat hard to do correctly when you don't understand git, so people who don't understand git get antagonistic towards it.

If it is something like a repo for configuration management, I can understand that, because it's often a lot of very small changes and so every second commit would be a merge; it's just easier to read that way.

... for code, honestly no idea


hardened images are cool, definitely, but I'm not sure what it actually means? Just systems with the latest patches, or stricter config rules as well? For example: would any of these images have mitigated or even prevented Shai-Hulud [12]?

Docker Hardened Images integrate Socket Firewall, which provides protection from threats like Shai-Hulud during build steps. You can read our partnership announcement over here: https://socket.dev/blog/socket-firewall-now-available-in-doc...

Docker Hardened Images are built from scratch with the minimal packages to run the image. The hardened images didn't contain any compromised packages for Shai-Hulud.

https://www.docker.com/blog/security-that-moves-fast-dockers...

Note: I work at Docker


yeah, but if you had installed your software with npm, would the postinstall script have been executed?
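One hedged mitigation for exactly this concern, independent of any base image: npm can be told never to run lifecycle scripts such as postinstall. A minimal `.npmrc` sketch (per project or per user; the same flag exists as `npm ci --ignore-scripts`):

```ini
; .npmrc — skip preinstall/install/postinstall lifecycle scripts entirely
ignore-scripts=true
```

Some packages genuinely need their install scripts to build native code, so this trades convenience for safety.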

Hardened base images don't restrict what you add on top of them. That's where scanners like Docker Scout, Trivy, Grype, and more come in to review the complete image that you have built.

Of course? They are only concerned with the base image. What you do with it is your responsibility.

This would be like expecting AWS to protect your EC2 instance from a postinstall script


The difference is that they’re charging extra for it, so people want to see benefits they could take to their management to justify the extra cost. The NPM stuff has a lot of people’s attention right now so it’s natural to ask whether something would have blocked what your CISO is probably asking about since you have an unlimited number of possible security purchase options. One of the Docker employees mentioned one relevant feature: https://socket.dev/blog/socket-firewall-now-available-in-doc...

Update the analogy to “like EC2 but we handle the base OS patching and container runtime” and you have Fargate.


so what is the takeaway message? Fire only the senior devs, because they cost too much and can't use AI?

A company I worked for had their own endpoint which you can easily introduce in Windows. Unfortunately, every other TLS connection that does not use the Windows certificate store breaks because of that, so Maven, npm et al. won't work.


wouldn't it be more honest to call it M355 rather than M365?


and after beta?


interestingly enough, the antidote to Tylenol poisoning is ACC (acetylcysteine), a drug many could have in their household


the acronym is NAC, for N-acetyl-L-cysteine (the L is implied). With ACC you could theoretically have S-acetyl-L-cysteine, and that's not the antidote!


I, for one, use Notepad++ with Wine. I'm accustomed to it.


I've seen several introductions of new ERPs in companies. Usually they wanted the same processes they already had, just with the new software. The customizing turned out to be a nightmare, as the consultants usually accepted their wishes and the programmers had to bend the ERP system accordingly; it was never on budget or on time.


the left-pad fiasco seems to have been the only time npm changed a policy and reacted to a security problem; since then, it seems that supply chain attacks just belong to the npm ecosystem

