I can't say that I've used gitflow in anger. That said, I always assumed the full complexity of the approach exists to address tracking multiple concurrent releases of a product. That's extremely uncommon in our increasingly SaaS world, but I imagine having so many branches, with commits moving laterally between them, would be invaluable for backporting security fixes and the like.
For the rest of us, trunk-based development with feature/fix branches is more than enough.
- Change user-agent to a desktop browser - any will do
- (optional) run social fixer while you're at it
I completely understand that iOS probably won't let you do this. I've been doing this on Android with Firefox, and the web experience on a phone is... functional. Since the site thinks it's a desktop, the page layout doesn't always gracefully fit a portrait form factor. Landscape mode helps in those cases.
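For reference, in Firefox builds that expose about:config (desktop, and some Android channels), the usual lever is the `general.useragent.override` pref. The UA string below is just an illustrative desktop Chrome string, not anything canonical:

```
// about:config (create as a String pref if it doesn't exist)
general.useragent.override = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
```

Clearing the pref restores the browser's default user-agent.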
What I find truly amazing is how rare it is to see a home console move into an arcade platform, instead of the other way around. Almost always, the home system was derived from lessons learned from more expensive, rugged, and elaborate arcade hardware.
Sometimes this overlap was quite profound, but never 100%. NeoGeo home consoles famously used the same hardware and software as their arcade counterparts, but the game cartridges were not pin-compatible. The Nintendo VS. System was technically the same as a Famicom/NES, but not the same build, and the software has subtle differences. Perhaps the Nintendo PlayChoice would count, but again, it's not like they used NES mainboards to build those.
So, the idea of taking a Nintendo console mainboard and grafting it to SEGA-designed components so it can run in a dedicated arcade cabinet, is just wild to me.
The era of bespoke arcade hardware really died in the late 90s. With a declining market, it couldn't keep up with consoles and PCs. By the early 2000s, arcade hardware was mostly console-derived: beyond the Sega trio of NAOMI (Dreamcast), Chihiro (Xbox), and Triforce (GameCube), Konami and Namco mostly used PlayStation 2-derived hardware. By the late 00s we were mostly looking at PC-based stuff.
I wrote as good an opposition as I could. Basically, I opposed it on multiple principles.
From the top, I absolutely detest this kind of censorship. But the bill states that the implementation will be defined (or rendered infeasible - yeah right) AFTER the bill passes. Said decision will be punted to a "working group" of industry folks. That alone stinks, since it places a lot of abuse potential outside of duly elected representation.
I've seen the ramifications of this "CV first" kind of engineering. Let's just say that it's a bad time when you're saddled with tech debt solely from a handful of influential people that really just wanted to work elsewhere.
> Note: you are going to get well under a 50% success rate here. Accept that most people flake. It may always feel painful (and nerds like us often are rejection-sensitive). You have to feel your feelings, accept it, and move on.
This is an incredibly good point. Like all things of this nature, I liken the process to panning for gold. In truth, you may not want to invest in people that aren't all that invested in you or the activity at hand. It stinks that the success rate is lower than chance, but it's probably better this way.
It's kind of their whole thing, when you think about it. They didn't get to where they are by playing nice with others. If you're supporting anything in the Apple ecosystem, the fix is in.
That's kind of brilliant. I had no idea that kind of thing would actually work. I always assumed that bidirectional connections were needed for Ethernet frames to function electrically, and I further assumed this applied to optical networking too.
For 100BaseFX and 1000BaseSX at least, there’s no auto negotiation for link speed etc. As long as it sees a carrier from what it thinks is the other end of the link, it’s happy.
This is a good point. I wonder how much NES emulator code is in Claude's training set? Not to knock what the author has done here, but I wonder if this is more of a softball challenge than it looks.
For me, the dividing line is how compact the language representation is, specifically if you can get the job done in one file or not.
I have no doubt that there are a lot of Go jobs that will fit in a 500-line script, no problem. But the language is much more geared towards modules of many files working together: user-defined types, multi-threading, and more. None of that's a concern for Bash, and Python ships enough native types to do most jobs without needing custom ones.
If you need a whole directory of code to make your bang-line-equipped Go script work, you may as well compile that down and install it to /usr/local/bin.
Also, the lack of bang-line support in native Go suggests that everyone is kinda "doing it wrong". The fact that `go run` just compiles your code to a temporary binary anyway points in that direction.
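For what it's worth, there is a well-known workaround (not official Go tooling; it assumes `sh` and `go` are on your PATH): a first line that is simultaneously a valid Go line comment and a shell command, so the file can be run as `sh script.go` even though Go itself rejects `#!` lines:

```go
///bin/true; exec /usr/bin/env go run "$0" "$@"
// The line above is a Go comment, but when this file is fed to sh,
// it runs /bin/true and then exec's `go run` on this same file,
// passing along any arguments. sh never reads past the exec.

package main

import "fmt"

// Greet is trivial placeholder logic so the "script" does something.
func Greet(name string) string {
	return "hello, " + name
}

func main() {
	fmt.Println(Greet("world")) // prints "hello, world"
}
```

The obvious downside is that every invocation pays the `go run` compile step, which somewhat proves the point above: past a certain size, you may as well `go build` once and install the binary.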