I mean, that's just a truism - it's not really engineering advice. Maybe Postgres is just a hammer, but when you're building a house there are a lot of nails.
If you've got to store 5 GB videos, maybe reach for an object store instead of Postgres. But for most uses Postgres is a solid choice.
Not quite - I used it at work too. The first thing that page suggests is using `Oban.Notifiers.PG`, which uses distributed Erlang's process group implementation, not Redis. You only really need Redis if you're not running with Erlang clustering, but doing that rules out several other great Elixir features.
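For reference, a minimal sketch of what that configuration looks like (app and repo names here are hypothetical; the `notifier` line is the relevant part):

```elixir
# config/config.exs -- hypothetical app/repo names
config :my_app, Oban,
  repo: MyApp.Repo,
  # Use distributed Erlang's process groups instead of the default
  # Postgres LISTEN/NOTIFY notifier; requires the BEAM nodes to be
  # clustered (e.g. via libcluster).
  notifier: Oban.Notifiers.PG,
  queues: [default: 10]
```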
Bootlickers. The tech industry is crawling with them unfortunately - perfectly happy to bury their heads in the sand and pretend that they'll never be out of favor. The Hacker News team doesn't seem to care, this has been happening all year with important information.
> It's like being mad at your bank that somebody stole your credit card on the subway and made purchases with it.
It's being mad that a store sold me a counterfeit Rolex, actually. Spotify might claim to just be a "marketplace" like every other platform these days, but they're still the ones hosting that page that passes off slop as legitimate work by another artist. Spotify has a responsibility to govern what is hosted and sold on their platform.
> He was directly told that he won't be promoted because he was a white man.
Even if that was true (I don't believe his allegation), that's just _one company_. He obviously considered himself a very intelligent and capable person, so it seems the obvious next step would be to go work basically anywhere else? The Dilbert comics never seemed to push the ideal of company loyalty, so I don't think he felt trapped by obligation there.
One only needs to look at the upper management and board of any Fortune 500 company to disprove the idea that only non-white women are getting promoted.
Good for them, it's nice to see some management that hasn't totally bought into this "no workers, only subscription AI bots" vision of the future that so many tech CEOs are selling.
Personally I would never pay for tabletop miniatures or lore books generated by AI. It's the same core problem as publishing regurgitated LLM crap in a book or using ChatGPT to write an email - I'm not going to spend my precious time reading something that the author didn't spend time to write.
I am perfectly capable of asking a model to generate a miniature, or a story, or a dumb comment for Reddit. I have no desire to pay a premium for someone else to do it and get no value from the generated content - the only original part is the prompt, so if you try to sell AI-generated "content" you might as well just sell the prompt instead.
There are 30 years of existing GW art and design that a bot could repackage and regurgitate for the next decade.
They need artists to develop new things, and that means protecting the art.
It also means they can fight to have it excluded from AI bots and control their copyrights & trademarks -- GW is notorious for chasing folks, renaming things to be more copyrightable, etc.
Cool. That sure sounds nice and simple. What do you do when the multiple LLMs disagree on what the correct tests are? Do you sit down and compare 5 different diffs to see which have the tests you actually want? That sure sounds like a task you would need an actual programmer for.
At some point a human has to actually use their brain to decide what the actual goals of a given task are. That person needs to be a domain expert to draw the lines correctly. There's no shortcut around that, and throwing more stochastic parrots at it doesn't help.
Just because you can't (yet) remove the human entirely from the loop doesn't mean that economising on the use of the human's time is impossible.
For comparison, have a look at compilers: nowadays approximately no one writes their software by hand in machine code. We write a 'prompt' in something like Rust or C, and ask another computer program to create the actual software.
We still need the human in the loop here, but it takes much less human time than creating the ELF directly.
It’s not “economizing” if I have to verify every test myself. To actually validate that tests are good I need to understand the system under test, and at that point I might as well just write the damn thing myself.
This is the fundamental problem with this “AI” mirage. If I have to be an expert to validate that the LLM actually did the task I set out, and isn’t just cheating on tests, then I might as well code the solution myself.
From a PM perspective, the main differentiator between an engineering team and AI is "common sense". As these tools get used more and more, enough training data will be available that AI's "common sense" in terms of coding and engineering decisions could be indistinguishable from a human's over time. At that point, the only advantage a human has is that they're also useful on the ops and incident response side, so it's beneficial if they're also comfortable with the codebase.
Eventually these human advantages will be overcome, and AI will sufficiently pass a "Turing Test" for software engineering. PMs will work with them directly and get the same kinds of guidance, feedback, documentation, and conversational planning and coordination that they'd get from an engineering team, just with far greater speed and less cost. At that point, yeah you'll probably need to keep a few human engineers around to run the system, but the system itself will manage the software. The advantage of keeping a human in the loop will dwindle to zero.
I can see how LLMs can help with testing, but one should never compare LLMs with deterministic tools like compilers. LLMs are entirely a separate category.
> otherwise they would have paid for the service they were "depending fundamentally" on.
It's a "*.ai" company. Deductive probably spent more human time on their fancy animated landing page than engineering their actual system. If they vibe coded most of their product, I wouldn't be surprised if they didn't even know they were using Datadog until they got the email.
Doesn’t “doxxing” refer to publicly linking a private/anonymous identity to a public one? It’s disingenuous to imply the congresswoman had a problem with “linking to a public web page.” She had a problem with linking to a public web page in the context of unmasking the anonymous identity of a US military member. The linkage, not the linking, is the doxxing.
I don’t agree with Luna here, and my exposure to this story is limited to what’s in the article. But I do agree with the GP comment regarding the lack of impartiality in this article.
It’s the military’s responsibility to protect the identity of their operators, by ensuring they don’t publish information that could lead to doxxing. If they miss something, that’s on them. And prosecuting a private citizen is a deflection of responsibility that ignores the risk of actual motivated attackers (e.g. Maduro loyalists) uncovering the same information, without publishing it to twitter, but using it to threaten or harm the doxxed victim.