CNC machines drove down operator wages. It's similar to the translator example, where the machine code is written by someone else but the person running the machine still needs to understand it. Simply pushing the go button is dangerous; being able to adapt is critical.
Jobs where a machinist is in charge of large chunks of the process are rarer. A large shop will have one person setting up many machines to maximize throughput.
The biggest tell for AI writing is just being AI-adjacent. I've started avoiding reading AI articles here because (surprise) they all feel like a ChatGPT transcript.
> Reading it isn't the most fun, but let's face it - most professional reading isn't the most fun. You're probably skimming most of the content anyways.
I find it difficult to skim AI writing. It's persuasive even when there's minimal data. It'll infer or connect things in ways that flow nicely but simply don't make sense.
This blog post is pretty readable, but it's still obviously written with the help of an LLM.
A common trend is that LLMs lack nuance and write everything with the same enthusiasm. So in a blogpost it'll infer that things are novel or good/bad when they're actually neutral.
Not a bad blogpost because of this, but you need to be careful reading it. I've noticed most of the articles on the HN front page are written with AI assistance.
Yeah, minivans can be quite useful. It's a bit of a PITA to fold or remove the seats, depending on the model, and they typically can't tow much or carry much weight, but for the occasional large item they can work.
The real disappointment is that none of the sources are linked in the text. Instead, it's just random underlined words: the classic ChatGPT over-formatting, with lots of extra underlining and bolding.
I appreciate that a 10-15 minute article summarizes a 25 minute video, but it's hard to hide the real author.
It's good if you have black text on a white background, since your app can have good contrast without searing your eyes. People started switching to dark themes to avoid having their eyeballs seared by monitors with the brightness turned up.
For things filmed with HDR in mind it's a benefit. Bummer that things always get taken to the extreme.
I only use light themes for the most part, and HDR videos look insane and out of place. If you scroll past an HDR video on Instagram, you get an eyeball-searing section of your screen because your eyes aren't adjusted to that brightness. Then once you scroll it off the screen and there's no HDR content left, everything looks dim and muted because you just got flashbanged.
That does not sound enjoyable and seems like HDR abuse.
The "normal" video should aim to be moderately bright on average, the extra peak brightness is good for contrast in dark scenes.
Other comments comparing it to the loudness war are apt. Some music streaming services enforce loudness normalization to solve this: any brickwalled song gets played a bit quieter when the app is acting as a radio.
Instagram could enforce this too, but it seems unlikely unless it actually affects engagement.
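For the curious, here's a rough sketch of what that normalization does. Real services measure LUFS per BS.1770 with K-weighting and gating; plain RMS below is a simplification, and the -14 dB target just mirrors Spotify's published level:

    import numpy as np

    TARGET_DB = -14.0  # assumed normalization target

    def normalization_gain(samples):
        """Return the linear gain that brings the track to the target loudness."""
        rms = np.sqrt(np.mean(samples ** 2))
        loudness_db = 20.0 * np.log10(rms + 1e-12)
        return 10.0 ** ((TARGET_DB - loudness_db) / 20.0)

    # A brickwalled track (RMS near 0 dBFS) gets a gain well below 1.0,
    # i.e. it simply plays quieter -- crushing the dynamics bought nothing.
    brickwalled = np.clip(np.sin(np.linspace(0, 2000, 48000)) * 3.0, -1.0, 1.0)
    print(normalization_gain(brickwalled))  # ~0.2x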
Definitely reeks of someone who doesn't know what makes a readable blogpost and hoped the LLM did.
I was not familiar with the hardware, so I was disappointed there wasn't a picture of the device. I tried to skim the article and it's a mess: inconsistent formatting and emoji, without a single graph to visualize the benchmarks.
I read the whole thing now and it's filled with slop. I don't really care about the emojis and the marketing voice too much. I do care that it's impossible to tell what the author cared about and what they didn't, or whether any of it is made up or extrapolated.
I bet the input to the LLM would have been more interesting.
> Verdict: Inference speed scales proportionally with model size.
The author only tried one model size, and it ran faster than NVIDIA's reported speed for a larger model. Not really a "Verdict".
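Actually testing that verdict would look something like this (the numbers below are made up purely to show the shape of the check; none come from the post):

    import numpy as np

    params_b = np.array([3, 7, 13, 70])    # model sizes, billions of params (hypothetical)
    tok_per_s = np.array([90, 40, 22, 4])  # measured speeds (hypothetical)

    # "Scales proportionally" predicts tok/s ~ 1/params,
    # i.e. a slope near -1 in log-log space.
    slope, _ = np.polyfit(np.log(params_b), np.log(tok_per_s), 1)
    print(f"log-log slope: {slope:.2f}")   # ~ -1.0 would support the claim

One data point fits any curve; you need several sizes before "proportional" means anything.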
> Verdict: 4-bit quantization is production-viable.
That's not really something you can conclude from messing around with it and saying you like the outputs.
> GPU Inference is Fundamentally Broken
Probably not? It probably just doesn't work in llama.cpp right now. It takes a while reading this to work out that they tried ollama and then later llama.cpp, which I'd guess is basically testing llama.cpp twice, since ollama uses llama.cpp under the hood. Actually, I don't even believe that much; I'm sure the author ran into errors that might be a pain to figure out, but there's no evidence it's worse than that.
But then it says this is the "root cause":
ARM64 + Blackwell + CUDA 13.0 = Bleeding Edge
↓
Limited production testing
↓
Edge cases in numerical precision (inference)
↓
Memory management issues (training)
Am I to believe GPU inference is really fundamentally broken? I'm not seeing the case made here, just claims. At this point the LLM seems to have gotten confused about whether it's talking about the memory fragmentation issue or the GPU inference issue. But it's hard to believe anything from this point on in the post.