It’s an extraction pattern for a certain site, so you can reuse it. Think of a pattern that extracts all forum posts, then apply it to different pages with the same format, like the 'shownew', 'show', and 'newest' pages on HN.
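A minimal sketch of the idea, assuming a "pattern" is just a set of CSS selectors learned once for a site and then reused on every page that shares the template. The selector strings and the `extract_posts` helper here are illustrative, based on HN's public markup, not any particular tool's API:

```python
import requests
from bs4 import BeautifulSoup

# One reusable extraction pattern for HN listing pages (illustrative).
HN_POST_PATTERN = {
    "row": "tr.athing",             # one table row per post
    "title": "span.titleline > a",  # title link inside the row
}

def extract_posts(url: str, pattern: dict) -> list[dict]:
    """Apply the same extraction pattern to any page with this layout."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    posts = []
    for row in soup.select(pattern["row"]):
        link = row.select_one(pattern["title"])
        if link:
            posts.append({"title": link.get_text(), "url": link.get("href")})
    return posts

# The pattern is extracted once, then reused across pages with the same format.
for page in ("https://news.ycombinator.com/newest",
             "https://news.ycombinator.com/show",
             "https://news.ycombinator.com/shownew"):
    print(page, len(extract_posts(page, HN_POST_PATTERN)))
```

The point is that the expensive step (figuring out the selectors, e.g. with an LLM) happens once per site, and the cheap step (running them) happens per page.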
I understand this helps if we run our own LLM runtime. What about external services like ChatGPT / Gemini (LLM providers)? Shouldn't they provide this feature to all their clients out of the box?
I just got a new work laptop: the ThinkPad X1 Carbon Gen 13. It's gorgeous: it weighs a bit over 900 grams, has an amazing matte OLED screen, an Intel Lunar Lake chip that sips power (1-2 W idle) and is fast enough to compile Rust if needed, and an amazing keyboard. The touchpad is great, but I just use the TrackPoint. Everything works out of the box on Linux (they even ship it with either Fedora or Ubuntu, but I installed CachyOS).
Suspend: always works.
Battery life: great, lasts the whole day.
Wi-Fi: always works, connects fast, runs fast.
The build quality is really nice, especially the carbon fiber body, which doesn't feel as cold or hot to the touch.
If you have an older Mac (one based on an Intel CPU), Linux may actually already work out of the box for you. I'm running Debian on a 2015 MacBook Pro, having fully replaced the original system, and I haven't looked back.
Reviewers are unpaid. It's also quite common to farm out the actual review work to grad students, postdocs, and the like. If you're suggesting adding liability, then you're just undermining the small amount of review that already takes place.
The article resonates with me. This time around, we asked Cursor to estimate, given the PRD & codebase, and it gave a very detailed estimate. We're currently in the process of getting it down to what leadership wants (as in the article). AI estimates much better & faster than we do; we bring the estimate down much faster than the AI does, sometimes by changing the PRD, reprioritizing the flows, or cutting the scope of the MVP. Honestly, AI is a great tool for estimation.