Hacker News | the_arun's comments

What is a strategy? You need to elaborate on that in the pricing section.

Thank you for the feedback - agreed.

It’s an extraction pattern for a given site, so you can reuse it. Think of a pattern that extracts all forum posts, then applying it to different pages with the same format, like the "new", "show", and "show new" pages on HN.
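As a rough illustration of the idea only (this is not our actual pattern format, and the selectors are just guesses at HN's markup), a single extraction rule reused across pages that share a layout could look like this:

    # One "pattern" (a set of selectors), reused on several pages with the same layout.
    # Selectors below are illustrative and may need adjusting for the real markup.
    import requests
    from bs4 import BeautifulSoup

    PATTERN = {"item": "tr.athing", "title": "span.titleline a"}  # assumed HN selectors

    def extract(url, pattern):
        soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
        titles = []
        for row in soup.select(pattern["item"]):
            link = row.select_one(pattern["title"])
            if link:
                titles.append(link.get_text(strip=True))
        return titles

    # The same pattern applied to different HN listing pages.
    for page in ("https://news.ycombinator.com/newest",
                 "https://news.ycombinator.com/show",
                 "https://news.ycombinator.com/shownew"):
        print(page, extract(page, PATTERN)[:3])

You define the pattern once, then point it at any page that follows the same structure.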


It is way too expensive for me as well. Yeah, the world's brightest lamp is the costliest to buy & maintain.

Minor typo: the author's name is Badrish.

I understand this helps if we have our own LLM runtime. What if we use external services like ChatGPT / Gemini (LLM providers)? Shouldn't they provide this feature to all their clients out of the box?

This works with Claude Code and Codex... so you can use it with either of those; you don't need a local LLM running... :)

Using this, if a first-time user logs in, could we share automated scripts that they can execute to create sample workflows?

Yes. Record an automation flow, export the code, and share it. New users can run it as-is on our infra, or modify it and run it elsewhere.
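For example (purely illustrative, not our exact export format), an exported flow might just be an ordinary browser-automation script that a new user can run directly:

    # Hypothetical shared workflow script (placeholder URL, selectors, and data).
    # Illustrates the "record once, export the code, share it" idea using plain
    # Playwright; the real exported code may differ.
    from playwright.sync_api import sync_playwright

    def run_sample_workflow():
        with sync_playwright() as p:
            browser = p.chromium.launch(headless=True)
            page = browser.new_page()
            page.goto("https://example.com/login")         # placeholder URL
            page.fill("#email", "new-user@example.com")    # placeholder selector/data
            page.click("text=Sign in")
            page.wait_for_load_state("networkidle")
            browser.close()

    if __name__ == "__main__":
        run_sample_workflow()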

I would love to switch from Mac, but Mac hardware is so resilient & I haven't seen that in the PC world.

I just got a new work laptop: the ThinkPad X1 Carbon Gen 13. It's gorgeous: it weighs a bit over 900 grams, has an amazing matte OLED screen, an Intel Lunar Lake chip that sips power (1-2W idle) and is fast enough to compile Rust if needed, and an amazing keyboard; the touchpad is great but I just use the TrackPoint. Everything works out of the box on Linux (they even ship it with either Fedora or Ubuntu, but I installed CachyOS).

Suspend: always works. Battery life: great, lasts the whole day. Wifi: always works, connects fast, and is fast.

The build quality is really nice, especially the carbon fiber body, which doesn't feel so cold/hot to the touch.


If you have an older Mac (based on an Intel CPU), then it may actually already work out of the box for running Linux. I'm running Debian on a 2015 MacBook Pro; I fully replaced the original system and haven't looked back.

What do you mean by that? As a long-term Windows user, I've never had any issues running my laptops and PCs for years and years.

Dell, HP and Lenovo have been phenomenally resilient for us, going back more than 2 decades.

You can run Linux on Apple Silicon with Asahi Linux.

There's a whole lot of asterisks that you're leaving out of that statement.

M1 & M2: yes, with slight caveats. M3-M5: not really (at least yet).

Inspired by this, I am wondering: can an LLM play Among Us and win? How about pitting multiple LLMs against each other, with humans watching?

Questions:

1. Who is responsible for adding guardrails to ensure all papers coming in are thoroughly checked & reviewed?

2. Who reviews these papers? Shouldn't they own responsibility for accuracy?

3. How are we going to ensure this is not repeated by others?


Reviewers are unpaid. It's also quite common to farm out the actual review work to grad students, postdocs, and the like. If you're suggesting adding liability, then you're just undermining the small amount of review that already takes place.

There needs to be prestige for tearing down heavily flawed work.

I don’t disagree. But have you tried estimation using Claude or Cursor? If not, give it a try.

The article resonates with me. This time around, we asked Cursor to produce an estimate given the PRD & codebase, and it gave a very detailed estimate. We're currently in the process of getting it down to what leadership wants (as in the article). AI estimates much better & faster than we do, and we bring the estimate down much faster than AI would, sometimes by changing the PRD or prioritizing the flows & cutting down the scope of the MVP. Honestly, AI is a great tool for estimation.
