tempoponet's comments | Hacker News

And here's the blog article describing the widget: https://maurycyz.com/misc/ads/


Remember when Netflix almost split its brand with "Qwikster"? That was to be the dying DVD-by-mail service, but the whole debacle did nothing but confuse people.


True, although Netflix knew the DVD business had no permanent future anyway, so they really didn't care. If they'd picked a less silly name like "DVDflix" or something, it wouldn't have become a viral story, but either way it wouldn't have changed NFLX's fortunes.


The new Alexa uses Claude under the hood, and it also misinterprets my intent, just with a 2-second-longer delay and a slightly more approachable tone.


Everyone has their own hill to die on; that's the thing about personal computing. It's the same if you ask someone why they can't switch mobile OS: it's some seemingly trivial app or feature that almost nobody cares about.


They support their phones for years longer than any other vendor. This has been widely understood for probably 10+ years at this point.

There's plenty of room for criticism without a blanket conspiracy that doesn't match what most can observe.


Support just means that the manufacturer still releases OS updates; it says absolutely nothing about the quality of those updates. What if those updates simply make things worse? Every iPhone user I know says the same thing without any coordination: on older iPhones it's better to stop updating to newer major OS releases.


I was really looking for tangible, actionable advice since I'm facing slow adoption in my org. This post seems to hide behind the "secret sauce" that it claims made all of the difference.


I wanted to reach out but I couldn't find your email. Mine's in the profile if you want to chat.


Out of curiosity, do you have thoughts on why the slow adoption?


Once local models are good enough, there will be a $20 cloud provider that can give you more context, parameters, and t/s than you could dream of at home. This is true today with services like Groq.


Not exactly. Those pricing models assume intermittent usage. If you're running an AI engineer with a sophisticated agent flow, the usage is constant and continuous. Over 2 years, that can price out to the equivalent of a dedicated machine at home.
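
Back-of-envelope, with made-up numbers (the plan price and hardware cost below are assumptions, not quotes from any provider):

    # Purely illustrative: a hypothetical always-on subscription vs. a one-time local build.
    months = 24
    plan_per_month = 200             # assumed price of a heavy-use plan, USD
    local_rig = 4500                 # assumed cost of a capable local GPU box, USD

    print(months * plan_per_month)   # 4800 -- same ballpark as the rig
    print(local_rig)                 # 4500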

I had 3 projects running today and hit my Claude Max session limits twice in about 90 minutes. I'm now keeping it down to 1 project, and I may pause that until the evening, when I don't need Claude Web. If I could run it passively on my laptop, I would.


Anthropic used to have unlimited subscriptions, then people started running agents 24/7.

Now they have 5-hour buckets of limited use.

Groq most likely stays afloat because they're a bit player, propped up by VC money.

With a local system I can run it at full blast all the time; nobody can suddenly make it stupid by reallocating resources to training their new model, and nobody can censor it or push stealth updates that make it perform worse.


Groq and Cerebras definitely have the t/s, but their hardware is tremendously expensive, even compared to standard data center GPUs. Worth keeping in mind if we're talking about a $20 subscription.


Yet I'll tell it 100 times to stop using em dashes and it refuses.


What kind of monster would tell an LLM to avoid correct typography?


One that wants to hide that they're using LLMs.


That was how I read it as well: as if it had a built-in Lambda-style service in the cloud.

If we're just talking about API support for calling Python scripts, that's pretty basic to wire up with any model that supports tool use.
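
For what it's worth, the wiring is roughly: advertise a "run this script" tool to the model, then execute whatever call it hands back. A minimal sketch; the tool schema and the fake_model_response stub are illustrative stand-ins for whatever model client you actually use:

    # Sketch of tool use: the model asks for a script to be run, we run it.
    # fake_model_response stands in for a real LLM client (Claude, GPT, a local
    # model, ...); only the dispatch side matters here.
    import subprocess

    TOOLS = [{                         # schema you'd advertise to the model
        "name": "run_script",
        "description": "Run a local Python script and return its stdout.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string"},
                "args": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["path"],
        },
    }]

    def fake_model_response(prompt: str) -> dict:
        # A real client would send `prompt` plus TOOLS to the model and parse
        # the tool call out of its reply; this canned answer keeps the sketch
        # self-contained.
        return {"tool": "run_script", "arguments": {"path": "hello.py", "args": []}}

    def dispatch(call: dict) -> str:
        # Execute the requested script and hand stdout back for the model's next turn.
        if call["tool"] == "run_script":
            a = call["arguments"]
            done = subprocess.run(
                ["python", a["path"], *a.get("args", [])],
                capture_output=True, text=True, timeout=60)
            return done.stdout
        raise ValueError(f"unknown tool: {call['tool']}")

    if __name__ == "__main__":
        call = fake_model_response("Run hello.py and summarize its output.")
        print(dispatch(call))          # assumes a hello.py exists next to this file

The per-provider differences are mostly in how the tool schema is declared and how the tool call comes back; the dispatch loop stays the same.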


Earlier this week a Cursor AI support agent told a user they could only use Cursor on one machine at a time, causing the user to cancel their subscription.

