Let's Encrypt has been on my end-of-year donation list for the past 5 years. With all modern browsers requiring HTTPS everywhere, a world without Let's Encrypt would be really difficult for indie developers.
The core of Apple's problem boils down to apathy towards their product quality. I just recently switched from using Siri to Google Gemini in my car. The experience is dramatically better.
And this is the case across the board.
My friend's Fitbit works way better than my Apple watch.
My third and final example is how bad Apple's native dictation engine is. I can run OpenAI Whisper models on my Mac and get dramatically better output (a rough sketch of what I mean is below).
As a long-time Apple fan who's had everything since before the first iPhone, I feel this apathy towards product quality cannot be disguised as some strategic decision to fast-follow with AI.
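For reference, here's roughly what the local Whisper setup looks like. This is just a minimal sketch using the open-source openai-whisper package; the "base" model size and the audio file name are placeholders, and it assumes ffmpeg is installed.

```python
# Minimal local transcription sketch with the open-source "openai-whisper" package.
# Assumptions: `pip install openai-whisper`, ffmpeg on PATH, and a placeholder
# recording named "voice_memo.m4a" sitting next to this script.
import whisper

model = whisper.load_model("base")            # small model, runs fine on a recent Mac
result = model.transcribe("voice_memo.m4a")   # returns a dict with the full text and segments
print(result["text"])
```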
> My friend's Fitbit works way better than my Apple watch.
My husband has a Fitbit and it's so buggy he leaves it sitting on the shelf most of the time - the only time he wears it is for exercise.
Siri is bad, sure, but I have found that Google Assistant and Alexa have both become really bad over time, to the point where we just gave up on them completely. My husband is on Android and I'm really surprised at how bad the voice assistant is despite all the Gemini launches! (Mind you, he has an Australian accent.)
>> My friend's Fitbit works way better than my Apple watch.
That's odd, because I've used both, along with a bunch of other wearables (e.g. Whoop), and I wouldn't give up my Apple Watch for anything. Massively useful: it can take calls, make payments, stream music from my Apple playlists, read and reply to messages, and a ton of other things.
The Wear OS devices can do all that stuff too, and Fitbit is kind of getting blended into those devices piece by piece -- so after years of Fitbit use I can say that the best Fitbit device I've had is ... a Pixel Watch 4.
I mention this because, at least for the functionality you mention, I think the Pixel watches are catching up nicely.
... but they still haven't been able to make me feel less stupid talking into a watch for phone calls like some off-brand James Bond wannabe, even if it works great.
But beyond everything else you literally just said, it's only a handful of AI features that are better on Google products... and that seldom makes the product as a whole better.
You're arguing about product quality by using product availability examples.
Siri isn't competing with Gemini yet... Siri is old tech, Gemini is the new tech.
Same with dictation.
Siri hasn't been updated generationally with SOTA tech to compete with Gemini yet... it simply hasn't been updated. This is part of the "slow pace" the post is talking about (part of it, not the whole of the slowness).
For example, Amazon updated my old Echo dots with Alexa+ beta, and it's pretty good. I have Grok in my Tesla, and though I don't like Grok or xAI, it's there and I use it occasionally.
Apple hasn't done their release of these things yet.
How so? Their brand-new Siri _is_ available. I am using Apple Intelligence on my new iPhone. They even have half-baked ChatGPT integrations everywhere. They got into a lot of trouble last year for running ads overselling what their new Siri can do.
Overselling abilities is for sure a lack of quality.
The new Apple Intelligence version of Siri isn't out yet. It's scheduled to arrive with iOS 26.4 in early/mid 2026.
My assertion is that Apple hasn't yet released a generational competitor to Gemini or ChatGPT voice modes. That's a problem, but one specifically of availability and release, which... again (and despite the downvoters)... matches the assertion of the post ("slow AI pace").
If/when new Siri in 26.4 comes out and it sucks, then that'd be an issue of quality.
No, when I bought my first iPhone, Siri could start a stopwatch. Then it couldn't for 5 years, and today it can again. That's a big flaw for a product which can barely do anything else.
I only have Apple products because of the good build quality, but they're otherwise quite bad products.
I think Apple secretly doesn’t want more market share, to avoid anticompetitive accusations.
I've grown so used to Apple shipping buggy software that I wait a year or more before upgrading my Mac to a major version. I do all the minor releases and security patches, of course.
I owe a large part of my career success to PHP, which I learned back in the day. But recently I picked it up again because I had to do some maintenance work, and the package management experience was really, really bad.
I really think there's a big opportunity for somebody to create the astral.sh for PHP.
With a proper package manager, PHP could do way more than it presently can.
I cancelled my paid CodeRabbit subscription, because it always worries me when a post has to go viral on HN for a company to even acknowledge that an issue occurred. Their blog has no mention of this vulnerability, and they don't have any new posts today either.
I understand mistakes happen, but lack of transparency when these happen makes them look bad.
Both articles were published today. It seems to me that the researchers and CodeRabbit agreed to publish on the same day. This is common practice when the company decides to disclose at all (disclosure is not required unless customer data was leaked and there's evidence of that; here they are choosing to disclose even though they don't have to).
When the security researchers praise the response, it's a good sign tbh.
The early version of the researchers' article didn't have the whole first section where they "appreciate CodeRabbit’s swift action after we reported this security vulnerability" and the subsequent CodeRabbit talking points.
> The researchers identified that Rubocop, one of our tools, was running outside our secure sandbox environment — a configuration that deviated from our standard security protocols.
This is still ultra-LLM-speak (and no, not just because of the em-dash).
A few years ago such phrases would have been candidates for a game of bullshit bingo; now all the BS has been ingested by LLMs and is being regurgitated upon us in purified form...
Absolutely. In my experience every AI startup is full of AI maximalists. They use AI for everything they can - in part because they believe in the hype, in part to keep up to date with model capabilities. They would absolutely go so far as to write such an important piece of text using an LLM.
I wonder how many of these intern-type tasks LLMs have taken away. The tasks I did as a newbie might not have seemed so relevant to the main responsibilities, but they helped me build institutional knowledge, get a feel for "how things work", and learn who to talk to (and how) to make progress. Now the intern will probably do it using LLMs instead of talking to other people. Maybe the results will be better, but that interaction is gone.
I think there is an infinite capacity for LLMs to be either beneficial or harmful. I look back at learning and think, man, how amazing would it have been if I could have had a personalized tutor guiding me and teaching me the concepts I was having trouble with in school. I think about when I was learning to program and didn't have the words to describe the question I was trying to ask, and felt stupid, or like an inconvenience, when trying to ask more experienced devs.
Then on the flip side, I’m not just worried about an intern using an LLM. I’m worried about the unmonitored LLM performing intern, junior, and ops tasks, and then companies simply using “an LLM did it” as a scapegoat for their extreme cost cutting.
They first disabled Rubocop to prevent further exploitation, then rotated keys. If they had waited to deploy the fix, that would have meant letting compromised keys remain valid for 9 more hours. According to their response, all other tools were already sandboxed.
However, their response doesn't remediate putting secrets into environment variables in the first place - that is apparently acceptable to them, and it raises a red flag for me.
Yeah, I thought the same. They were really unlucky: the only analyzer that let you include and run code was the one outside of the sandbox. What were the chances?
> putting secrets into environment variables in the first place - that is apparently acceptable to them and sets off a red flag for me
Isn't that standard? The other options I've seen are .env files (amazing dev experience, but not as secure), and AWS Secrets Manager and similar competitors like Infisical. Even with the latter, you need keys to authenticate with the secrets manager, and I believe it's recommended to store those as env vars.
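For the sake of discussion, here's a minimal sketch of the secrets-manager route with boto3 and AWS Secrets Manager. The secret name and region are placeholders, and it illustrates the chicken-and-egg problem: the call itself still relies on ambient AWS credentials (an instance role, env vars, etc.).

```python
# Minimal sketch: fetch a secret at runtime instead of exporting it as an env var.
# Assumptions: `pip install boto3`, ambient AWS credentials (e.g. an instance role),
# and a secret named "prod/github-app-key" created beforehand (placeholder name).
import boto3

client = boto3.client("secretsmanager", region_name="us-east-1")
resp = client.get_secret_value(SecretId="prod/github-app-key")
github_app_key = resp["SecretString"]  # keep it in memory rather than in the process environment
```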
They weren't published together. They managed to get the researchers to add CodeRabbit's talking points in after the fact; check out the blue text on the right-hand side.
Most security bugs get fixed without any public notice. Unless there was a breach of customer information (and that can often be verified), there are typically no legal requirements. And there's no real benefit to doing it either. Why would you expect it to happen?
> Unless there was a breach of customer information (and that can often be verified), there are typically no legal requirements.
If the company is regulated by the SEC, I believe you will find that, since at least 2023, any “material” breach is reportable once the determination of materiality is reached.
Sure. And these types of "we fixed it and confirmed nobody actually exploited it" issues are not always treated as material. You can confirm that, for example, by checking SEC reports for each CVE in commercial VPN gateways... or the lack thereof.
Loading that gist works for me on both Firefox and Chrome.
You can submit a bug report on GitHub with more environment details, screenshots, and console logs (if available) and I might be able to take a closer look.
While the speeds are great, in my experience with Cerebras it's really hard to get any actual production-level rate limits or token quantity allocations. We cannot design systems around them, so we use other vendors.
We've spoken to their sales teams, and we've been told no.
This is awesome. I always ask people to code in phone interviews, and I use Google Docs.
You will be surprised at the number of people who have good resumes but cannot code at all. Watching someone code even a simple Fizz Buzz problem will help you understand the person's programming abilities better.
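For anyone who hasn't run one of these screens, the exercise really is that small. A plain FizzBuzz in Python, just as a reference point:

```python
# Classic FizzBuzz: print the numbers 1..100, replacing multiples of 3 with "Fizz",
# multiples of 5 with "Buzz", and multiples of both with "FizzBuzz".
for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)
```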