Hacker News | fcarraldo's comments

please disclose this to the people you’re dating

This looks great! Looking forward to trying it out. I recently tried moving to OpenCode but it didn’t quite scratch the itch UX-wise.


> I imagine most major news outlets don't have RSS feeds these days

I’m not aware of any that don’t. RSS is alive and well.


You can with Pyrlang or whatever other cursed implementation of Python on top of the Erlang VM you’d prefer.


I don’t think there are very many k8s clusters running 100s of pods per node. The default maximum is 110. You can, of course, scale beyond this, but for most use cases you’ll run into etcd performance issues, IP address space constraints, connection limits, and IOPS and networking limitations.

At 1M nodes I’d still expect an average of a dozen or so pods per node.
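For context, the 110-pod default is the kubelet's `maxPods` setting, which can be raised per node via a `KubeletConfiguration` file (a minimal sketch; the `maxPods: 250` value here is just an illustration, not a recommendation):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Default is 110. Raising this only helps if each node's pod CIDR
# has enough IPs; the controller-manager's --node-cidr-mask-size
# (default /24, i.e. 256 addresses) bounds pods-per-node too.
maxPods: 250
```

Even with a larger `maxPods`, the cluster-wide limits above (etcd, IP space, networking) still apply, which is why dense nodes are rare in practice.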


Given the unlikelihood that you need it serviced, it’s significantly less wasteful than a case made of plastic and shipped globally.


It depends on how clumsy you are. Some people just are. Sounds like you're not. Lucky for you.


Builds typically aren’t retained forever.


Also currently being discussed[0], on this very site, are both speculation that Meta is surreptitiously scanning your camera roll and a comment from someone claiming they worked on an earlier implementation to do just that.

It’s shocking to me that anyone who works in our industry would trust any company to do as they claim.

[0] https://news.ycombinator.com/item?id=45062910



There is an enormous gap between the behavior covered in those two cases and training machine learning models on user data that a company has specifically said it will not use for training.


OpenAI, Meta and X all train on user-submitted data; in Meta's and X's case, much of it was submitted long before the advent of LLMs.

It’s not a leap to assume Anthropic does the same.


By X do you mean tweets? Can you not see how different that is from training on your private conversations with an LLM?

What if you ask it for medical advice, or legal things? What if you turn on Gmail integration? Should I now be able to generate your conversations with the right prompt?


I don't think AI companies should be doing this, but they are doing it. All are opt-out, not opt-in. Anthropic is just changing their policies to be the same as their competition.

xAI trains Grok on both public data (Tweets) and non-public data (Conversations with Grok) by default. [0]

> Grok.com Data Controls for Training Grok: For the Grok.com website, you can go to Settings, Data, and then “Improve the Model” to select whether your content is used for model training.

Meta trains its AI on things posted to Meta's products, which are not as "public" as Tweets on X, because users expect these to be shared only with their networks. They do not use DMs, but they do use posts to Instagram/Facebook/etc. [1]

> We use information that is publicly available online and licensed information. We also use information shared on Meta Products. This information could be things like posts or photos and their captions. We do not use the content of your private messages with friends and family to train our AIs unless you or someone in the chat chooses to share those messages with our AIs.

OpenAI uses conversations for training data by default [2]

> When you use our services for individuals such as ChatGPT, Codex, and Sora, we may use your content to train our models.

> You can opt out of training through our privacy portal by clicking on “do not train on my content.” To turn off training for your ChatGPT conversations and Codex tasks, follow the instructions in our Data Controls FAQ. Once you opt out, new conversations will not be used to train our models.

[0] https://x.ai/legal/faq

[1] https://www.facebook.com/privacy/genai/

[2] https://help.openai.com/en/articles/5722486-how-your-data-is...


