I find that I use it on isolated changes where Claude doesn’t really need to access a ton of files to figure out what to do, and I can easily use it without hitting limits. The only time I hit the 4-5 hour limit is when I’m going nuts on a prototype idea and vibe coding absolutely everything, and usually when I hit the limit, I’m pretty mentally spent anyway, so I use it as a sign to go do something else. I suppose everyone has different styles and different codebases, but for me I can pretty easily stay under the limit, so it’s hard to justify $100 or $200 a month.
That's not face recognition. That's face detection. It just detects faces and slaps on a label from a pre-selected list. Come on, this doesn't even pass the basic smell test. "Facial recognition" my ass. It doesn't recognize anyone. I could build this in a cave with scraps. There's a huge difference between the two: recognition means you found a known person, detection means you found a person.
That's about the difference between eating sodium chloride and eating sodium.
This kind of privacy slop is wildly popular in tech circles. Each participant just posts uninformed garbage and then they link to each other as “citations” for claims that are wholly made up. The quality of information on this website has really dropped now that it’s full of junior engineers and interns.
Those guys always obsess over CVEs and privacy and they’re always wrong about everything but have learned to mimic the language of people who know stuff. “There’s some evidence” / “here’s a source”. Ugh. Can’t stand it.
I wrote my own flashcard app with a very basic import-from-Anki feature, and I have to admit that I underestimated how Anki handles it. My first attempt at import was very naive and sort of "flattened" the imported data into simple front/back content. It lost a lot of fidelity from the original Anki data.
After investigating the way Anki represents its flashcards a bit more, I can really appreciate the way Anki uses notes, models, and templates to essentially create "virtual cards" (my term).
I suspect other people creating their own flashcard apps underestimate the data model Anki uses and have a hard time matching their own data model with Anki's, which may be why decent import options are hard to find. If someone wants to support Anki deck import, they have to essentially use the same data model to represent notes and models (plus cloze deletions). I'm now adopting Anki's model for my flashcard app for better import fidelity.
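To make the "virtual cards" idea concrete, here's a minimal sketch (my own naming and a deliberately simplified template engine, not Anki's actual schema; real Anki templates also support conditionals and cloze deletions) of how one note plus a model's card templates can yield several cards:

```python
from dataclasses import dataclass

@dataclass
class Note:
    fields: dict          # e.g. {"Word": "hund", "Meaning": "dog"}

@dataclass
class CardTemplate:
    name: str
    front: str            # template string using {FieldName} placeholders
    back: str

def render(template: str, fields: dict) -> str:
    # Stand-in for a real template engine: substitute each field value.
    for name, value in fields.items():
        template = template.replace("{" + name + "}", value)
    return template

def cards_for_note(note: Note, templates: list) -> list:
    # Each template turns the same note into a distinct virtual card.
    return [(render(t.front, note.fields), render(t.back, note.fields))
            for t in templates]

note = Note({"Word": "hund", "Meaning": "dog"})
templates = [
    CardTemplate("Recognition", "{Word}", "{Meaning}"),
    CardTemplate("Recall", "{Meaning}", "{Word}"),
]
print(cards_for_note(note, templates))
# one note -> two cards: [('hund', 'dog'), ('dog', 'hund')]
```

The payoff of this model is that editing the note's fields updates every derived card at once, which is exactly the fidelity a naive front/back import throws away.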
Regarding the SQLite data format, I was thinking it would be great if there were a text-based format instead for defining the deck and its contents as that would make it much easier to collaborate on shared decks on GitHub, like you suggest. It would be great to have a community work on essential flashcard decks together in an open format that encourages branching and collaboration. I know some groups do this with Anki decks, but I can't imagine the SQLite file format makes it easy to collaborate.
I don't think it would be that hard to come up with a universal text file-based format for a flashcard deck that supports notes, models, templates, and assets. For instance, we could place each note in its own text file and have the filename encode a unique ID for that particular note. Having unique identities for everything would make it easier to re-import updated decks to apply new changes if you had previously imported the deck. The note files could also be organized into sub-folders to make it easier to group information that should be learned together.
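As a sketch of what that could look like (the layout and the "Field: value" line syntax are invented for illustration, not an existing standard): one note per file, the filename carrying the unique ID, and sub-folders for grouping.

```python
import tempfile
from pathlib import Path

def parse_note(path: Path) -> dict:
    """Read one note file of 'Field: value' lines into a dict."""
    fields = {}
    for line in path.read_text(encoding="utf-8").splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip()] = value.strip()
    # The filename stem is the note's stable unique ID.
    return {"id": path.stem, "fields": fields}

def load_deck(root: Path) -> dict:
    # Keying by stable ID makes a re-import an update rather than a duplicate.
    return {note["id"]: note
            for note in (parse_note(p) for p in root.rglob("*.txt"))}

# Demo: build a tiny deck on disk and load it back.
root = Path(tempfile.mkdtemp())
(root / "animals").mkdir()
(root / "animals" / "note-01HX.txt").write_text(
    "Word: hund\nMeaning: dog\n", encoding="utf-8")

deck = load_deck(root)
print(deck["note-01HX"]["fields"])   # {'Word': 'hund', 'Meaning': 'dog'}
```

Because every note is a small text file with a stable ID, git diffs and pull requests against a shared deck become readable, which is exactly what the SQLite blob prevents.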
I think a lot of times, people are here just to have a conversation. I wouldn't go so far as to say someone who is pontificating and could have done a web search to verify their thoughts and opinions is being lazy.
This might be a case of just different standards for communication here. One person might want the absolute facts and assumes everyone posting should do their due diligence to verify everything they say, but others are okay with just shooting the shit (to varying degrees).
I've seen this happen too. People will comment and say in the comment that they can't remember something when they could have easily found that information again with ChatGPT or Google.
Exactly, it was the first thing you'd do when you launched Word. Nowadays, the only option available would be "See less of Clippy" and he'd be back in the next session.
[Remind me again in an hour] [Remind me again in 15 minutes] [Changed my mind, keep him]
May everyone who makes such dialogues be afflicted with severe depression and be forced to ruminate at night about how empty they feel despite their "good" job and high salary.
I reckon it would be more like "Pay subscription to see slightly less of Clippy" with some small print explaining that "less" is relative to other people's future experience, not your current one.
People think AGI is far away, but I don't think HN commenters have this kind of self-awareness:
> Cue 200 comments alternating armchair Descartes and pop neuroscience, then a top post linking a blog from 2011 that “settles it,” and a mod quietly locks tomorrow.
I don't think this is a serious test. It's just an art piece to contrast different LLMs taking on the same task, and against themselves since it updates every minute. One minute one of the results was really good for me and the next minute it was very, very bad.
I bet we could draw a throughline from the overly-dramatic writing style to TED Talks and all the way back to Steve Jobs' presentation style. The pregnant pauses. The short sentences. The holding back on making a point, for effect. All traceable to early-2000s product launches.
> I’ve never even been able to make a mobile app before. My skillset was just a bit too far off and my background more in the backend.
> Now I have a complete app thanks to AI. And I do feel a sense of accomplishment.
AI is such an existential threat to many of us since we value our unique ability to create things with our skills. In my opinion, this is the source of immediate disgust that a lot of people have.
A few months ago, I would've bristled at the idea that someone was able to write a mobile app with AI as that is my personal skillset. My immediate reaction when learning about your experience would've been, "Well, you don't really know how to do it. Unlike myself, who has been doing it for many, many years."
Now that I've used AI a bit more, like yourself, I've been able to do things I wasn't able to do before. That's changed how I look at skills now, including my own. I've recognized that AI is devaluing our unique skillsets. That obviously doesn't feel great, but at the same time I don't know if there's much to be done about it. It's just the way things are now, so the best I can do is lean into the new tools available and improve in other ways.
It's entirely possible that this will turn us all into much less of a special highly-compensated profession, and that would suck.
Although when you say "AI is devaluing our unique skillsets," I think it's important to recognize that even without AI, it's not our skillsets that ever held value.
Code is just a means of translating business logic into an automated process. If we had the same skillset but it couldn't make the business logic serve the business, it would have no value.
Maybe this is a pedantic distinction, but it's essentially saying that the "engineer" part of "software engineer" is the important bit - the fact that we are just using tools in our toolbox to get whatever "thing" needs to get done.
At least for now, it seems like actually possessing a skillset is helpful and/or critical to using these tools. They can't handle large context, and even if that changes, it still seems to be extremely helpful to be able to articulate on a detailed level what you want the AI to develop for you.
An analogy: imagine putting your customer in front of a development team to tell them how to build the application, versus putting a staff engineer or experienced product manager in front of them. The AI might be able to complete the project in both cases, but with the experienced person it's going to avoid a lot of pitfalls and work faster and better.
This analogy reminds me of a real-life instance where I built something that someone above director level spelled out exactly, essentially dictating the architecture I was to follow. They never really saw my code; they might even have hated it. I was like an AI to them. And indeed, because they dictated a very good architecture to me, I was able to follow it almost blindly and ran into very few problems.
Programming really is fascinating as a skill because it can bring so much joy to the practitioner on a day-to-day problem-solving level while also providing much value to companies that are using it to generate profit. How many other professions have this luxury?
As a result, though, I think AI taking over a lot of what we're able to do has the dual issue of making your day to day rough both as a personally-enriching experience but also as a money-making endeavor.
I've been reading The Machine That Changed the World recently and it talks about how Ford's mass production assembly line replaced craftsmen building cars by hand. It made me wonder if AI will end up replacing us programmers in a similar way. Craftsmen surely loved the act of building a vehicle, but once assembly lines came along, it no longer made sense to produce cars in that fashion since more unskilled labor could get the job done faster and cheaper. Will we get to a place where AI is "good enough" to replace most developers? You could always argue that craftspeople could generate better code, but I can see a future where that becomes a luxury and unnecessary if tools do most of the work well enough.