cold take speculation: the architecture astronautics of the Java era probably destroyed a lot of the desire for better abstractions and careful thinking, in favor of copy-pasting, minimalism and open standards
hot take speculation: we base a lot of our work on open source software and libraries, but a lot of that software is cheaply made, or made for the needs of a company that happens to open-source it. the pull of the low-quality "standardized" open source foundations is preventing further progress.
Has anyone measured whether doing things with AI leads to any learning? One way would be to measure the % improvement in time-to-functional-result on subsequent related tasks, with and without AI. Two more data points could be added to that: with-AI followed by without-AI, and without-AI followed by with-AI.
I'm only a data point, but some years ago I spent a whole year working through a mathematics book that was above my level at the time. It was painful and I only grasped parts of it.
I went through the same book again this year, this time spending a lot of time questioning an LLM about concepts I couldn't grasp: copy-pasting sections of the book and asking it to rewrite them for my understanding, asking for quick visualization scripts for concepts, asking for corrected examples, concrete examples, ways to link several chapters together, etc.
It was still painful, but in 2 months (~8-10h a day) I covered the book in far more detail than I ever could some years ago.
Of course I still had some memory of the content from back then, and I'm better prepared now, having studied other things in the meantime. Also, the model sometimes gives bad explanations and bad connections, so you have to stay really critical of the output (the same goes for the plotting code).
But years ago I missed a lot of deep insights, and now everything is perfectly clear after two months.
The ability to create instant plots for the concepts I was trying to learn was invaluable, as was then asking the model to tweak the plot, change the data, use some other method, compare methods, etc.
Note: for every part, once I finally grasped it, I rewrote it in my own notes and style, and often asked the model to critique my notes and improve them a bit. But every concept I wrote down, I truly understand deeply.
Of course, this is not coding, but for learning at least, LLMs were extremely helpful for me.
From this experiment I would say at least a 6x speedup.
Honestly I feel I have never learned as much as I do now.
LLMs remove quite a lot of fatigue from my job. I am a consultant/freelancer, but even as an employee, large parts of my job were not writing code, but taking notes and jumping from file to file to connect the dots. Or trying to figure out the business logic of some odd feature. Or the endless googling for answers buried deep inside some GitHub issue, or figuring out some advanced regex or Unix tool pattern. Or writing plans around the business logic and implementation changes.
LLMs removed the need for most of that, which means I'm less fatigued when it comes to reading code and focusing on architectural and product work. I can experiment more, and I have the mental energy to do some leetcode/codewars exercises, where incidentally I also learn things by comparing my solution to others', which I can then apply back to my own code. I'm less bored and fatigued by the details, which leaves more time to focus on the design.
If I want to learn about some new tool or database, I'm less concerned with the details of setting it up, exploring its features, or reading outdated, poorly written docs, when I can clone the entire project into a git subtree and give the source code to the LLM, which can answer me by reading the signatures, implementations and tests.
Honestly, LLMs remove so much mental fatigue that I've been learning a lot more than I ever have. Yet naysayers conflate LLMs as a tool with some Lovable-style crap vibecoding; I don't get it.
Great article. Really advances the thinking on error handling. Rust already has a head start compared to most other languages with Result, expect and anyhow (well, color_eyre and tracing), but there was indeed a missing piece tying together error handling "actionability" with "better than stack trace" context for the programmer.
With regards to context for the programmer, I still think tracing and color_eyre (see https://docs.rs/color-eyre/latest/color_eyre/) ultimately form a good-enough pair for service-style applications, with tracing providing the missing additional context. But it's nice to see a simpler approach to actionability.
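For anyone who hasn't tried that pairing, here's a minimal sketch of what I mean (not from the article; the crate wiring is from memory, so double-check the color-eyre docs, and it assumes the color-eyre, tracing, tracing-error and tracing-subscriber crates are in Cargo.toml):

```rust
use color_eyre::eyre::{Result, WrapErr};
use tracing::instrument;
use tracing_error::ErrorLayer;
use tracing_subscriber::prelude::*;

#[instrument] // this span (including `path`) ends up in the report's span trace
fn load_config(path: &str) -> Result<String> {
    std::fs::read_to_string(path)
        .wrap_err_with(|| format!("failed to read config file `{path}`"))
}

fn main() -> Result<()> {
    // Pretty, colored reports with span traces instead of bare panics/backtraces.
    color_eyre::install()?;

    // ErrorLayer is what lets color_eyre capture the tracing span trace.
    tracing_subscriber::registry()
        .with(tracing_subscriber::fmt::layer())
        .with(ErrorLayer::default())
        .init();

    let _cfg = load_config("service.toml")?;
    Ok(())
}
```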
I don't think there is anything in Go (the language) that helps achieve this - it's mostly cultural (the Go creators and community being very outspoken about handling errors).
In fact, the easiest thing to do in Go is to ignore the error; the next easiest is to early-return the same error with no additional context.
It does expect you to use `wrap_err` to get the benefits, though. That's still easier than what Go requires you to do for good contextual errors, and the comparison is even more favorable if you also want reasonable-looking formatting out of the Go version.
IMO you need both things: culture to make it happen, and technology to make it easy and reasonable looking. Rust lacks the former to some degree; Go lacks the latter to some degree (see e.g. kustomize error formatting - everything ends up on a single line).
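As a rough illustration of the "easy and reasonable looking" side (my own sketch, not the article's): each layer adds one `wrap_err` line, and the report renders the chain across multiple lines rather than mushing everything onto one.

```rust
use color_eyre::eyre::{Result, WrapErr};

fn read_settings() -> Result<String> {
    std::fs::read_to_string("settings.yaml").wrap_err("failed to read settings.yaml")
}

fn start_service() -> Result<()> {
    let _settings = read_settings().wrap_err("could not start the sync service")?;
    Ok(())
}

fn main() -> Result<()> {
    color_eyre::install()?;
    // With settings.yaml missing, the report shows each layer of context on its
    // own line, roughly: "could not start the sync service" -> "failed to read
    // settings.yaml" -> the underlying io::Error.
    start_service()
}
```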
I wonder if it would've felt more natural if the "part 2s" of the puzzles became separate days instead. (Still 12 days' worth of puzzles, but spread out across 24 days, with maybe one extra, smaller, easier puzzle for the last day to relax.)
Then the 24h between part 1 and part 2 could be spent trying to predict what part 2 will be, or generalizing the solution to see if you can handle whatever part 2 throws at you. For instance, for the lanternfish puzzle ( https://adventofcode.com/2021/day/6 ) one would expect part 2 to just use a higher day count, so if one solved part 1 iteratively (simulating each fish) one could make an educated guess and prepare a fast part 2 solve.
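For concreteness, a rough Rust sketch of that generalization: count fish per timer value instead of simulating each one, so part 2's larger day count costs nothing extra (the starting timers here are just the example from the puzzle page):

```rust
fn lanternfish_after(initial_timers: &[usize], days: u32) -> u64 {
    // buckets[t] = number of fish whose internal timer is currently t (0..=8)
    let mut buckets = [0u64; 9];
    for &t in initial_timers {
        buckets[t] += 1;
    }
    for _ in 0..days {
        let spawning = buckets[0];
        buckets.rotate_left(1); // all timers tick down; the old 0-count wraps to index 8 (newborns)
        buckets[6] += spawning; // parents reset their timer to 6
    }
    buckets.iter().sum()
}

fn main() {
    let example = [3, 4, 3, 1, 2];
    println!("after 80 days:  {}", lanternfish_after(&example, 80));  // part 1
    println!("after 256 days: {}", lanternfish_after(&example, 256)); // part 2
}
```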
I was thinking the same! This would be great: a puzzle every other day. But I trust the organizers are going to do a great job and we will have fun either way.
> In most cases, such attacks are discovered quickly and the malicious versions are removed from the registry within an hour.
By delaying the availability of infected packages (by "aging" dependencies), we're only pushing back the time until they're detected, and reducing the number of samples that lead to detection. Infections that lie dormant are even more dangerous than explosive ones.
The only benefit would be if, during this freeze, registry maintainers were successfully pruning malware before it hits the fan, with the freeze giving scanners more time to finish their verification pipelines. That's not happening afaik: NPM is crazy fast going from `npm publish` to worldwide availability, and scanning is insufficient by many standards.
Afaict many of these recent supply chain attacks _have_ been detected by scanners. Which ones flew under the radar for an extended period of time?
From what I can tell, even a few hours of delay between publication and dependencies actually being pulled, to give security tools a chance to find the malware, would have stopped all (?) of the recent attacks in their tracks.
I don't think that's contrary to the article's claim: the current tools are so bad and tedious to use for repetitive work that AI is helpful with a huge amount of it.
Try actually doing it, realise how far the outcome is from what the blog posts describe the vast majority of the time, and feel dread about the state of (social) media instead.
I think agents have a curve: they're kinda bad at bootstrapping a project, very good when used in a small-to-medium-sized existing project, and then it slowly goes downhill from there as size increases.
Something about a brand-new project often makes LLMs drop to "example grade" code, the kind you'd never put in production. (An example: Claude implemented per-task file logging in my prototype project by pushing to an in-memory array of log lines, serializing the entire array to JSON, and rewriting the entire file, for every logged event.)
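To make the contrast concrete (purely illustrative, assuming a Rust-ish prototype with serde_json available; the actual project may look nothing like this), the difference between that "example grade" version and an append-only log:

```rust
use std::fs::OpenOptions;
use std::io::Write;

type AnyError = Box<dyn std::error::Error>;

// Roughly what the agent produced: keep every line in memory, re-serialize the
// whole array and rewrite the whole file on every single logged event.
fn log_event_rewrite_everything(all_lines: &mut Vec<String>, msg: &str) -> Result<(), AnyError> {
    all_lines.push(msg.to_string());
    let json = serde_json::to_string_pretty(all_lines.as_slice())?;
    std::fs::write("task.log.json", json)?; // O(total log size) work per event
    Ok(())
}

// The production-grade shape: append one JSON line, constant work per event.
fn log_event_append(msg: &str) -> Result<(), AnyError> {
    let mut file = OpenOptions::new()
        .create(true)
        .append(true)
        .open("task.log.jsonl")?;
    writeln!(file, "{}", serde_json::json!({ "msg": msg }))?;
    Ok(())
}
```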