Hacker News | blinkymach12's comments

The struggle I found with coding as a CTO was that executive-team priorities would come along and take precedence over my ability to maintain my coding contributions, and in my experience no developer team wants to inherit and maintain code from their CTO. I found myself ultimately more drawn to coding tasks that improved developer experience or validated proof-of-concept work.

I agree with the author that CTO positions are incredibly varied, so I appreciate them sharing what works for them personally and in their organization, even if it doesn't match what has worked for me.


I like this insight. We kind of always knew that we wanted good docs, but they're demotivating to maintain if people aren't reading them. LLMs by their nature won't be onboarded to the codebase with meetings and conversations, so if we want them to have a proper onboarding then we're forced to be less lazy with our docs, and we get the validation of knowing they're being used.


We're in a transition phase today where agents need special guidance to understand a codebase, guidance that goes beyond what humans need. Before long, I don't think they will. I think we should focus on making our own project documentation comprehensive (e.g. the contents of this AGENTS.md are appropriate to live somewhere in our documentation), but we should always write for humans.

The LLM's whole shtick is that it can read and comprehend our writing, so let's architect for it at that level.


It's not just understanding the codebase, it's also stylistic things, like "use this assert library to write tests", or "never write comments", or "use structured logging". It's just as useful --- more so even --- on fresh projects without much code.


Honestly, everything I have written in markdown files as AI context fodder is stuff that I write down for human contributors anyway. Or at least stuff I want to always write down, but maybe only halfway do. The difference now is it is actually being read, seemingly understood, and often followed!


So true. I find myself doing a lot more documentation these days as it is actually having a direct visible benefit. There’s a bit of a mirage here, but hey it’s getting me to document so shhh.


... most of which would also be valuable information to communicate when onboarding new devs.


Yeah I agree. I think the best place for all this lives in CONTRIBUTING.md which is already a standard-ish thing. I've started adding it even to my private projects that only I work on - when I have to come back in 3 or 4 months, I always appreciate it.


I agree.

My current thought is that (human) contributors should be encouraged to `ln -s CONTRIBUTING.md CLAUDE.local.md` or whatever in their local checkout for their agent of choice, have that .gitignored, and all contributors (human and LLM) will read and write to the same file.
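
A minimal sketch of that setup, assuming Claude Code's CLAUDE.local.md filename (substitute whatever file your agent of choice reads):

```shell
# Demo in a scratch repo: humans and the agent read the same file.
git init -q demo && cd demo
echo "# How to contribute" > CONTRIBUTING.md

# Point the agent's per-user file at the shared contributor docs.
ln -s CONTRIBUTING.md CLAUDE.local.md

# Keep the personal symlink out of version control without
# touching the shared .gitignore.
echo "CLAUDE.local.md" >> .git/info/exclude
```

Using `.git/info/exclude` keeps the ignore rule local to your checkout; adding `CLAUDE.local.md` to the project's `.gitignore` works just as well if the team agrees on the convention.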

The "new" thing would be putting CONTRIBUTING.md into subfolders as appropriate - which could often be quite useful for humans anyway.


Yeah I think having a docs/contributing folder or equivalent, essentially referenced/linked in the CONTRIBUTING.md makes a bunch of sense, but I'd leave that kind of thing more or less up to the project


If there were already a universal convention on where to put that stuff, then probably the agents would have just looked there. But there's not, so it was necessary to invent one.


Reality is just that people neglected onboarding docs until LLM-based coding agents put them in a position to directly benefit from having more knowledge of the codebase explicitly written down.


Common sense takes time to sink in.


Stylistic preferences can usually be inferred by just looking at the code. If code is mid-refactor there may be inconsistencies, but an ideal AI coding agent could also look through the git history.


I suspect machine readable practices will become standard as AI is incorporated more into society.

A good example is autonomous driving and local laws / context. "No turn on red. School days 7am-9am".

So you need: where am I, when are school days for this specific school, and what datetime it is. You could attempt to gather that through search. Though more realistically, I think the municipality will make the laws require less context, or some machine-readable transfer of information (e.g. a QR code) will be on the sign. If they don't, there's going to be a lot of rule breaking.


Very strong "reverse centaur" vibes here, in the sense of humans becoming servants to machines, instead of vice versa. Not that I think making things more machine-readable is a waste of time, but you have to keep in mind the amount of human time sacrificed.


Well, it wouldn't even be the first time.

We've completely redesigned society around cars - making the most human populated environments largely worse for humans along the way.

Universal sidewalks (not really needed with slow moving traffic like horses and carts - though nice even back then), traffic lights, stop signs, street crossing, interchanges, etc.


As a cyclist, I’m with you 100%. Unfortunately we’re probably going to do it again with self-driving cars, with segregated lanes, special markers, etc.


A pessimistic look at self driving cars: https://www.youtube.com/watch?v=040ejWnFkj0&t=3148s

If we end up where the video presents, humans don't deserve technology of any kind.


Why is it always "humans don't deserve"? The vast, vast majority of people have nothing to do with the capital flows and political structures that result in outcomes like this. What choice does a worker, priced out of the city where he works and forced to commute an hour each way, have in this? Yes, they can "vote" for better transit, but as we can see in California that's not enough to actually get said transit. And that's just the tip of the iceberg. The poor, the homeless, hell the vast majority of the world population that doesn't live in the 'garden': in what way do the choices of Silicon Valley, a handful of billionaires, and a small clique of DC politicians have any bearing on what they do or do not deserve?

Not to be too harsh, but this sentiment -- that the successes of the ruling class are theirs to boast, but their failures are all humanity's shame -- is so pervasive and so effective at shielding rightful blame from said ruling class that I just cannot help but push back when I see it.


Those particular signs are just stupid. The street should be redesigned with traffic calming, narrowing and chicanes so that speeding is not possible.

Slapping on a sign is ineffective


Maybe for new schools. Old schools don't have the luxury of being able to force adjacent road design changes in most cases. Also, I've frequently seen school zones extended out in several directions away from the school to make heavily trafficked intersections feeding towards the school safer. Safer for pedestrian and motorist alike. The real world is generally never so black and white. We have to deal with that gray nuance all the time.


Of course they can. Streets get redesigned all the time. They get repaved every couple decades at worst.

I’m saying this because it seemed silly to me to be dreaming up some weird system of QR codes or LLM readable speed limits instead of simply making the street follow best practices which change how humans drive for the better _today_.


Completely agree


That seems anachronistic, form over function. Machines should be able to access an API that returns “signs” for their given location. These signs don’t need any real world presence and can be updated instantly.


Also see this happening, what does that mean for business specifications? Does it become close to code syntax itself?


I think they'll always need special guidance for things like business logic. They'll never know exactly what it is that you're building and why, what the end goal of the project is without you telling them. Architectural stuff is also a matter of human preference: if you have it mapped out in your head where things should go and how they should be done, it will be better for you when reading the changes, which will be the real bottleneck.


Indeed I have observed that my coworkers "never know exactly what it is that [we]'re building and why, what the end goal of the project is without [me] telling them"


I agree with this general sentiment, but there might be some things you want to force into the context every time via a specific agent file.


Not at all. Good documentation for humans works well for models too, but they need so much more detail and context than humans to be reliable that it calls for a different style of description.

This needs to contain things that you would never write for humans. They also do stupid things which need to be corrected by these descriptions.


One of the most common usages I see from colleagues is to get agents to write the comments so you can go full circle. :)


Unless we write down what we often consider implicit, the LLM will not know it. It might deduce some implicit requirements from the code, but unlikely all of them. Thus making the requirements explicit is the way to go.


Yes! That was precisely my point here: https://news.ycombinator.com/item?id=44837875


Better to work with the tools we have instead of the tools we might one day have. If you want agents to work well today, you need to build for the agents we have today.

We may never achieve your future where context is unlimited, models are trained on your codebase specifically, and tokens are cheap enough to use all of this. We might have a bubble pop and in a few years we could all be paying 5-10X current prices (read: the actual cost) for similar functionality to today. In that reality, how many years of inferior agent behavior do you tolerate before you give up hoping that it will evolve past needing the tweaks?


> We're in a transition phase today where agents need special guidance to understand a codebase that go beyond what humans need. Before long, I don't think they will.

This isn't guaranteed. Just like we will never have fully self-driving cars, we likely won't have fully human quality coders.

Right now AI coders are going to be another tool in the tool bucket.


I don't think the bar here is a human level coder, I think the bar is an LLM which reads and follows the README.md.

If we're otherwise assuming it reads and follows an AGENTS.md file, then following the README.md should be within reach.

I think our task is to ensure that our README.md is suitable for any developer to onboard into the codebase. We can then measure our LLMs (and perhaps our own documentation) by if that guidance is followed.


Have you taken a Waymo?


Waymo uses a bespoke 3D data representation of the SF roads, does it not? The self-driving car equivalent of an AGENTS.md file.


The limited self-driving cars, with a remote human operator? no, I never have.


This rather underplays the experience of riding a Waymo. Where it works, it works: you get in and it takes you to the place, no human intervention required at any point.

By analogy, the first hands-off coding agents may be like that: they may not work for everything, but where they do, they could work without human intervention.


"where it works, it works" by that metric we already have agents which don't need any guidance to program


I'd say they're closer to 2010s self-driving cars; they still need frequent human intervention, even when on the happy path, to make sure they don't make a mess of things.


This is a rather dismissive response considering the progress they’ve made over the past few years. The other commenter is correct that they use highly detailed maps but you are incorrect as they do not have a remote human operator.

I find them more enjoyable than Uber. They’ve already surpassed Lyft in SF ridership and soon they will take the crown from Uber.


> you are incorrect as they do not have a remote human operator

Yes, they do, the term to search is “remote assistance operator”. e.g. https://philkoopman.substack.com/p/all-robotaxis-have-remote...


That’s phone-a-friend, not someone remotely driving the car.


That’s irrelevant though. If the system requires human intervention, then it’s not fully autonomous by definition. See https://rodneybrooks.com/predictions-scorecard-2025-january-... for example:

> The companies do not advertise this feature out loud too much, but they do acknowledge it, and the reports are that it happens somewhere between every one to two miles traveled.

That’s… not very autonomous.


It’s not irrelevant because those are fundamentally different modes of operating / troubleshooting. You say they have someone drive it. I say they don’t. We aren’t arguing about pure autonomy, we are arguing about the method by which humans resolve the problems.

Furthermore 2 miles of autonomous driving is… autonomous. And over time that will become 3 then 4 then 5. Perhaps it never reaches infinite autonomy but an hour of autonomous driving is more than enough to get most people most places in a city and I’d bet you money that we’ll reach that point within a decade.


> You say they have someone drive it.

I didn't say that. But they're not fully autonomous.

> We aren’t arguing about pure autonomy, we are arguing about the method by which humans resolve the problems.

This whole subthread started with the assertion:

> Just like we will never have fully self-driving cars...

So we did start out by discussing whether current Waymo is fully autonomous or not. It then devolved into nit-picking, but that was where the conversation started.

FWIW I agree that Waymo is an amazing achievement that will only get better. I don't know (or care, frankly) if they will ever be fully autonomous. If I could, I'd buy one of those cars right now, and pay a subscription to cover the cost of the need for someone to help the car out when it needs it. But it's incorrect to say that they don't need human operators, when they clearly currently do.


They literally drive the car.


From what I’ve read they give it context to troubleshoot. They aren’t piloting it.


> Just like we will never have fully self-driving cars, we likely won't have fully human quality coders.

“Never is a long time...and none of us lives to see its length.” Elizabeth Yates, A Place for Peter (Mountain Born, #3)

“Never is an awfully long time.” J.M. Barrie, Peter Pan


This is mostly true if the existing codebase is largely self documented, which is rare


This applies to MCP too.


Here's a prompt I wrote a few days ago for codex:

  Analyze the repository and add a suitable agents.md
It did a decent job. I didn't really have much to add to that. I guess having this file is a nice optimization, but obviously it doesn't contain anything the agent wasn't able to figure out by itself. What's really needed is a per-repository learning base that gets populated with facts the agent discovers during its many experiments with the repository over the course of many conversations. It's a performance optimization.

The core problem is that every conversation is like Groundhog Day. You always start from scratch. AGENTS.md is a stop-gap solution for that problem. ChatGPT actually has some notional memory that works across conversations. But it's a bit flaky, slow, and limited. It doesn't really learn across conversations.

That, btw, is a big missing piece on the path to AGI. There are some imperfect workarounds, but a lot of knowledge is lost between conversations. And the trick of just growing the amount of context we give to our prompts doesn't seem like the solution.


I see the groundhog day problem as a feature, not a bug.

It's an organizational challenge, requiring a top-level overview and easy-to-find sub-documentation - and clear directives to use them when the AI starts architecting from a fresh start.

Overall, it's a good sign when a project is understandable in small independent chunks that don't demand a programmer/llm take in more context than was referenced.

I think the sweet spot would be all agents agreeing on a MUST-READ reference syntax inside comments & docs that, through simple scanning, forces the referenced file into the context. eg

// See @{../docs/payment-flow.md} for the overall design.
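
A scanner for that hypothetical @{...} marker (the syntax is this comment's own invention, not an existing standard) could be a short script that collects the referenced docs for preloading:

```shell
#!/bin/sh
# scan-refs.sh -- list every doc referenced via the hypothetical
# @{path} marker in the given source files, deduplicated, so a
# wrapper can force-load those docs into the agent's context.
grep -oh '@{[^}]*}' "$@" | sed 's/^@{//; s/}$//' | sort -u
```

Run as e.g. `sh scan-refs.sh src/*.ts`; for the example comment above it would print `../docs/payment-flow.md`.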


Your prompt is pretty basic. Both Claude Code and GitHub Copilot have similar features. Claude Code has `init`, which has a lot of special sauce in the prompt to improve the CLAUDE.md. And GitHub Copilot added a self-documenting prompt as well that runs on new repos; you can see their prompt here: https://docs.github.com/en/copilot/how-tos/configure-custom-...

Reading their prompt gives ideas on how you can improve yours.


I had the same thought as I read this example. Everything in the AGENTS.md file should just be in a good README.md file.


My READMEs don't have things like "don't run the whole test suite unless I instruct you to because it will take too long; run targeted tests instead".


Why not? "For most development we recommend running single/specific tests since the whole suite is slow/expensive." sounds like a great thing to put in the readme.


That seems exactly like something you would want to tell another developer


You're going to include specific coding style rules in your README? Or other really agent-specific things like guidance about spawning sub-agents?

They are separate for a good reason. My CLAUDE.md and README.md look very different.


Why would you publish agent specific things to your codebase? That's personal preference and doesn't have anything to do with the project.


README often contains only basic context for the project and instructions for basic tasks like running it and building it from source. If additional information for developers, like coding conventions, is short enough compared to the rest of the README then it sometimes gets added there too, but if there's a lot of it then it's frequently kept elsewhere to prevent README from getting overwhelming for end users and random people just checking out the project.


I don't think anything requires a README.md to be monolithic. They often provide the introductory material that you mention here, then link out to other appropriate files for contribution guidelines, etc.


To share the most effective workflows so people don't have to muddle around figuring out what to do?


You're going to try to tell people how to code with agents in the readme? Why?


It should not contain personal preference. It should contain project conventions.

Project guidelines, how to build your project, where to find or implement different types of features, are not personal preference. If different members of your team disagree on these things, that is a problem.


I like it! I spun up a little remixable Glitch project based on your demo so that I could play with it in a web editor. Thanks for sharing. https://glitch.com/~fullsoak


Wonderful!!! I wanted to provide a live demo. I tried with Deno Deploy, but I hit a brick wall there because of some unintended blocker (ironically, haha): https://github.com/denoland/deploy_feedback/issues/802

My next best choice was to extend the support to Bun, and then deploy it with Render.com: https://fullsoak.onrender.com/

I wasn't aware Glitch can also support Deno. So, sincere appreciation for your Glitch example <3 <3


Thanks! Publication was February 2024.


I'm very much an environmentalist and am very concerned about climate change, and I feel like this article completely put me at ease about sea level rise. I suspect this was the opposite of the intent.

I feel like in comparison to intensified weather phenomena and especially heat waves, sea level rise sounds very manageable.


Author here. That was definitely not my intent! How can a sea level rise of potentially 1.6 metres (approx 5 feet) within the next 75 years, possibly put you at ease, or sound "very manageable"?!


While I can’t speak for GP, I had a similar reaction - though admittedly I wouldn’t describe it as “relieved,” more of an “okay, good to know this isn’t anywhere near the top of the priority list” feeling.

Essentially, in comparison to the other potential/likely effects of climate change I’m aware of (mass deaths of pollinators causing a collapse of the global food infrastructure, large percentages of the world’s arable land becoming non-viable leading to mass famine, heat waves bringing lethal wet-bulb temperatures to large populated areas, collapse of the AMOC, increased wars and global conflict due to space pressure, large-scale droughts and water scarcity, etc) it just doesn’t seem that bad. It’s awful and terrifying, to be clear, but it doesn’t really compare to some of the other effects we’re going to be dealing with over the same timeframe.


Fair enough. My point in penning the article was simply: "sea level rise to date has been not much, sea level rise yet to occur is a lot". I don't disagree with your argument, that "a whole lot more sea level rise" may be the least of our problems. Although, sea level rise is interrelated with many of the other effects that you mentioned. It will result in a loss of land (duh). A lot of that lost land will be once-very-fertile land. So it will be a big part of the food insecurity picture. Increased salinity will be another bad one, and most of that extra salt will come from the risen-up sea getting into fresh water tables. And that in turn will be a big cause of fresh water scarcity. So, sea level rise is about much more than just "millions of houses (and some entire countries!) will be under water".


Logically, if sea level rise of 5-9" somewhere has resulted in imperceptible rise anywhere we care enough about to have pictures of (zero sea level rise on landmarks), then extrapolating that out to the next 100 years' expected rise of 15" more, I expect there to again be an almost imperceptible rise in the sea anywhere we care about.


I also had that impression.

The headline worked for me, and then it sort of teased me along with details that ultimately ended without a conclusion and a feeling of "why did I just read this?"


From memory and a little grepping of the Weekly Kiwi archives, I found: "Project Null Terminator", Aardvark, B??, Caribou, Dingo, E??, Flying Fox, Giganotosaurus, ??


Aardvark'd: 12 Weeks with Geeks[1] is a delightful time capsule of 2005 software development. It includes interviews with @pg, the Reddit founders, and other delights. It's available on YouTube now.

When I interviewed at Fog Creek, they had a DVD copy of Aardvark'd in a care package in my hotel room. I watched it that night, and for my interview day in the morning it felt like everyone I interviewed with was a movie star. Sneaky plan, Joel. Well executed.

[1]: https://www.youtube.com/watch?v=0NRL7YsXjSg

