In my experience, code validation (unit testing, code review, manual testing, etc.) has mostly been a bigger bottleneck than producing code. This means that faster code generation won't produce significant gains in throughput unless validation speeds up too. In my workplace, I've seen evidence that the people showing the biggest productivity gains from AI coding are now shipping enormous commits that are barely getting any validation. Given the Zeitgeist, others are for some reason more lenient towards that than they normally would be (or should be).
Sorry, I wasn’t giving a serious answer. It’s just not as amusingly worded as I thought.
Seriously, though, your question is one of those “how long is a piece of string” sort of questions. Just like any other software quality question, it depends on context, competence, goals, market dynamics, organizational culture, project timelines, team expertise, etc.
Do people pay their bills on time? Do people wear seatbelts? Do people brush their teeth for the full two recommended minutes? Depends, depends, depends.
Sorry, I didn't realise you weren't the OP. I was really asking the OP as they said they had large productivity gains from using AI to code. But if you're a professional developer, the same question can be answered by you: do you specifically review all AI generated production code?
In my own case 100% of my code is reviewed by humans (generally me), and that IMO is the only sensible option, and has been the standard since I started coding commercially 33 years ago. I don't use AI to generate code though, other than a few experiments, as I don't really need to write much code these days.
If you’re going to spam your product, you could have the decency to have literally any other contributions on your account. It’s literally just advertisements and I’ll bet a straw penny they’re generated by a chatbot.
Yes, it's going to cause a lot of confusion and missed meetings. At the moment everyone says "pacific time", but now that will mean two different things.
I think we'll need to say Vancouver time or California time.
In my professional experience, having needed to work with relatively unsophisticated people across many time zones, the only thing that worked consistently was "[City] time". That way people could always check 'what time is it in X now' or 'when it's X in [City], what time is it here', and get correct responses.
Descriptors like "Mountain time" are too vague, especially when there are various places that do/do not practice DST within that timezone, or there are similarly named time zones internationally. (Australia has Eastern and Central time too, for example, and in summer - which is northern hemisphere winter - they split into four different time zones due to varying DST practices.)
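A quick way to see that southern-hemisphere split, as a sketch using Python's stdlib zoneinfo (assumes current IANA tz data is available on the system): in January, Sydney is on AEDT while Brisbane stays on AEST, and Adelaide/Darwin split the same way, so "Eastern" and "Central" fan out into four distinct offsets.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Mid-January is southern-hemisphere summer, when Australian DST is in effect.
jan = datetime(2024, 1, 15, 12, 0)

for tz in ("Australia/Sydney", "Australia/Brisbane",
           "Australia/Adelaide", "Australia/Darwin"):
    local = jan.replace(tzinfo=ZoneInfo(tz))
    print(tz, local.strftime("%Z %z"))

# Australia/Sydney AEDT +1100
# Australia/Brisbane AEST +1000
# Australia/Adelaide ACDT +1030
# Australia/Darwin ACST +0930
```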
Trying to be overly clever and exactly specify the time zone, e.g. "MDT", leads to lots of subtle mistakes in my experience. Often people will think they know what that is, and then get it wrong. Or their calendar app will helpfully suggest MST and they'll click on it, not noticing the difference. Or they'll just scramble the letters when writing them down and wind up with "NTT time" or "AT&T time" or some such.
EST is Eastern Standard Time. Most of Europe is on CET or CEST depending on time of year. (Somewhat confusingly, the 'S' in that case refers to Summer rather than Standard!)
Mountain time is ambiguous due to Arizona, and yet we still use that phrase. Hawaii-Aleutian time is also ambiguous: the Aleutian Islands observe daylight saving time, but Hawaii doesn't.
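Both ambiguities show up directly in the IANA tz database. A minimal sketch (Python stdlib zoneinfo, assuming current tz data is installed): in July, Denver observes DST but Phoenix doesn't, and Adak does but Honolulu doesn't, so "Mountain time" and "Hawaii-Aleutian time" each cover two different offsets.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Mid-summer, when US DST is in effect where it is observed.
summer = datetime(2024, 7, 1, 12, 0)

for tz in ("America/Denver", "America/Phoenix",
           "America/Adak", "Pacific/Honolulu"):
    local = summer.replace(tzinfo=ZoneInfo(tz))
    print(tz, local.strftime("%Z %z"))

# America/Denver MDT -0600
# America/Phoenix MST -0700
# America/Adak HDT -0900
# Pacific/Honolulu HST -1000
```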
Casual speech doesn't use the city names (like America/Los_Angeles for Pacific time); presumably we'd have Pacific time (America/Los_Angeles) and BC time (an update of the existing America/Vancouver). If Washington's time change ever gets approved, it would presumably become simply Washington time (America/Seattle, maybe?).
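For now the two zones are indistinguishable, which is exactly why the split would be confusing. A small check (Python stdlib zoneinfo, assuming current IANA tz data): America/Los_Angeles and America/Vancouver currently have identical offsets in both winter and summer; if one jurisdiction dropped DST, this comparison would start failing for summer dates.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# One winter date and one summer date, to cover both sides of the DST switch.
for when in (datetime(2024, 1, 15, 12, 0), datetime(2024, 7, 15, 12, 0)):
    la = when.replace(tzinfo=ZoneInfo("America/Los_Angeles"))
    van = when.replace(tzinfo=ZoneInfo("America/Vancouver"))
    print(when.date(), la.utcoffset() == van.utcoffset())

# 2024-01-15 True
# 2024-07-15 True
```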
I feel that it really just gives an explanation of decoherence, but doesn't offer any testable hypothesis for Darwinian pruning and collapse to pointer states.
However, it still doesn't really address the core question of when the collapse actually occurs. All it really seems to add is that the environment is an "observer" and that decoherence actually causes the collapse.