
This seems to have a narrower scope than GitHub Copilot. It generates more lines of code for a more holistic problem, vs. GitHub Copilot, which works as a "more advanced autocomplete" in code editors. Sure, Copilot can synthesize full functions and classes, but for me it's most useful when it suggests another test case's title or writes repetitive code like this.foo = foo; this.bar = bar, etc.

Having used Copilot, I can assure you that this technology won't replace you as a programmer, but it will make your job easier by doing the things that programmers don't like to do as much, like writing tests and comments.



Having used Copilot for a while, I am quite certain it will replace me as a programmer.

It appears to me that when it comes to language models, intelligence = experience * context, where experience is what's encoded in the model and context is the prompt. And the biggest limitation on Copilot currently is context. It behaves as an "advanced autocomplete" because all it has to go on is what regular autocomplete sees, i.e. the last few characters and lines of code.

So, you can write a function called createUserInDB() and it will attempt to complete it for you. But how does it know what DB technology you're using? Or what your user record looks like? It doesn't, and so you typically end up with a "generic"-looking function using the most common DB tech and naming conventions for your language of choice.
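For illustration, here's roughly the kind of generic-looking completion that tends to come back (a hypothetical sketch: the table and column names are my guesses, exactly as they would have to be Copilot's):

```java
public class UserDao {
    // What a context-starved completion typically looks like: it guesses
    // a conventional table name ("users") and columns ("name", "email")
    // because it cannot see the real schema. Returning the SQL text keeps
    // this sketch runnable without a database; real code would use a
    // PreparedStatement rather than string concatenation.
    static String createUserInDB(String name, String email) {
        return "INSERT INTO users (name, email) VALUES ('"
                + name + "', '" + email + "')";
    }

    public static void main(String[] args) {
        System.out.println(createUserInDB("Ada", "ada@example.com"));
    }
}
```

The completion is syntactically fine; the problem is that every schema-specific detail in it is a guess.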

But now imagine a future version of Copilot that is automatically provided with a lot more context. It also gets fed a list of your dependencies, from which it can derive which DB library you're using. It gets any locatable SQL schema file, so it can determine the columns in the user table. It gets the text of the Jira ticket, so it can determine the requirements.

As a programmer a great deal of time is spent checking these different sources and synthesising them in your head into an approach, which you then code. But they are all just text, of one form or another, and language models can work with them just as easily, and much faster, than you can.

And once the ML coding train gets rolling, it'll only get faster. Sooner or later GitHub will have a "Copilot bot" that can automatically take a stab at fixing issues, which you then approve, reject, or fix. And as thousands of these fixes pile up, the training set will get bigger and the model will get better. Sooner or later it'll be possible to create a repo, start filing issues, and rely on the bot to implement everything.


Copilot is cool and all.

I didn't find that reading largely correct but still often wrong code was a good experience for me, or that it added any efficiency.

It does do a very good job of intelligently synthesizing boilerplate for you, but be it Copilot or this AlphaCode, they still don't understand the fundamentals of coding, in the causal sense of how one instruction impacts the space of program states.

Still, these are exciting technologies, but again, there is a big "if" around whether such a machine learning model will materialize at all.


I'm skeptical it'll replace programmers, as in no more human programmers, but I agree in the sense of 100% human programmers -> 50%, 25%, 10% human programmers, with computers doing most of the writing of actual code.

I see it continuing to evolve and becoming a far superior auto-complete with full context, but, short of actual general AI, there will always be a step that takes a high-level description of a problem and turns it into something a computer can implement.

So while it will make the remaining programmers MUCH more productive, thereby reducing the needed number of programmers, I can't see it driving that number to zero.


It will probably change the types of things a programmer does, and what it looks like to be a programmer. The nitty gritty of code writing will probably get more and more automated. But the architecture of the code, and establishing and selecting its purpose in the larger scheme of a business, will probably be more of what programmers do. Essentially, they might just become managers for automated code writers, similar to the military's idea of future fighter pilots relating to autonomous fighters/drones as described in this article:

https://www.newyorker.com/magazine/2022/01/24/the-rise-of-ai...

Maybe. It might never get to that level though.


Yup, I think that's it exactly. I just described this in another comment as a reverse of the evolution that graphic design has undergone in bringing them into programming front-ends.

I can't wait to see how far we're able to go down that path.


I have a feeling this is the correct read in terms of progression. But I'm skeptical it'll ever be able to synthesize a program entirely. I imagine that in the future we'll have some sort of computer language, closer to written language, that will be used by some sort of AI to generate software to meet certain demands, but it might need some manual connections when requirements are hazy, or a more human touch in the UI/UX.


> But I'm skeptical if it'll ever be able to synthesize a program entirely.

Emotional skepticism would carry a lot more weight in a world where AI wasn't constantly doing things that were meant to be infeasible, like placing in the 54th percentile of a competitive programming competition.

People need to remember that AlexNet is 10 years old. At no point in this span have neural networks stopped solving things they weren't meant to be able to solve.


I feel like you're taking that sentence a bit too literally. I read it as "I'm skeptical that AI will ever be able to take a vague human description from a product manager/etc. and solve it without an engineer-type person in the loop." The issue is that humans don't know what they want, and realistically programs require a lot of iteration to get right; no amount of AI can solve that.

I agree with you; it seems obvious to me that once you get to a well-specified solution a computer will be able to create entire programs that solve user requirements. And that they'll start small, but expand to larger and more complex solutions over time in the same way that no-code tools have done.


Google Ambiguity.


> repetitive code like this.foo = foo; this.bar = bar etc...

This sort of boilerplate code is best solved by the programming language. Either via better built-in syntax or macros. Using an advanced machine learning model to generate this code is both error-prone and a big source of noise and code bloat. This is not an issue that will go away with better tooling; it will only get worse.


I don't think I agree. Most people spend more time reading than writing code so programming languages should be optimized to be easier to read whereas tooling should be made to simplify writing code. New syntax or macros sounds like it would make the language harder to read. I agree that an advanced machine learning model for generating boilerplate code isn't the right approach but I also don't think we should extend languages for this. Tooling like code generators and linters are a good middle ground.


> New syntax or macros sounds like it would make the language harder to read.

Often the opposite is true. For example, Java records are far easier to read and understand than the pages of boilerplate they replace.
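As a minimal sketch of that point (Java 16+): one record declaration stands in for the hand-written constructor, accessors, equals, hashCode, and toString of a pre-records data class.

```java
// One line replaces a final class with two fields, a constructor,
// two getters, equals(), hashCode(), and toString().
record Point(int x, int y) {}

public class RecordDemo {
    public static void main(String[] args) {
        Point p = new Point(1, 2);
        System.out.println(p.x());                     // generated accessor
        System.out.println(p.equals(new Point(1, 2))); // generated structural equality
    }
}
```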


That sounds like an issue with how Java was designed. There are plenty of languages that solve Java's boilerplate problems without adding new syntax for records.


If you’ll review my original comment, I never said new syntax. I said better syntax. If your language design leads to a lot of boilerplate in idiomatic use then it needs to be better. Adding new syntax is just putting a bandaid on the problem.


FYI+IMO: Both Ruby and Scala have excellent ways to reduce these issues at the language level, making code easier to both read and write. I don't know either way whether that means you should extend languages to handle it, but at least it's definitively possible to design a language that way from the beginning.

Otherwise yup, agree with you; ML for problematic boilerplate isn't the right approach, but other code generators and linters are really good and get you most of the way there.


it is a very similar argument to the one for powerful IDEs and underwhelming languages. to be fair, it’s not necessarily fruitless - e.g. with smalltalk. i fail to see the analogous smalltalk-style empowerment of language using AI but perhaps something is there.

anyway. programming is automation; automation of programming is abstraction. using AI to write your code is just a bad abstraction - we are used to them


I feel like you are very defensive here and I want to be sure we take time to recognize this as a real accomplishment.

Seriously though, while I do doubt I can be fully replaced by a robot any time soon, it may be the case that soon enough I can write high-level descriptions of programs and hand them off to an AI to do most of the work. This wouldn't completely replace me, but it could make developers 50x more productive. The question is how elastic the market is: can it grow in step with our increase in productivity?

Also, please remember that as with anything, within 5 years we should see vast improvements to this AI. I think it will be an important thing to watch.


Yesterday, I spent several hours figuring out if the business requirement for "within the next 3 days" meant 3 calendar days or 72 hours from now. Then about 10 minutes actually writing the code. Everyone thought my efforts were very valuable.
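The two readings genuinely disagree, which is why it was worth pinning down. A small sketch of the gap (the timezone, example clock, and the "end of the third calendar day" reading are my assumptions, not the parent's actual requirement):

```java
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

public class Deadline {
    // Reading 1: exactly 72 hours from now.
    static Instant plus72Hours(Instant now) {
        return now.plus(Duration.ofHours(72));
    }

    // Reading 2: "3 calendar days", here taken to mean the whole third
    // calendar day still counts, so the deadline is the start of the
    // fourth day in the given zone.
    static Instant threeCalendarDays(Instant now, ZoneId zone) {
        return now.atZone(zone).toLocalDate().plusDays(3)
                .atStartOfDay(zone).toInstant();
    }

    public static void main(String[] args) {
        Instant now = ZonedDateTime
                .of(2022, 2, 2, 18, 0, 0, 0, ZoneOffset.UTC).toInstant();
        System.out.println(plus72Hours(now));                       // 2022-02-05T18:00:00Z
        System.out.println(threeCalendarDays(now, ZoneOffset.UTC)); // 2022-02-05T00:00:00Z
    }
}
```

Starting from 6 p.m., the two interpretations land 18 hours apart, which is exactly the kind of discrepancy that takes hours of requirement-chasing and minutes of coding.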


100%. What makes us what we are is the mindset (in this case, this kind of "attention to detail"); that didn't change with (first) compilers, (then) scripting languages, or (future?) AI-assisted programming.

PS - Lawyers aren't even as detail-oriented as we are, it's surprising.


Really?

Maybe that's true in general, because making a living as a lawyer depends far less on attention to detail as a core skill than making a living as a programmer does. Still, I wonder if that also holds at the high levels of the profession. I get the impression that at the FAANG level, lawyers would compare pretty favorably to programmers in detail orientation, particularly in patent and contract law.

That said, it's just my general impression of what lawyers get up to.

...Hmm, thinking about the contract law thing a bit more. Yeah, I do believe you are right. Lawyers aren't writing nearly as many extremely detail-oriented texts as programmers are on a day-to-day basis. Their jobs are much more around finding, reading, and understanding those things and building stories around them.


The GPT family has already shown more than 50x productivity increase by being able to solve not one, but hundreds and perhaps thousands of tasks on the same model. We used to need much more data, and the model would be more fragile, and finding the right architecture would be a problem. Now we plug a transformer with a handful of samples and it works.

I just hope LMs will prove to be just as useful in software development as they are in their own field.


> but it could make developers 50x productive

More likely it will translate the abstraction level by some vector of 50 elements.


If you make developers 50x more efficient, won't you need 50x fewer developers?


>If you make developers 50x more efficient, won't you need 50x fewer developers?

Developers today are 50X more efficient than when they had to input machine code on punched tape, yet the number of developers needed today is far larger than it was in those times.



But think how large of a job program that would have been.

Hundreds of people manually writing assembly and paid middle class wages. Not a compiler in sight.

In the years leading up to the singularity I’d expect to see a lot of Graeberian “Bullshit Jobs”.

Everyone knows they’re BS but as a society we allow them because we aren’t willing to implement socialism or UBI.


There's no reason to believe that we'll need another 50x more developers, though.


There isn't? I feel like there's still a ton of places software hasn't even touched and not because it doesn't make sense, but because no one's gotten to it. It's not the most profitable thing people could write software for.


Even if not, the original claim was that we may see a 50X decrease and I personally don't think that is likely, pre-Singularity anyway :)


Greater efficiency leads to greater consumption unless demand is saturated. Given software’s ability to uncover more problems that are solvable by software, we’re more likely to build 50x more software.


This happened with the introduction of power tools to set building in Hollywood back in the day - literally this same question.

People just built bigger sets, and smaller productions became financially feasible. Ended up creating demand, not reducing it.


Not necessarily. Demand may be much higher than available supply right now. Tech companies will continue to compete, requiring spending on developers to remain competitive. Software is unlike manufacturing, in that the output is a service, not a widget. Worker productivity in general has not decreased the demand for full work weeks, despite projections in the early 20th century to the contrary. Of course, it is possible that fewer developers would be needed, but I don't think it's likely, yet.


To me it's not about its current capabilities. It's the trajectory. This tech wasn't even a thing 2 years ago. There are billions being poured into it, and every time someone uses these tools there's more free training data.


The big question seems to be whether par with professional programmers is a matter of increasing training set and flop size, or whether different model or multi-model architectures are required.

It does look like we've entered an era where programmers who don't use AI assistants will be disadvantaged, and that this era has an expiration date.



