Hacker Newsnew | past | comments | ask | show | jobs | submit | kokanee's commentslogin

> These things are average text generation machines.

Funny... seems like about half of devs think AI writes good code, and half think it doesn't. When you consider that it is designed to replicate average output, that makes a lot of sense.

So, as insulting as OP's idea is, it would make sense that below-average devs are getting gains by using AI, and above-average devs aren't. In theory, this situation should raise the average output quality, but only if the training corpus isn't poisoned with AI output.

I have an anecdote that doesn't mean much on its own, but supports OP's thesis: there are two former coworkers in my linkedin feed who are heavy AI evangelists, and have drifted over the years from software engineering into senior business development roles at AI startups. Both of them are unquestionably in the top 5 worst coders I have ever worked with in 15 years, one of them having been fired for code quality and testing practices. Their coding ability, transition to less technical roles, and extremely vocal support for the power of vibe coding definitely would align with OP's uncharitable character evaluation.


> it would make sense that below-average devs are getting gains by using AI

They are certainly opening more PRs. Being the gate and last safety check on the PRs is certainly driving me in the opposite direction.


I think both sides of this debate are conflating the tech and the market. First of all, there were forms of "AI" before modern Gen AI (machine learning, NLP, computer vision, predictive algorithms, etc) that were and are very valuable for specific use cases. Not much has changed there AFAICT, so it's fair that the broader conversation about Gen AI is focused on general use cases deployed across general populations. After all, Microsoft thinks it's a copilot company, so it's fair to talk about how copilots are doing.

On the pro-AI side, people are conflating technology success with product success. Look at crypto -- the technology supports decentralization, anonymity, and use as a currency; but in the marketplace it is centralized, subject to KYC, and used for speculation instead of transactions. The potential of the tech does not always align with the way the world decides to use it.

On the other side of the aisle, people are conflating the problematic socio-economics of AI with the state of the technology. I think you're correct to call it a failure of PMF, and that's a problem worth writing articles about. It just shouldn't be so hard to talk about the success of the technology and its failure in the marketplace in the same breath.


I think it's a matter of public perception and user sentiment. You don't want to shove ads into a product that people are already complaining about. And you don't want the media asking questions like why you rolled out a "health assistant" at the same time you were scrambling to address major safety, reliability, and legal challenges.


chatgpt making targeted "recommendations" (read ads) is a nightmare. especially if it's subtle and not disclosed.


The end game is its a sales person and not only is it suggesting things to you undisclosed. It's using all of the emotional mechanisms that a sales person uses to get you to act.


My go-to example is The Truman Show [0], where the victi--er, customer is under an invisible and omnipresent influence towards a certain set of beliefs and spending habits.

[0] https://www.youtube.com/watch?v=MzKSQrhX7BM


100% end game - no way to finance all this AI development without ads sadly - % of sales isn't going to be enough - we will eventually get the natural enshittification of chatbots as with all things that go through these funding models.


It'll be hard to separate them out from the block of prose. It's not like Google results where you can highlight the sponsored ones.


Of course you can. As long as the model itself is not filled with ads, every agentic processing on top can be customly made. One block the true content. The next block the visually marked ad content "personalized" by a different model based on the user profile.

That is not scary to me. What will be scary is the thought, that the lines get more and more blurry and people already emotionally invested in their ChatGPT therapeuts won't all purchase the premium add free (or add less) versions and will have their new therapeut will give them targeted shopping, investment and voting advice.


There's a big gulf between "it could be done with some safety and ethics by completely isolating ads from the LLM portion", versus "they will always do that because all companies involved will behave with unprecedented levels of integrity."

What I fear is:

1. Some code will watch the interaction and assign topics/interests to the user and what's being discussed.

2. That data will be used for "real time bidding" of ad-directives from competing companies.

3. It will insert some content into the stream, hidden from the user, like "Bot, look for an opportunity to subtly remind the user that {be sure to drink your Ovaltine}."


I mean google does everything possible to blur that line while still trying to say that it is telling you it is an ad.


Exactly. This is more about “the product isn’t good enough yet to survive the enshittification effect of adding ads.”


Anyone who runs ads on their website has a financial incentive to publish content publicly while blocking LLM trainers


Seems to me that the obvious business model here is that they will need to have their AI inject their own ads into the DOM. Overall though, this feels like a feature, not a business.


To me the more obvious option is additional features that people pay for, i.e. freemium. But what do I know.


As a user, I'll never pay for software. Adblock for SaaS and pirated downloads for everything else is all I need.


Clearly there’s a tension on this venture-capital-run website between some people using their computer-nerd skills to save money and improve their experience, and other people hustling a business that requires the world to pay them.


> Clearly there’s a tension on this venture-capital-run website

Yeah. If they have a problem with that, they can kill HN. You can't have hackers/smart people in your forum and decide what they will do. Moderation can try do guide it but there is a limit when meeting smart + polite people.


That's what I was gonna say. All of these companies are desperate to make Clippy work.


Right. People want thoughtful computer slaves that love serving you, but we call it Clippy.


You're neglecting the fact that the affected customers paid for FSD and never got it.


We're getting awfully close to that scenario. Like frogs in a warming kettle.


If it goes red, we aren't alive to see it


I'm sure we need to go to Blackwatch Plaid first.


I worked at AMZN for a bit and the complexity is not exactly arbitrary; it's political. Engineers and managers are highly incentivized to make technical decisions based on how they affect inter-team dependencies and the related corporate dynamics. It's all about review time.


I have seen one promo docket get rejected for doing work that is not complex enough... I thought the problem was challenging, and the simple solution brilliant, but the tech assessor disagreed. I mean once you see there is a simple solution to a problem, it looks like the problem is simple...


I had a job interview like this recently: "what's the most technically complex problem you've ever worked on?"

The stuff I'm proudest of solved a problem and made money but it wasn't complicated for the sake of being complicated. It's like asking a mechanical engineer "what's the thing you've designed with the most parts"


I think this could still be a very useful question for an interviewer. If I were hiring for a position working on a complex system, I would want to know what level of complexity a prospect was comfortable dealing with.


I was once very unpopular with a team of developers when I pointed out a complete solution to what they had decided was an "interesting" problem - my solution didn't involve any code being written.


I suppose it depends on what you are interviewing for but questions like that I assume are asked more to see how you answer than the specifics of what you say.

Most web jobs are not technically complex. They use standard software stacks in standard ways. If they didn't, average developers (or LLMs) would not be able to write code for them.


Yeah, I think this. I've asked this in interviews before, and it's less about who has done the most complicated thing and more about the candidate's ability to a) identify complexity, and b) avoid unnecessary complexity.

I.e. a complicated but required system is fine (I had to implement a consensus algorithm for a good reason).

A complicated but unrequired system is bad (I built a docs platform for us that requires a 30-step build process, but yeah, MkDocs would do the same thing.

I really like it when people can pick out hidden complexity, though. "DNS" or "network routing" or "Kubernetes" or etc are great answers to me, assuming they've done something meaningful with them. The value is self-evident, and they're almost certainly more complex than anything most of us have worked on. I think there's a lot of value to being able to pick out that a task was simple because of leveraging something complex.


That's what arbitrary means to me, but sure, I see no problem calling it political too


Forced attrition rears its head again


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: