Hacker News | hdhdhsjsbdh's comments

Acidified oceans, poisonous air, and frequent multibillion-dollar extreme weather events are a small price to pay for a purely hypothetical $2,400 off my next car, which I am forced to own because the same companies that lobby against climate change regulations are the ones that tore up all the public transit infrastructure that would otherwise allow me not to own a car at all. Americans love getting fucked by our corporate overlords, we can’t get enough of it, it’s our way of life.

The US seems culturally ill-equipped to deal with this reality. We have encouraged several generations of people to channel all of their talents into maximizing their individual income, regardless of externalities or impact on their community. There is low trust and minimal social reward for giving back. We idolize the loudest, most ignorant voices only because they are wealthy and famous. In my own work with the next generation of tech workers, this seems obvious. The younger generations see it as a zero-sum game. You only win by making as much money as possible, and the ends justify the means.


I think they should. Let’s kick off some meaningful economic growth in Europe and provide a counter to the increasingly hegemonic, anti-human US tech oligarchs that have reaped all of the financial rewards of algorithmic radicalization and surveillance capitalism for the past 20 or so years. Maybe Europe can imagine something better.


That would require some hard choices and actual hard work. It’s got to get a lot worse before it gets better.


I don't know, you might be underestimating how much damage the orange in charge is really doing to the interests of the US. Change is slow, and the subtle things set in motion are always perceived too late. A simple example would be a small county in Germany saving 5+ million a year thanks to moving away from Microsoft. Add that to the budgets of the many (largely European) open-source projects out there, and you can see things can shift: slowly at first, then rapidly once noticed.


No, I just think we’re underestimating how bad it’s gonna get. The lag of understanding is real.

People are still waffling. It’s got to get bad enough there won’t be any waffling.


Europe needs to roll back all of the socialism if it wants to compete with the US and China. European tech is never going to keep pace if the people who build it only work 35 hours a week and take a year of paternity leave every time they have a kid.


With decades of education cuts, top STEM researchers leaving the country, and immigration coming to a halt, I think you overestimate the future competitive position of the US.


No, we do not need to roll back our humanity. The US population, however, really needs to wake up, start unionizing, and vote for politicians who are not big orange incompetent babies.


How do they compete for actual tech then? Like Airbus.

- A 35-hour week doesn’t legally prevent engineers from working more (most do).
- In the age of AI, code velocity is no longer about time spent but about fresh brains.
- And much more important, it is significantly more efficient to have an employee ten years in one place than two years in five places. What do you think explains the higher turnover in the US than in Europe?


Here’s the difference between the US and Europe: in US tech, productivity gains due to AI will lead to lower employment and higher expectations for the remaining employees. Salaries will remain the same, and any increases in profit will of course go straight to the capital-owning class. It will continue to be great for a vanishingly small number of people. By contrast, Europe’s “socialism” makes it well prepared to deliver the same level of productivity with AI using more people working fewer hours. And its “socialist” attitude toward where that value should go will result in an increased standard of living for everyone. You know, like the AI utopia we’ve all been promised.


Maybe having an endless runway isn’t such a good thing; when you don’t have real constraints at play, you can afford to waste more time on ego and drama.


As time goes on, tech seems to become increasingly detached from the lifestyles of normal people. AI friends, automated gift-giving, sunglasses you can talk to. Nobody wants this — it doesn’t meet people where they’re at and resolve real friction in their lives — but billions will be spent convincing us otherwise.

Maybe this is a side effect of tech workers themselves becoming more detached from the rest of the population. You are statistically unlikely to get a job at Google or Meta if you were not cultivated from day one as a high-achieving, box-ticking grinder. Anything that does not contribute to TC maximization is unimportant here. Beauty, human experiences, and other such intangibles are irrelevant in that worldview.

SV didn’t use to be this way; there was all manner of perspective and smarts, which led to genuine innovation. Now we are dominated mostly by a hybrid of hyper-efficient, paperclip-maximizing engineering and sociopathic MBA share-price optimization.


Interesting that this thread mostly consists of people sharing vibe-coded projects they themselves made, not people sharing other people’s projects that they’ve found useful. The latter would provide higher confidence that the project is actually impressive. It’s the IKEA effect for software.


Vibe coding is great for 'home-cooked software.' Lots of people are making tools that fill a particular need for themselves.

https://www.robinsloan.com/notes/home-cooked-app/

As for sharing a tool that someone else has made that's useful, I don't think most people are advertising that the tools they've built are vibe-coded, so it would be hard to know what to share.


How would people know if someone else's project was vibe-coded?

Sometimes that will be in the readme, but there's no reason it has to be. And most people aren't going to be checking anyways.


I think Claude Code itself is mostly vibe coded, if you consider that impressive. In general I’m not aware of any vibe-coded projects with a substantial amount of iteration put into them rather than being close to one-shot, which is why I shared mine. Any project someone can one-shot is also something anyone else can vibe code for themselves.


> I think Claude Code itself is mostly vibe coded

What definition of "vibe coding" are you using here? I seriously doubt Claude Code was made "mostly" (or even "partly") by telling a model what you want in a prompt and accepting the output when it looks functionally acceptable, without regard to how the code looks or works under the hood.

... Which is what "vibecoding" is.

Vibecoding != AI assisted/agentic coding


I don't know if or how you've been using AI for coding, but Claude Code is almost certainly being used to develop itself, and I have a hard time imagining their devs not having the best setup possible to let Claude Code iterate on itself with minimal oversight. Once a codebase is well structured and you supply a well-thought-out meta prompt with something like CLAUDE.md, there's no reason to check the code output/changes beyond a quick skim.


OK, again... "Using Claude Code" is not the same thing as vibecoding.


Her parents know someone at the Atlantic, and she needs publications to pad out her Harvard application :)


How uninspired are we that the best we can hope for is to be served popcorn by robots? I look forward to a future where the mentally ill in Los Angeles don’t live in tents by ditches; where blue collar migrant workers aren’t snatched off the streets or racially profiled; where human dignity takes precedence over money for a few people. Sorry but it is not possible to separate any Musk project, however benign, from the reality that he is a force for evil as far as human dignity and flourishing are concerned. Enjoy your rollerskating robots.


As far as IRB violations go, this seems pretty tame to me. Why get so mad at these researchers—who are acting in full transparency by disclosing the study—when nefarious actors (and indeed the platforms themselves!) are engaged in the same kind of manipulation? If we don’t allow it to be studied because it is creepy, then we will never develop any understanding of the very real manipulation that is constantly, quietly happening.


> Why get so mad at these researchers—who are acting in full transparency by disclosing the study—when nefarious actors (and indeed the platforms themselves!) are engaged in the same kind of manipulation?

I’m mad at both of them: both the nefarious actors and the researchers. If I could, I would stop both.

The bad news for the researchers (and their university, and their ethics review board) is that they cannot publish anonymously. Or at least they can’t get the reputational boost they were hoping for. So they had to come clean. It is not like they had an option where they kept it secret and still published their research somehow. Thus we can catch them and shame them for their unethical actions. Because that is absolutely what this is. If the ethics review board doesn’t understand that, then their heads need adjusting too.

I would love to stop the nefarious actors too! Absolutely. Unfortunately, they are not so easy to catch. That doesn’t mean that I’m not mad at them.

> If we don’t allow it to be studied because it is creepy

They can absolutely study it. They should recruit study participants and pay them. Get their agreement to participate in an experiment, but tell them a cover story about what the study is about. Then run the experiment on a private forum of their own making, and afterwards debrief the participants about what the experiment was really about and in what ways they were manipulated. That is the way to do this.


> If we don’t allow it to be studied because it is creepy, then we will never develop any understanding of the very real manipulation that is constantly, quietly happening.

What exactly do we gain from a study like this? It is beyond obvious that an LLM can be persuasive on the internet. If the researchers want to understand how forum participants are convinced of opposing positions, this is not the experimental design for it.

The antidote to manipulation is not a new research program to affirm that manipulation may in fact take place, but to take posts on these platforms with a large grain of salt, if not to disengage from them for political conversations entirely and have those conversations with people you know and in whose lives you have a stake instead.


"Bad behavior is going happen anyway so we should allow researchers to act badly in order to study it"

I don't have the time to fully explain why this is wrong if someone can't see it. But let me just mention that if the public is going to both trust and fund scientific research, they should be able to expect researchers to be good people. One researcher acting unethically is going to sabotage the ability of other researchers to recruit test subjects, etc.


> As far as IRB violations go, this seems pretty tame to me

Making this many people upset would be universally considered very bad and much more severe than any common "IRB violation"...

However, this isn't an IRB violation. The IRB seems to have explicitly given the researchers permission to do this, viewing the value of the research as worth the harm caused by the study. I suspect that the IRB and university may get in more hot water from this than the research team.

Maybe the IRB/university will try to shift responsibility to the team and claim that the team did not properly describe what they were doing, but I figure the IRB/university can't totally wash their hands clean.


I would not consider anything that only makes people upset anywhere close to the "very bad" category.


Yeah, the IRB is concerned about things like medical research. You are absolutely allowed to lie to psych research participants if you get approval, and merely lying to research subjects is considered a minor risk factor.


Unless you happen to be the most evil person on the planet, someone else is always behaving worse. It's meaningless to bring up.

Even the most benign form of this sort of study is wasting people's time. Bots clearly got detected and reported, which presumably means humans are busy expending effort dealing with this study, without agreeing to it or being compensated.

Sure, maybe this was small scale, but the next researchers may not care about other people wasting a few man-years of effort dealing with their research. It's better to nip this nonsense in the bud.


“How will we be able to learn anything about the human centipede if we don’t let researchers act in full transparency to study it?”


Bit of a motte and bailey. Stitching living people into a human centipede is blatantly, obviously wrong and has no scientific merit. Understanding the effects of AI-driven manipulation is, on the other hand, obviously incredibly relevant and important, and doing it with a small-scale study in a niche subreddit seems like a reasonable way to go about it.


At least part of the ethics problem here is that it'd be plausible to conduct this research without creating any new posts. There's a huge volume of generative-AI content on Reddit already, and a meaningfully large percentage of it follows predictable patterns: wildly divergent writing styles between posts, posting 24/7, posting multiple long-form comments in short time periods, usernames following a specific pattern, and dozens of other heuristics.

It's not difficult to find this content on the site. Creating more of it seems like a redundant step; it added little to the research while creating very obvious ethical issues.
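
For illustration only, here's a rough Python sketch of how heuristics like the ones above could be combined into a crude bot-likeness score. The comment fields, thresholds, and username pattern are assumptions invented for the sketch, not taken from Reddit's data model or from the study:

    import re
    from datetime import datetime, timezone

    # Purely illustrative scorer for the heuristics described above. Assumes each
    # comment is a dict like {"created_utc": <unix timestamp>, "body": <text>};
    # the field names and thresholds are made up for this sketch.
    def bot_likeness_score(username, comments):
        score = 0

        # Username following a formulaic Word-Word-1234 style pattern.
        if re.fullmatch(r"[A-Za-z]+[-_][A-Za-z]+[-_]?\d{1,4}", username):
            score += 1

        # Posting 24/7: activity spread across nearly every hour of the day.
        hours = {datetime.fromtimestamp(c["created_utc"], tz=timezone.utc).hour
                 for c in comments}
        if len(hours) >= 20:
            score += 1

        # Multiple long-form comments posted within a few minutes of each other.
        long_posts = sorted(c["created_utc"] for c in comments if len(c["body"]) > 1500)
        if any(b - a < 300 for a, b in zip(long_posts, long_posts[1:])):
            score += 1

        # Wildly divergent writing styles, crudely proxied by average sentence length.
        avg_lens = []
        for c in comments:
            sentences = [s for s in re.split(r"[.!?]+", c["body"]) if s.strip()]
            if sentences:
                avg_lens.append(sum(len(s.split()) for s in sentences) / len(sentences))
        if len(avg_lens) >= 5 and min(avg_lens) > 0 and max(avg_lens) > 3 * min(avg_lens):
            score += 1

        return score  # higher = more bot-like under these crude rules

Nothing about this is rigorous; it's just meant to show that the patterns in question are cheap to check against existing content, which is the point being made.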


That would be a very difficult study to design. How do you know with 100% certainty that any given post is AI-generated? If the account is tagged as a bot, then you aren’t measuring the effect of manipulation from comments presented as real. If you are trying to detect whether they are AI-generated, then any noise in your heuristic or model for detecting AI-generated comments is baked into your results.


The study as conducted also suffers those weaknesses. The authors didn’t make any meaningful attempt to determine if their marks were human or bots.

Given the prevalence of bots on Reddit, this seriously undermines the study’s findings.


> At least part of the ethics problem here is that it'd be plausible to conduct this research without creating any new posts.

This is a good point. Arguably, though, if you want people to take the next Cambridge Analytica or similar seriously from the very beginning, we need an arsenal of academic studies with results that are clearly applicable and very hard to ignore or dispute. So I can see the appeal of producing a paper abstract that's specifically "X% of people shift their opinions with minor exposure to targeted psyops LLMs".


Intentionally manipulating opinions is also obviously wrong and has no scientific merit. You don't need a study to know that an LLM can successfully manipulate people. And for "understanding the effects" it doesn't matter whether they spam AI-generated content or analyse existing comments written by other users.


It’s the same logic. You have just decided that you accept it in some factual circumstances and not others. If you bothered to reflect on that, and had any intellectual humility, it might give you pause.


Pretty cool that the EO cites a stat from the National Assessment of Educational Progress as evidence of the department’s failings, without acknowledging of course that the NAEP is administered by the Department of Education. I’m sure the problem will vanish once we no longer have the metrics to track it!

